Abstract: Disclosed is a system 100 to conduct a survey of a road that includes a plurality of imaging units (104) disposed on a vehicle moving on the road, at least one positioning unit (106) disposed on the vehicle, a user device (102) configured to provide data corresponding to a geofence region, and processing circuitry (112) coupled to each other. The processing circuitry (112) is configured to determine a start point and an end point of the geofence region and identify one or more anomalies present within the geofence region via an AI/ML technique. The processing circuitry (112) is configured to synchronize the framewise time of the plurality of imaging units with the real time geographical location of the vehicle via a Network Time Protocol (NTP) technique. The processing circuitry (112) is configured to generate a processed video wherein the location of each anomaly of the one or more anomalies is indicated. FIG. 1 is the reference figure.
DESC:TECHNICAL FIELD
The present disclosure relates to road monitoring and, more particularly, to a system and a method for surveying a road.
BACKGROUND
Roads play a critical role in facilitating transportation and commerce, underscoring the importance of maintaining them in optimal condition to ensure safety, reduce vehicle wear and tear, and enhance overall efficiency. Regular assessment of road conditions is indispensable for identifying repair needs, streamlining traffic flow, and bolstering road safety. However, traditional methods of surveying road conditions predominantly rely on manual inspections or specialized vehicles outfitted with sensors and cameras, presenting several inherent limitations.
Manual inspections demand significant human resources and time, rendering the surveying of extensive road networks economically impractical. Furthermore, the subjective nature of human observation introduces biases, leading to inconsistencies and inaccuracies in data collection and analysis. Safety concerns also arise from manual or large vehicle-based surveying, posing risks to both surveyors and road users, particularly in challenging-to-access areas such as steep slopes or congested intersections. Additionally, conventional methods often struggle to achieve comprehensive coverage of road networks, especially in remote or inaccessible regions, further complicating maintenance efforts.
The specialized vehicles present in the current state of the art for surveying roads, including pavement assessment vehicles (PAVs) and/or network survey vehicles (NSVs), are limited in number and are costly to mobilize. Further, the aforementioned specialized vehicles are designed for certain types of roads, such as highways and urban roads, and are not suitable to be mobilized for surveying rural roads.
Therefore, there is a need for an efficient, cost-effective, automated, and accurate system and method for road surveying that is capable of solving the aforementioned problems of the conventional road surveying and/or inspection techniques.
SUMMARY
In view of the foregoing, a system to conduct a survey of a road is disclosed. The system includes a plurality of imaging units, at least one positioning unit, a user device, and processing circuitry coupled to each other. The plurality of imaging units are disposed on a vehicle moving on the road and configured to capture a plurality of images and videos of the road. The at least one positioning unit is disposed on the vehicle and configured to detect a real time geographical location of the vehicle. The user device is configured to enable a user to provide data corresponding to a geofence region such that the geofence region represents a region of right of way (ROW) of the road to be surveyed. The processing circuitry is configured to determine a start point and an end point of the geofence region when the vehicle has reached the geofence region, based on the real time geographical location of the vehicle and the data. The processing circuitry is further configured to identify one or more anomalies present within the geofence region from the plurality of images and videos by way of an AI/ML technique related to image pattern design detection. The processing circuitry is further configured to synchronize the framewise time of the plurality of imaging units with the real time geographical location of the vehicle by way of a Network Time Protocol (NTP) technique. The processing circuitry is further configured to generate a processed video wherein the location of each anomaly of the one or more anomalies is indicated.
In some embodiments of the present disclosure, the processing circuitry is further configured to monitor the real time geographical location of the vehicle within the geofence region, enable the plurality of imaging units to simultaneously capture the plurality of images and videos of the geofence region from the start point to the end point of the geofence region when the vehicle has reached the start point of the geofence region, and enable the plurality of imaging units to be in a standby mode when the vehicle has not reached the start point of the geofence region.
In some embodiments of the present disclosure, the at least one positioning unit comprises at least one of a global positioning system (GPS), a global information system (GIS), a distance measurement instrument (DMI), or an inertial measurement unit (IMU).
In some embodiments of the present disclosure, the processing circuitry is further configured to synchronize a time of the plurality of imaging units by way of the NTP to stitch a plurality of frames of a plurality of videos from the plurality of images and videos.
In some embodiments of the present disclosure, the processing circuitry is further configured to determine dimensions of the one or more anomalies by way of the AI/ML technique.
In some embodiments of the present disclosure, the processing circuitry is further configured to de-duplicate the one or more anomalies that are present in more than one frame of the plurality of frames.
In some embodiments of the present disclosure, the processing circuitry is further configured to transmit the processed video to the user device.
In another aspect of the present disclosure, a method for conducting a survey of a road is disclosed. The method includes a step of receiving, by way of a user device, data corresponding to a geofence region, wherein the geofence region represents a region of right of way (ROW) of the road to be surveyed. The method further includes a step of determining a start point and an end point of the geofence region when a vehicle has reached the geofence region, based on a real time geographical location of the vehicle and the data corresponding to the geofence region, by way of processing circuitry that is coupled to the user device and at least one positioning unit configured to detect the real time geographical location of the vehicle. The method further includes a step of monitoring, by way of the processing circuitry, the real time geographical location of the vehicle within the geofence region, along with enabling a plurality of imaging units (104) to simultaneously capture a plurality of images and videos of the geofence region from the start point to the end point of the geofence region when the vehicle has reached the start point of the geofence region, and enabling the plurality of imaging units (104) to be in a standby mode when the vehicle has not reached the start point of the geofence region. The method further includes a step of identifying, by way of the processing circuitry, one or more anomalies present within the geofence region from the plurality of images and videos via an AI/ML technique related to image pattern design detection. The method further includes a step of synchronizing, by way of the processing circuitry, the framewise time of the plurality of imaging units with the real time geographical location of the vehicle via a Network Time Protocol (NTP) technique.
The method further includes a step of synchronizing, by way of the processing circuitry, a time of the plurality of imaging units via the NTP for stitching a plurality of frames of the plurality of videos. The method further includes a step of de-duplicating, by way of the processing circuitry, the one or more anomalies that are present in more than one frame of the plurality of frames. The method further includes a step of determining dimensions of the one or more anomalies via the AI/ML technique. The method further includes a step of generating, by way of the processing circuitry, a processed video wherein the location of each anomaly of the one or more anomalies is indicated.
In some embodiments of the present disclosure, the method further includes a step of transmitting the processed video to the user device by way of the processing circuitry.
BRIEF DESCRIPTION OF DRAWINGS
The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates a block diagram of a system to conduct a survey of a road, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a block diagram of a data processing apparatus of FIG. 1, in accordance with an exemplary embodiment of the present disclosure; and
FIGS. 3A-3B illustrate a flowchart depicting a method for conducting a survey of a road, in accordance with an embodiment of the present disclosure.
To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures.
DETAILED DESCRIPTION
Various embodiments of the present disclosure provide a system and a method to conduct a survey of a road. The following description provides specific details of certain embodiments of the disclosure illustrated in the drawings to provide a thorough understanding of those embodiments. It should be recognized, however, that the present disclosure can be reflected in additional embodiments and the disclosure may be practiced without some of the details in the following description.
The various embodiments including the example embodiments are now described more fully with reference to the accompanying drawings, in which the various embodiments of the disclosure are shown. The disclosure may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete, and fully conveys the scope of the disclosure to those skilled in the art. In the drawings, the sizes of components may be exaggerated for clarity.
It is understood that when an element or layer is referred to as being “on,” “connected to,” or “coupled to” another element or layer, it can be directly on, connected to, or coupled to the other element or layer, or intervening elements or layers may be present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Embodiments described herein refer to plan views and/or cross-sectional views by way of ideal schematic views. Accordingly, the views may be modified depending on simplistic assembling or manufacturing technologies and/or tolerances. Therefore, example embodiments are not limited to those shown in the views but include modifications in configurations formed on the basis of assembling processes. Therefore, regions exemplified in the figures have schematic properties, and shapes of regions shown in the figures exemplify specific shapes or regions of elements and do not limit the various embodiments including the example embodiments.
As mentioned above, there remains a need to efficiently conduct a survey of a road. Accordingly, the present disclosure provides a system and a method for conducting the survey of the road based on images captured by the system in real time.
FIG. 1 illustrates a block diagram of a system 100 to conduct the survey of the road (hereinafter referred to and designated as “the system 100”), in accordance with an embodiment of the present disclosure. Specifically, the system 100 may be configured to facilitate a user to initiate a road survey. Further, the system 100 may be configured to detect, in real time, one or more anomalies associated with a road while a vehicle moves on the road. The system 100 may be configured to detect the one or more anomalies of the road even at a location where the Global Positioning System (GPS) is not available or is jammed.
The system 100 may include a user device 102, a plurality of imaging units 104 (hereinafter “the imaging units 104”), at least one positioning unit 106 (hereinafter “the positioning unit 106”), and a data processing apparatus 110. The user device 102, the imaging units 104, the positioning unit 106, and the data processing apparatus 110 may be communicatively coupled to each other by way of a communication network 108.
The user device 102 may be configured to facilitate a user to input data, receive data, and/or transmit data within the system 100. Specifically, the user device 102 may be configured to allow the user to provide data corresponding to a geofence region. The geofence region may represent a region of right of way (ROW) of the road to be surveyed. Particularly, the right of way (ROW) is the width of land acquired for the road, along its alignment. In some embodiments of the present disclosure, the ROW may include a main carriageway and an off-carriageway. The off-carriageway may further include a service lane, shoulders, kerbs, and margins.
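By way of a non-limiting illustrative sketch, the data corresponding to the geofence region described above may be represented as a simple record with a containment test. The field names, the latitude/longitude bounding-box check, and the example coordinates below are assumptions for illustration only and do not form part of the disclosure:

```python
from dataclasses import dataclass


@dataclass
class GeofenceRegion:
    """Hypothetical record for the user-supplied geofence data."""
    start_lat: float    # latitude of the start point of the ROW segment
    start_lon: float    # longitude of the start point
    end_lat: float      # latitude of the end point
    end_lon: float      # longitude of the end point
    row_width_m: float  # acquired right-of-way width, in metres

    def contains(self, lat: float, lon: float) -> bool:
        """Crude bounding-box test for whether a position lies in the region."""
        lat_lo, lat_hi = sorted((self.start_lat, self.end_lat))
        lon_lo, lon_hi = sorted((self.start_lon, self.end_lon))
        return lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi
```

In practice, a corridor buffered around the road alignment would replace the bounding box; the sketch only fixes the shape of the data the user device 102 might transmit.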
Examples of the user device 102 may include, but are not limited to, a smart phone, a desktop, a notebook, a laptop, a handheld computer, a touch sensitive device, a computing device and/or a smart watch. Aspects of the present disclosure are intended to include or otherwise cover any type of the user device 102 including known, related art, and/or later developed technologies, without deviating from the scope of the present disclosure. Although FIG. 1 illustrates that the system 100 includes a single user device (i.e., the user device 102), it will be apparent to a person skilled in the art that the scope of the present disclosure is not limited to it. In various other aspects, the system 100 may include multiple user devices without deviating from the scope of the present disclosure. In such a scenario, each user device is configured to perform one or more operations in a manner similar to the operations of the user device 102 as described herein. As illustrated, the user device 102 may include a user interface 116, a processing unit 118, a user device memory 120, and a communication interface 122.
In some embodiments of the present disclosure, the user interface 116 may include an input interface for receiving one or more inputs from the user. Examples of the input interface may include, but are not limited to, a touch interface, a mouse, a keyboard, a motion recognition unit, a gesture recognition unit, a voice recognition unit, or the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the input interface including known, related art, and/or later developed technologies. The user interface 116 may further include an output interface for displaying (or presenting) one or more outputs to the user. Examples of the output interface may include, but are not limited to, a digital display, an analog display, a touch screen display, a graphical user interface, a website, a webpage, a keyboard, a mouse, a light pen, an appearance of a desktop, and/or illuminated characters. Aspects of the present disclosure are intended to include and/or otherwise cover any type of the output interface including known and/or related, or later developed technologies.
The processing unit 118 may include suitable logic, instructions, circuitry, interfaces, and/or codes for executing various operations, such as the operations associated with various operations of the user device 102, and the like. Further, the processing unit 118 may be configured to control one or more operations executed by the user device 102 in response to the one or more inputs received from the user through the user interface 116. Examples of the processing unit 118 may include, but are not limited to, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a field-programmable gate array (FPGA), a Programmable Logic Control unit (PLC), and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of processing unit including known, related art, and/or later developed processing units.
The user device memory 120 may be configured to store the logic, instructions, circuitry, interfaces, and/or codes of the processing unit 118, data associated with the user device 102, and/or data associated with the system 100. In some embodiments of the present disclosure, the user device memory 120 may be configured to store the data corresponding to the geofence region. Examples of the user device memory 120 may include, but are not limited to, a Read-Only Memory (ROM), a Random-Access Memory (RAM), a flash memory, a removable storage drive, a hard disk drive (HDD), a solid-state memory, a magnetic storage drive, a Programmable Read Only Memory (PROM), an Erasable PROM (EPROM), and/or an Electrically EPROM (EEPROM). Aspects of the present disclosure are intended to include or otherwise cover any type of device memory including known, related art, and/or later developed memories.
The communication interface 122 may be configured to enable the user device 102 to communicate with the data processing apparatus 110 and other components of the system 100 over the communication network 108. Examples of the communication interface 122 may include, but are not limited to, a modem, a network interface such as an Ethernet Card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a Radio Frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a Coder Decoder (CODEC) Chipset, a Subscriber Identity Module (SIM) card, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the communication interface 122 may include any device and/or apparatus capable of providing wireless and/or wired communications between the user device 102 and the data processing apparatus 110.
The imaging units 104 may be disposed on a vehicle (not shown). The imaging units 104 may be configured to capture a plurality of images and videos. Specifically, the imaging units 104 may be configured to capture the plurality of images and videos when the vehicle has reached the geofence region. Examples of the imaging units 104 may include, but are not limited to, digital cameras, smartphone cameras, webcams, thermal imaging cameras, Lidar, and/or scanners. Aspects of the present disclosure are intended to include or otherwise cover any type of the imaging units 104 including known, related art, and/or later developed technologies, without deviating from the scope of the present disclosure.
The positioning unit 106 may be disposed on the vehicle. The positioning unit 106 may be configured to detect the real time geographical location of the vehicle. Specifically, the positioning unit 106 may be configured to detect the real time geographical location of the vehicle even when the Global Positioning System (GPS) is not available or is jammed, by way of a global information system (GIS), distance measurement instruments (DMIs), inertial measurement units (IMUs), or any combination thereof. Examples of the positioning unit 106 may include, but are not limited to, a Global Positioning System, a Global Information System, Distance Measurement Instruments, Inertial Measurement Units, Satellite-Based Augmentation Systems, Lidar-based positioning, and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the positioning unit 106 including known, related art, and/or later developed technologies, without deviating from the scope of the present disclosure.
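One illustrative way the positioning unit 106 might estimate a position without GPS is to dead-reckon from the last known fix using a DMI distance reading and an IMU heading. The following sketch uses a flat-earth approximation; the function name, units, and approximation are assumptions for illustration only:

```python
import math


def dead_reckon(last_fix, distance_m, heading_deg):
    """Estimate a new (lat, lon) from the last known GPS fix using a
    DMI-measured travelled distance and an IMU heading (degrees
    clockwise from north). Flat-earth approximation, illustrative only."""
    lat, lon = last_fix
    meters_per_deg_lat = 111_320.0  # approximate metres per degree of latitude
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))
    dlat = distance_m * math.cos(math.radians(heading_deg)) / meters_per_deg_lat
    dlon = distance_m * math.sin(math.radians(heading_deg)) / meters_per_deg_lon
    return lat + dlat, lon + dlon
```

A production system would fuse these sensors (e.g., with a Kalman filter) rather than chain single readings; the sketch only conveys the fallback principle.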
The data processing apparatus 110 may be a network of computers, a framework, or a combination thereof, that may provide a generalized approach to create a server implementation. In some embodiments of the present disclosure, the data processing apparatus 110 may be a server. Examples of the data processing apparatus 110 may include, but are not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The data processing apparatus 110 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any other web-application framework. The data processing apparatus 110 may include one or more processing circuitries of which processing circuitry 112 is shown and a database 114.
The processing circuitry 112 may be configured to execute various operations associated with the system 100. The processing circuitry 112 may be coupled to the database 114 and configured to execute the one or more operations associated with the system 100 by communicating one or more commands and/or instructions over the communication network 108. Examples of the processing circuitry 112 may include, but are not limited to, an ASIC processor, a RISC processor, a CISC processor, a FPGA, and the like. Embodiments of the present disclosure are intended to include and/or otherwise cover any type of the processing circuitry 112 including known, related art, and/or later developed technologies.
The database 114 may be configured to store the logic, instructions, circuitry, interfaces, and/or codes of the processing circuitry 112 for executing various operations. The database 114 may be further configured to store therein, data associated with the user device 102. It will be apparent to a person having ordinary skill in the art that the database 114 may be configured to store various types of data associated with the system 100, without deviating from the scope of the present disclosure. Examples of the database 114 may include but are not limited to, a Relational database, a NoSQL database, a Cloud database, an Object-oriented database, and the like. Further, the database 114 may include associated memories that may include, but is not limited to, a ROM, a RAM, a flash memory, a removable storage drive, a HDD, a solid-state memory, a magnetic storage drive, a PROM, an EPROM, and/or an EEPROM. Embodiments of the present disclosure are intended to include or otherwise cover any type of the database 114 including known, related art, and/or later developed technologies.
The communication network 108 may include suitable logic, circuitry, and interfaces that may be configured to provide a plurality of network ports and a plurality of communication channels for transmission and reception of data related to operations of various entities (such as the user device 102 and the data processing apparatus 110) of the system 100. Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of the communication data. For example, the virtual address may be an Internet Protocol Version 4 (IPv4) address (or an IPv6 address) and the physical address may be a Media Access Control (MAC) address. The communication network 108 may be associated with an application layer for implementation of communication protocols based on one or more communication requests from the user device 102 and the data processing apparatus 110. The communication data may be transmitted or received via the communication protocols. Examples of the communication protocols may include, but are not limited to, Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS) protocol, Common Management Interface Protocol (CMIP), Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof.
In an aspect of the present disclosure, the communication data may be transmitted or received via at least one communication channel of a plurality of communication channels in the communication network 108. The communication channels may include, but are not limited to, a wireless channel, a wired channel, or a combination thereof. The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, and a combination thereof. Aspects of the present disclosure are intended to include or otherwise cover any type of communication channel, including known, related art, and/or later developed technologies.
In operation, when the system 100 receives the data corresponding to the geofence region by way of the user device 102, the system 100 starts to monitor the real time geographical location of the vehicle by way of the positioning unit 106. Further, the system 100, by way of the processing circuitry 112, determines a start point and an end point of the geofence region when the vehicle has reached the geofence region, based on the real time geographical location of the vehicle and the data corresponding to the geofence region. Further, when the system 100 determines, by way of the processing circuitry 112, that the vehicle has reached the start point of the geofence region, the system 100 starts to monitor the real time geographical location of the vehicle within the geofence region and enables the plurality of imaging units 104 to capture the plurality of images and videos of the geofence region. However, when the system 100 determines that the vehicle has not reached the geofence region, the system 100 enables the plurality of imaging units 104 to be in a standby mode. Further, the system 100, by way of the processing circuitry 112, identifies one or more anomalies present within the geofence region from the plurality of images and videos via an AI/ML technique related to image pattern design detection. Further, the system 100, by way of the processing circuitry 112, synchronizes the framewise time of the plurality of imaging units 104 with the real time geographical location of the vehicle via a Network Time Protocol (NTP) technique. Further, the system 100, by way of the processing circuitry 112, synchronizes the time of the plurality of imaging units 104 by way of the NTP to stitch a plurality of frames of a plurality of videos from the plurality of images and videos.
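The capture/standby behaviour described in operation above can be sketched as a small state machine: the imaging units remain on standby until the vehicle enters the geofence region, capture while inside it, and return to standby on exit. The state names and the boolean geofence test are illustrative assumptions, not part of the disclosure:

```python
def update_capture_state(state: str, in_geofence: bool) -> str:
    """Return the next imaging-unit mode given the current mode and
    whether the vehicle's real-time position lies inside the geofence."""
    if state == "STANDBY" and in_geofence:
        return "CAPTURING"   # vehicle has reached the start point: begin capture
    if state == "CAPTURING" and not in_geofence:
        return "STANDBY"     # vehicle has left the region: stop capturing
    return state             # otherwise the mode is unchanged
```

The positioning unit would feed this function each time a new location fix arrives, with `in_geofence` computed from the user-supplied geofence data.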
Further, the system 100, by way of the processing circuitry 112, de-duplicates the one or more anomalies that are present in more than one frame of the plurality of frames and generates a processed video wherein the location of each anomaly of the one or more anomalies is indicated on the geofence region.
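The de-duplication step can be sketched as follows: detections of the same anomaly class that fall within an assumed minimum separation on the road surface are collapsed into a single anomaly, so an anomaly visible in several overlapping frames is counted once. The distance threshold and the road-relative coordinate layout are illustrative assumptions only:

```python
def deduplicate(detections, min_separation_m=2.0):
    """Collapse duplicate detections of the same anomaly across frames.

    Each detection is a (label, x_m, y_m) tuple in road-relative metres;
    two detections with the same label closer than min_separation_m (an
    assumed threshold) are treated as one anomaly."""
    unique = []
    for label, x, y in detections:
        for u_label, u_x, u_y in unique:
            dist = ((x - u_x) ** 2 + (y - u_y) ** 2) ** 0.5
            if label == u_label and dist < min_separation_m:
                break  # same anomaly already recorded from another frame
        else:
            unique.append((label, x, y))
    return unique
```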
FIG. 2 illustrates a block diagram of the data processing apparatus 110 of FIG. 1, in accordance with an exemplary embodiment of the present disclosure. As discussed, the data processing apparatus 110 includes the processing circuitry 112 and the database 114. Further, the data processing apparatus 110 may include a network interface 200 and an input/output (I/O) interface 202. The processing circuitry 112, the database 114, the network interface 200, and the I/O interface 202 may communicate with each other by way of a first communication bus 204. In some embodiments of the present disclosure, the processing circuitry 112 may include a registration engine 206, a data collection engine 208, a data processing engine 210, a time synchronization engine 212, and a display engine 214. The registration engine 206, the data collection engine 208, the data processing engine 210, the time synchronization engine 212, and the display engine 214 may communicate with each other by way of a second communication bus 216. It will be apparent to a person having ordinary skill in the art that the data processing apparatus 110 is for illustrative purposes and not limited to any specific combination of hardware circuitry and/or software.
The network interface 200 may include suitable logic, circuitry, and interfaces that may be configured to establish and enable a communication between the data processing apparatus 110 and different components of the system 100 (e.g., the user device 102, the imaging units 104, the positioning unit 106), via the communication network 108. The network interface 200 may be implemented by use of various known technologies to support wired or wireless communication of the data processing apparatus 110 with the communication network 108. The network interface 200 may include, but is not limited to, an antenna, a RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a SIM card, and a local buffer circuit.
The I/O interface 202 may include suitable logic, circuitry, interfaces, and/or code that may be configured to receive inputs and transmit server outputs (i.e., one or more outputs generated by the data processing apparatus 110) via a plurality of data ports in the data processing apparatus 110. The I/O interface 202 may include various input and output data ports for different I/O devices. Examples of such I/O devices may include, but are not limited to, a touch screen, a keyboard, a mouse, a joystick, a projector, an audio output, a microphone, an image-capture device, a liquid crystal display (LCD) screen, and/or a speaker.
The processing circuitry 112 may be configured to perform one or more operations associated with the system 100 by way of the registration engine 206, the data collection engine 208, the data processing engine 210, the time synchronization engine 212, and the display engine 214. In some embodiments of the present disclosure, the registration engine 206 may be configured to enable a user to register into the system 100 by providing registration data through a registration menu (not shown) of the user device 102. The registration data may include, but is not limited to, a name, a demographics, a contact number, an address, and the like. Embodiments of the present disclosure are intended to include or otherwise cover any type of the registration data. In some embodiments, the registration engine 206 may be further configured to enable the user to create a login identifier and a password that may enable the user to subsequently login into the system 100. The registration engine 206 may be configured to store the registration data associated with the user, the login and the password associated with the user in a Look Up Table (LUT) (not shown) provided in the database 114.
The data collection engine 208 may be configured to enable a user to provide data corresponding to a geofence region by way of the user device 102, wherein the geofence region represents a region of the road to be surveyed. The data corresponding to the geofence region may include, but is not limited to, landmarks corresponding to the road to be surveyed, co-ordinates of the initial and final locations of the road to be surveyed, a length of the segment of the road to be surveyed, and the like. Aspects of the present disclosure are intended to include or otherwise cover any data related to the geofence region. The data collection engine 208 may further be configured to transfer the data to the data processing engine 210.
The data processing engine 210 may be configured to monitor the real time geographical location of the vehicle such that, when the vehicle has reached the geofence region, the plurality of imaging units 104 are enabled to capture the plurality of images and videos of the geofence region, and when the vehicle has not reached the geofence region, the plurality of imaging units 104 are enabled to be on a standby mode. The data processing engine 210 may further be configured to determine a start point and an end point of the geofence region, when the vehicle has reached the geofence region, based on the plurality of images and videos, the real time geographical location of the vehicle, and the data corresponding to the geofence region. Further, the data processing engine 210 may be configured to identify one or more anomalies present on the geofence region from the plurality of images and videos. Specifically, the data processing engine 210 may be configured to identify the one or more anomalies by way of an AI/ML technique related to image pattern design detection. The data processing engine 210 may further be configured to de-duplicate the one or more anomalies that are present in more than one frame of the plurality of frames of the plurality of videos. Specifically, the data processing engine 210 may be configured to de-duplicate the one or more anomalies to avoid duplicate counting of the one or more anomalies that are present in more than one frame of the plurality of frames. The data processing engine 210 may further be configured to determine dimensions of the one or more anomalies by way of the AI/ML technique. The data processing engine 210 may be configured to receive, from the time synchronization engine 212, the synchronized framewise time of the plurality of imaging units 104 with the real time geographical location of the vehicle.
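The capture/standby decision described above may, for example, be sketched as a distance check between the vehicle's real time position and the geofence entry point; the entry radius of 50 metres and the function names below are illustrative assumptions, not part of the disclosure.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(a))

def camera_mode(vehicle_pos, geofence_start, radius_m=50.0):
    """Return 'capture' once the vehicle is within the assumed geofence entry
    radius; otherwise keep the imaging units 104 in 'standby'."""
    if haversine_m(vehicle_pos, geofence_start) <= radius_m:
        return "capture"
    return "standby"
```

A real implementation would poll the positioning unit 106 periodically and feed each fix through such a check.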
The data processing engine 210 may further be configured to generate a processed video wherein the location of each anomaly of the one or more anomalies is indicated. The data processing engine 210 may further be configured to transmit the processed video to the display engine 214.
In some embodiments of the present disclosure, the data processing engine 210 may be configured to generate a dashboard and store the dashboard in the database 114. The database 114 may be configured to store the dashboard such that the database 114 maintains records that may be associated with road surveys.
In some embodiments of the present disclosure, the data processing engine 210 may be configured to identify black spots. Specifically, the data processing engine 210 may be configured to identify the black spots by way of one or more ML techniques and one or more AI techniques. The term “black spot” as used herein refers to a region of the road where accidents occur frequently or repeatedly.
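As a minimal sketch of the black-spot idea, historical accident co-ordinates may be binned into a coarse grid and cells with recurring accidents flagged; the grid size, threshold, and function name below are assumptions standing in for the disclosure's AI/ML techniques.

```python
from collections import Counter

def find_black_spots(accidents, cell_deg=0.001, min_count=3):
    """Bin accident (lat, lon) records into a coarse grid (~100 m cells) and
    flag cells where accidents recur; a production system would instead apply
    the ML/AI clustering techniques referenced in the disclosure."""
    cells = Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in accidents
    )
    return [cell for cell, n in cells.items() if n >= min_count]

# Three accidents at one location and one elsewhere yield one black spot
spots = find_black_spots([(10.0001, 20.0001)] * 3 + [(11.0, 21.0)])
```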
The time synchronization engine 212 may be configured to synchronize the framewise time of the plurality of imaging units 104 with the real time geographical location of the vehicle by way of a Network Time Protocol (NTP) technique. In other words, the time synchronization engine 212 may be configured to indicate the real time location of each anomaly in the processed video by associating coordinates with each anomaly present in each frame of the plurality of frames by way of the positioning unit 106 and the NTP technique. Specifically, the time synchronization engine 212 may be configured to synchronize the time of the plurality of imaging units 104 with the positioning unit 106 by way of the NTP technique, thereby synchronizing each frame of the plurality of videos with the corresponding coordinates of the segment of the geofence region. The time synchronization engine 212 thereby facilitates determining the exact distance of each anomaly from the start point and/or the end point. Furthermore, the time synchronization engine 212 may be configured to synchronize the time of the plurality of imaging units 104 by way of the NTP technique to stitch the plurality of frames of the plurality of videos. The time synchronization engine 212 may provide the synchronized framewise time of the plurality of imaging units 104 with the real time geographical location of the vehicle to the data processing engine 210.
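Once camera clocks and the positioning unit share an NTP-disciplined timebase, associating a co-ordinate with each frame reduces to interpolating between GPS fixes on that common timeline. The sketch below assumes this shared timebase and illustrative data structures; it is not the disclosure's implementation.

```python
import bisect

def locate_frames(frame_times, gps_fixes):
    """Assign an interpolated (lat, lon) to each frame timestamp.

    frame_times: NTP-synchronized frame timestamps (seconds), sorted.
    gps_fixes:   time-ordered list of (t, lat, lon) fixes from the
                 positioning unit, on the same timebase.
    """
    times = [t for t, _, _ in gps_fixes]
    located = []
    for ft in frame_times:
        i = bisect.bisect_left(times, ft)
        if i == 0:
            located.append(gps_fixes[0][1:])          # before first fix: clamp
        elif i == len(times):
            located.append(gps_fixes[-1][1:])         # after last fix: clamp
        else:
            (t0, la0, lo0), (t1, la1, lo1) = gps_fixes[i - 1], gps_fixes[i]
            w = (ft - t0) / (t1 - t0)                 # linear interpolation
            located.append((la0 + w * (la1 - la0), lo0 + w * (lo1 - lo0)))
    return located
```

A frame midway between two fixes thus receives the midpoint co-ordinate, which is what allows the distance of each anomaly from the start point to be reported framewise.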
The display engine 214 may be configured to receive the processed video from the data processing engine 210. Further, the display engine 214 may be configured to display the processed video on the user device 102. In some embodiments of the present disclosure, the display engine 214 may be configured to retrieve the dashboard from the database 114 to display the dashboard to the user. The dashboard may include an Excel report that may include co-ordinates of the anomalies, images of the anomalies, and types of the anomalies. The dashboard may further include the processed video and facilitate the user to play the processed video.
FIGS. 3A-3B illustrate a flowchart depicting a method 300 for conducting a survey of a road, in accordance with an embodiment of the present disclosure. The method 300 may include the following steps for conducting the survey of the road.
At step 302, the system 100 may be configured to receive data corresponding to a geofence region by way of a user device 102. The geofence region may represent a region of the right of way (ROW) of the road to be surveyed.
At step 304, the system 100 may be configured to determine a start point and an end point of the geofence region, when the vehicle has reached the geofence region, based on the plurality of images and videos, the real time geographical location of the vehicle, and the data corresponding to the geofence region.
At step 306, the system 100 may be configured to monitor the real time geographical location of the vehicle within the geofence region, and may enable the plurality of imaging units 104 to simultaneously capture a plurality of images and videos of the geofence region when the vehicle has reached the geofence region; when the vehicle has not reached the geofence region, the system 100 may enable the plurality of imaging units 104 to be on a standby mode.
At step 308, the system 100 may be configured to identify one or more anomalies present on the geofence region from the plurality of images and videos. Specifically, the system 100 may be configured to identify the one or more anomalies by way of an AI/ML technique related to image pattern design detection.
At step 310, the system 100 may be configured to synchronize the framewise time of the plurality of imaging units 104 with the real time geographical location of the vehicle by way of a Network Time Protocol (NTP) technique. Specifically, the system 100 may be configured to synchronize the time of the plurality of imaging units 104 with the positioning unit 106 by way of the NTP technique, thereby synchronizing each frame of the plurality of frames of the plurality of videos with the corresponding coordinates of the segment of the geofence region.
At step 312, the system 100 may be configured to synchronize a time of the plurality of imaging units 104 by way of the NTP technique. Specifically, the system 100 may be configured to synchronize the time of the plurality of imaging units 104 for stitching a plurality of frames of the plurality of videos.
At step 314, the system 100 may be configured to de-duplicate the one or more anomalies that are present in more than one frame of the plurality of frames. Specifically, the system 100 may be configured to de-duplicate the one or more anomalies to avoid duplicate counting of the one or more anomalies that are present in more than one frame of the plurality of frames.
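One plausible way to realize the de-duplication step, once each detection has been georeferenced, is to merge same-type detections that lie closer together along the road than some minimum separation; the 5-metre separation and the detection tuple layout below are assumptions for illustration only.

```python
def dedupe_anomalies(detections, min_sep_m=5.0):
    """Collapse repeated sightings of the same anomaly across frames.

    Each detection is (frame_idx, anomaly_type, distance_along_road_m).
    Two detections of the same type closer than min_sep_m are assumed to be
    the same physical anomaly and counted once.
    """
    kept = []
    for frame, kind, pos in sorted(detections, key=lambda d: d[2]):
        if any(k == kind and abs(p - pos) < min_sep_m for _, k, p in kept):
            continue  # already counted from an earlier frame
        kept.append((frame, kind, pos))
    return kept

# A pothole seen in two consecutive frames plus a distinct pothole -> 2 anomalies
unique = dedupe_anomalies([(1, "pothole", 100.0), (2, "pothole", 101.5),
                           (5, "pothole", 140.0)])
```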
At step 316, the system 100 may be configured to determine dimensions of the one or more anomalies by way of the AI/ML technique.
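The dimension determination can be illustrated with a simple pixel-to-metre conversion using a ground sampling distance (GSD); the GSD value and function name here are illustrative assumptions standing in for the disclosure's AI/ML estimate.

```python
import math

def anomaly_dimensions_m(bbox_px, gsd_m_per_px):
    """Convert an anomaly's pixel bounding box (w_px, h_px) into approximate
    real-world width and height in metres, assuming a known ground sampling
    distance for the imaging unit at its mounting height."""
    w_px, h_px = bbox_px
    return (w_px * gsd_m_per_px, h_px * gsd_m_per_px)

# A 200x100 px detection at an assumed 5 mm/pixel GSD is ~1.0 m x 0.5 m
width_m, height_m = anomaly_dimensions_m((200, 100), 0.005)
```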
At step 318, the system 100 may be configured to generate a processed video. The processed video may indicate the location of each anomaly of the one or more anomalies.
At step 320, the system 100 may be configured to transmit the processed video to the user device. Specifically, the system 100 may be configured to transmit the processed video to the user device to display the processed video to the user.
Thus, the system 100 and the method 300 of the present disclosure advantageously facilitate surveying of roads and offer numerous advantages over traditional methods. Firstly, the system 100 significantly reduces labor requirements and survey time, leading to cost savings and increased efficiency. Further, the system 100 is automated and more efficient, cost effective, and accurate than traditional systems and methods for surveying roads. Automation enables faster data collection and analysis, allowing for more frequent and comprehensive assessments of road conditions. Additionally, technology-driven surveys tend to be more objective, minimizing subjective biases and ensuring greater accuracy in identifying maintenance needs. Furthermore, the method 300 enhances safety by reducing the need for manual inspections on busy or hazardous road sections, thereby mitigating risks to surveyors and other road users. Overall, the system 100 and the method 300 enable proactive maintenance, leading to improved road safety, reduced congestion, and enhanced transportation infrastructure management.
The foregoing discussion of the present disclosure has been presented for purposes of illustration and description. It is not intended to limit the present disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the present disclosure are grouped together in one or more embodiments, configurations, or aspects for the purpose of streamlining the disclosure. The features of the embodiments, configurations, or aspects may be combined in alternate embodiments, configurations, or aspects other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the present disclosure requires more features than are expressly recited hereinabove.
Moreover, though the description of the present disclosure has included description of one or more embodiments, configurations, or aspects and certain variations and modifications, other variations, combinations, and modifications are within the scope of the present disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments, configurations, or aspects to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, or ranges, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
CLAIMS:
1. A system (100) to conduct a survey of a road, the system (100) comprising:
a plurality of imaging units (104) that are disposed on a vehicle moving on the road and configured to capture a plurality of images and videos of the road;
at least one positioning unit (106) disposed on the vehicle that is coupled to the plurality of imaging units (104) and configured to detect a real time geographical location of the vehicle;
a user device (102) that is coupled to the plurality of imaging units (104) and the at least one positioning unit (106), and configured to enable a user to provide data corresponding to a geofence region wherein the geofence region represents a region of right of way (ROW) of the road to be surveyed;
processing circuitry (112) coupled to the plurality of imaging units (104), the at least one positioning unit (106), and the user device (102), and configured to:
determine a start point and an end point of the geofence region when the vehicle has reached the geofence region based on the real time geographical location of the vehicle and the data corresponding to the geofence region;
identify one or more anomalies present within the geofence region from the plurality of images and videos by way of an AI/ML technique related to image pattern design detection;
synchronize a framewise time of the plurality of imaging units (104) with the real time geographical location of the vehicle by way of a Network Time Protocol (NTP) technique; and
generate a processed video wherein the location of each anomaly of the one or more anomalies is indicated.
2. The system (100) as claimed in claim 1, wherein the processing circuitry (112) is further configured to monitor the real time geographical location of the vehicle within the geofence region, and enable the plurality of imaging units (104) to simultaneously capture the plurality of images and videos of the geofence region from the start point to the end point of the geofence region when the vehicle has reached the start point of the geofence region, and enable the plurality of imaging units (104) to be on a standby mode when the vehicle has not reached the start point of the geofence region.
3. The system (100) as claimed in claim 1, wherein the at least one positioning unit (106) comprises at least one of a global positioning system (GPS), a global information system (GIS), a distance measurement instrument (DMI), or an inertial measurement unit (IMU).
4. The system (100) as claimed in claim 1, wherein the processing circuitry (112) is further configured to synchronize a time of the plurality of imaging units (104) by way of the NTP technique to stitch a plurality of frames of the plurality of videos.
5. The system (100) as claimed in claim 1, wherein the processing circuitry (112) is further configured to determine dimensions of the one or more anomalies by way of the AI/ML technique.
6. The system (100) as claimed in claim 1, wherein the processing circuitry (112) is further configured to de-duplicate the one or more anomalies that are present in more than one frame of the plurality of frames.
7. The system (100) as claimed in claim 1, wherein the processing circuitry (112) is further configured to transmit the processed video to the user device.
8. A method (300) for conducting a survey of a road, the method (300) comprising:
receiving (302), by way of a user device (102), data corresponding to a geofence region wherein the geofence region represents a region of right of way (ROW) of the road to be surveyed;
determining (304), by way of processing circuitry (112) that is coupled to the user device (102), a plurality of imaging units (104) disposed on a vehicle, and at least one positioning unit (106) configured to detect a real time geographical location of the vehicle, a start point and an end point of the geofence region when the vehicle has reached the geofence region based on the real time geographical location of the vehicle and the data corresponding to the geofence region;
monitoring (306), by way of the processing circuitry (112), the real time geographical location of the vehicle within the geofence region along with enabling the plurality of imaging units (104) to simultaneously capture a plurality of images and videos of the geofence region from the start point to the end point of the geofence region when the vehicle has reached the start point of the geofence region, and enabling the plurality of imaging units (104) to be on a standby mode when the vehicle has not reached the start point of the geofence region;
identifying (308), by way of the processing circuitry (112), one or more anomalies present on the geofence region from the plurality of images and videos by way of an AI/ML technique related to image pattern design detection;
synchronizing (310), by way of the processing circuitry (112), a framewise time of the plurality of imaging units (104) with the real time geographical location of the vehicle by way of a Network Time Protocol (NTP) technique;
synchronizing (312), by way of the processing circuitry (112), a time of the plurality of imaging units (104) by way of the NTP technique for stitching a plurality of frames of the plurality of videos;
de-duplicating (314), by way of the processing circuitry (112), the one or more anomalies that are present in more than one frame of the plurality of frames;
determining (316), by way of the processing circuitry (112), dimensions of the one or more anomalies by way of the AI/ML technique; and
generating (318), by way of the processing circuitry (112), a processed video wherein the location of each anomaly of the one or more anomalies is indicated.
9. The method (300) as claimed in claim 8, further comprising transmitting (320), by way of the processing circuitry (112), the processed video to the user device (102).