Abstract: SYSTEM AND METHOD FOR DETECTING TRAFFIC BLOCKERS. The present disclosure describes a system (200) and a method (100) for detecting traffic blockers at a road intersection. Precisely, said system (200) discloses techniques to detect the traffic blockers from at least one image relating to traffic conditions received from the at least one image capturing unit at regular time intervals. The system (200) may notify one or more vehicles deviating from traffic rules with respect to the traffic signal status corresponding to the lane.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See section 10, rule 13)
"SYSTEM AND METHOD FOR DETECTING TRAFFIC BLOCKERS"
ZENSAR TECHNOLOGIES LIMITED, of Zensar Knowledge
Park, Plot # 4, Midc, Kharadi, Off Nagar Road, Pune,
Maharashtra 411014, India;
The following specification particularly describes the invention and the manner in which it is to be performed
CROSS REFERENCE TO RELATED APPLICATION
This patent application claims priority from Indian Provisional Application No. 201921011323 filed on March 23, 2019.
TECHNICAL FIELD
The present disclosure relates to a system and a method for detecting traffic blockers at road intersections.
BACKGROUND
Road traffic is one of the major aspects impacting the day-to-day life of individuals. Hence, managing road traffic and ensuring a smooth flow of traffic on the road is an important yet tedious task. One important part of managing traffic is regulating traffic at road intersections. It is often seen that, at a road intersection, a large number of vehicles tend to cause traffic congestion due to negligent driving, such as not following traffic lights and designated lanes. For example, vehicles that are meant to go straight block the free left turn to a connecting road when the traffic light is red, in order to cross the signal, resulting in traffic congestion. In another example, vehicles that are meant to go right may be positioned in the left lane and tend to block the traffic in the left lane by not moving when the traffic signal is green. This induces impatience in the drivers of vehicles wanting to take left turns onto the connecting roads and may cause accidents if the traffic moves in between. It may also cause traffic jams and interrupt the traffic flow. Consequently, with an unorderly flow of traffic, vehicles may lose a large amount of fuel at crossings and cause additional pollution in an already polluted environment.
Hence, in order to ensure smooth traffic flow and avoid traffic jams, it is desirable to identify the vehicles that are interrupting the traffic flow and to send them initial warnings or charge them a penalty. Therefore, there is a need to detect traffic blockers at road intersections and to ensure an orderly flow of traffic.
OBJECTS OF THE DISCLOSURE
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows.
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide a system for detecting traffic blockers at road intersections.
Another object of the present disclosure is to provide a method for detecting traffic blockers at road intersections.
Yet another object of the present disclosure is to ensure the orderly flow of traffic.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY OF THE PRESENT DISCLOSURE
Before the present method, apparatus and hardware enablements are described, it is to be understood that this disclosure is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments of the present disclosure which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only and is not intended to limit the scope of the present disclosure.
In an aspect, the present disclosure describes a system for detecting traffic blockers. The system may comprise a memory, a communication unit and a processing unit. The system may be connected to at least one image capturing unit. The processing unit may be coupled to the communication unit and the memory. The communication unit may be configured to receive at least one image relating to traffic conditions from the at least one image capturing unit at regular time intervals.
The processing unit may be configured to detect lanes from the received at least one image by identifying edges or lines present in the road. The processing unit then may extract traffic signal status from the image corresponding to the lane. Thereafter the processing unit may determine movement of vehicles with respect to the traffic signal status corresponding to the lane to identify the vehicles blocking the traffic. The processing unit may then notify one or more vehicles deviating from traffic rules with respect to the traffic signal status corresponding to the lane.
In another aspect of the present disclosure, the processing unit may analyse the vehicles' positions at regular time intervals and determine whether a vehicle is stationary or moving with respect to a given traffic signal status to determine the movement of the vehicles.
In yet another aspect of the present disclosure, the processing unit may extract the traffic signal status from an image different from the image corresponding to the lane, for a given time.
In still another aspect of the present disclosure, the processing unit may extract registration data of the one or more vehicles from the received images using optical character recognition (OCR) technique.
In another aspect of the present disclosure, the processing unit may send the registration data to a server and receive identification data of the one or more vehicles from the server, wherein the identification data comprises the name and contact details of the owner of the one or more vehicles.
In another aspect, the present disclosure describes a method for detecting traffic blockers. The method may comprise receiving at least one image relating to traffic conditions from at least one image capturing unit at regular time intervals. Next, the method may comprise detecting lanes from the received at least one image by identifying edges or lines present in the road. The method may then comprise extracting traffic signal status from the image corresponding to the lane. The method may also comprise determining movement of vehicles with respect to the
traffic signal status corresponding to the lane to identify the vehicles blocking the traffic and notifying one or more vehicles deviating from traffic rules with respect to the traffic signal status corresponding to the lane.
In yet another aspect of the present disclosure, determining the movement of the vehicles may comprise analysing the vehicles' positions at regular time intervals and determining whether a vehicle is stationary or moving with respect to a given traffic signal status.
In still another aspect of the present disclosure, the method may further comprise extracting the traffic signal status from an image different from the image corresponding to the lane, for a given time.
In another aspect of the present disclosure, the method may further comprise extracting registration data of the one or more vehicles from the received images using optical character recognition (OCR) technique.
In yet another aspect of the present disclosure, the method may further comprise sending the registration data to a server and receiving identification data of the one or more vehicles from the server, wherein the identification data comprises the name and contact details of the owner of the one or more vehicles.
The following paragraphs are provided in order to describe the best mode of working the method and apparatus and nothing in this section should be taken as a limitation of the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Further aspects and advantages of the present disclosure will be readily understood from the following detailed description with reference to the accompanying drawings, where like reference numerals refer to identical or similar or functionally similar elements. The figures together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate the
aspects/embodiments and explain various principles and advantages, in accordance with the present disclosure wherein:
Figure 1 illustrates a flow chart of a method for detecting traffic blockers, according to an embodiment of the present disclosure;
Figures 1(a)-(b) illustrate examples related to the edge detection technique, according to an embodiment of the present disclosure;
Figure 1(c) illustrates an example related to the Hough line transform technique, according to an embodiment of the present disclosure;
Figure 2 illustrates a block diagram of a system for detecting traffic blockers, according to an embodiment of the present disclosure; and
Figure 3 illustrates a block diagram of a processing unit of the system, according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE PRESENT DISCLOSURE
Embodiments of the present disclosure will now be described with reference to the accompanying drawings. It should be understood that the disclosed method and apparatus are susceptible to various modifications and alternative forms; specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below.
Before describing in detail embodiments, it may be observed that the novelty and inventive step that are in accordance with the present disclosure resides in the method and apparatus for detecting traffic blockers. It is to be noted that a person skilled in the art can be motivated from the present disclosure and modify the various constructions of apparatus or various steps of the method. However, such modification should be construed within the scope of the present disclosure. Accordingly, the drawings show only those specific details that are pertinent for understanding the embodiments of the present disclosure so as not to obscure the
disclosure with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
The terminology used in the present disclosure is only for the purpose of explaining a particular embodiment, and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "comprises," "comprising," "including," and "having" are open ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, modules, units and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
Although any apparatus and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred apparatus and methods are now described.
Fig. 1 illustrates a flow chart describing a method (100) for detecting traffic blockers according to an embodiment of the present disclosure.
As shown in fig. 1, at step 101, at least one image relating to traffic conditions may be received at regular time intervals from at least one image capturing unit (208). The at least one image may be received via a communication unit (204) coupled to the at least one image capturing unit (208). The at least one image may capture the traffic signal and the vehicles present at the road intersection. In an embodiment, the regular interval may vary from a few seconds to a few minutes. For example, the at least one image may be received every 10 seconds or every minute. Hence, the regular interval at which the image is received may be set from a few seconds to a few minutes.
In an embodiment, the regular interval at which the image is received may be set in accordance with the frequency at which the traffic signal changes for a corresponding lane. For example, if the traffic signal for a particular traffic light corresponding to a particular lane turns green every 30 seconds, then the regular interval may be set to 30 seconds.
In another embodiment, instead of receiving a single image, a plurality of images may be received at step 101. It may be noted that each of the plurality of images may be captured at the same timestamp by one or more image capturing units deployed within the vicinity of the traffic signal. In one aspect, each image may be captured in a manner such that it includes a view, from different angles and positions, of the traffic being passed or halted at the road intersection and the status of the traffic signal. The status here may indicate one of 'Red', 'Yellow' and 'Green'. To elucidate the aforementioned, one image of the plurality of images may be captured to include the status of the traffic signal, whereas another image of the plurality of images may capture one or more vehicles halted at the road intersection on a free lane, obstructing the following traffic from passing through the free lane.
After receiving the at least one image, the lanes present on the road may be detected at step 103. For example, the received image may be processed to determine how many and what type of lanes are present on the road. In an example, the lanes may be a left lane, a straight lane and a right lane. In order to detect the lanes, the received image is transmitted via the communication unit (204) for further processing by the processing unit (206). In one embodiment, the processing unit (206) processes the received image by executing an edge detection technique. It may be noted that the edge detection technique may include a variety of mathematical methods to identify points in the received image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which the image brightness changes sharply are typically organized into a set of curved line segments termed edges. In this particular technique, sudden changes or discontinuities in an image are called edges. Most of the shape information of an image is enclosed in edges, as shown in figs. 1(a) and 1(b). It may further be noted that the edge detection technique is a tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.
In order to detect the edges present in the received image, Canny edge detection may be used. Canny edge detection is an edge detection operator that uses a multi-stage technique to detect a wide range of edges in the received image. First, a Gaussian blur may be applied to remove noise from the received image. Once the noise has been removed, Sobel edge detection is performed to find the intensity gradients of the edges in the received image, thereby deriving a filtered image emphasizing the edges. Subsequently, Sobel edge detection computes the derivative of a curve fitting the gradient between light and dark areas in the filtered image, finds the peak of the derivative, which is interpreted as the location of an edge pixel, and removes pixels which are too far from any edge.
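By way of a non-limiting illustration, the gradient computation described above may be sketched in pure Python. The 3x3 Sobel kernels are standard; the 4x4 sample image and the function name are merely illustrative and do not form part of the disclosure.

```python
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal Sobel kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical Sobel kernel

def sobel_magnitude(img):
    """Return the gradient magnitude at each interior pixel of a 2-D grid."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical light/dark boundary: gradient magnitudes peak along the edge.
image = [[0, 0, 255, 255]] * 4
mags = sobel_magnitude(image)
```

In the sample image, the intensity jump between the second and third columns produces large gradient magnitudes at the interior pixels, which the later thresholding stage would classify as edge candidates.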
In one embodiment, a double threshold may also be applied to determine potential edges. The potential edges may be determined by eliminating extraneous pixels caused by noise or milder color variation. It may be noted that, if a pixel's gradient value, based on the Sobel differential, is greater than a high threshold value, it is considered a strong candidate pixel for an edge. On the contrary, if the gradient is less than a low threshold value, the pixel is turned off and accordingly eliminated from the filtered image. In an embodiment, if the gradient is between the high threshold value and the low threshold value, the pixel is considered a weak candidate pixel for an edge. These weak candidate pixels are further examined to determine whether they are connected to the strong candidate pixels. If it is determined that they are connected to the strong candidate pixels, the weak candidate pixels are also considered to be edge pixels; otherwise, the remaining non-connected weak candidate pixels are turned off.
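The double-threshold step described above may be sketched as follows. This is a simplified, non-limiting illustration: weak pixels are kept only if directly 8-connected to a strong pixel, whereas a full implementation would also follow chains of connected weak pixels; the thresholds and sample gradients are illustrative.

```python
def hysteresis(grads, high, low):
    """Double-threshold edge selection on a 2-D gradient map.

    Pixels above `high` are strong edges; pixels between `low` and `high`
    are weak candidates kept only when 8-connected to a strong pixel;
    everything else is turned off.
    """
    h, w = len(grads), len(grads[0])
    strong = {(y, x) for y in range(h) for x in range(w) if grads[y][x] > high}
    weak = {(y, x) for y in range(h) for x in range(w)
            if low <= grads[y][x] <= high}
    kept = set(strong)
    for (y, x) in weak:
        # keep a weak pixel if any of its 8 neighbours is a strong pixel
        if any((y + dy, x + dx) in strong
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)):
            kept.add((y, x))
    return kept

# The weak pixel (gradient 50) survives because it touches the strong pixel.
edges = hysteresis([[0, 50, 200], [0, 0, 0]], high=100, low=40)
```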
Thus, in this manner, the edges may be detected in the received image captured by one or more image capturing units. It should be noted that any other technique
known to a person skilled in the art may be used to detect the edges, which will fall within the scope of the present disclosure.
Upon detection of the edges in the received image, lanes may be detected in the received image. In an embodiment, a Hough line transformation technique is used to detect the lanes at the road intersection from the edges detected as elucidated above. It may be noted that the Hough line transformation identifies lines in the image. In the Hough line transformation, a line may be represented by the following equation:
r = x cosθ + y sinθ
If the points identified in the image are plotted on a graph, we may get a sinusoid. If the curves of two different points intersect in the plane θ - r, both points belong to the same line. In other words, a line can be detected by finding the number of intersections between curves. The more curves that intersect, the more points the line represented by that intersection has. In general, a threshold on the minimum number of intersections needed to detect a line may be defined. In fig. 1(c), the bright line is the identified line.
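As a non-limiting sketch, the voting scheme described above may be illustrated with a coarse accumulator over the equation r = x cosθ + y sinθ. The 5-degree angular step, the rounding of r, and the function name are illustrative choices only; neighbouring accumulator cells may also collect votes at this coarse resolution.

```python
import math
from collections import Counter

def hough_lines(points, threshold, step_deg=5):
    """Accumulate votes in (theta, r) space; each edge point contributes one
    sinusoid, and cells reaching `threshold` votes are reported as lines."""
    acc = Counter()
    for (x, y) in points:
        for deg in range(0, 180, step_deg):
            theta = math.radians(deg)
            r = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(deg, r)] += 1
    return [cell for cell, votes in acc.items() if votes >= threshold]

# Four collinear points on the vertical line x = 3 all vote for the
# accumulator cell (theta = 0 degrees, r = 3), so that cell crosses
# the threshold and the line is detected.
lines = hough_lines([(3, 0), (3, 1), (3, 2), (3, 3)], threshold=4)
```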
It should be apparent that any other technique known to a person skilled in the art may be used to identify the lines in the image, which will fall within the scope of the present disclosure.
In another embodiment, the lanes may be identified using a supervised learning principle.
In another embodiment, a plurality of images may be processed simultaneously to detect the lanes. For example, two images of the plurality of images may be processed to detect the lanes using the techniques described above. In one scenario, one image may be used to identify the left lane and another image may be used to identify the right lane. In another scenario, two images may be processed together to identify the left lane, and two other images may be used to identify the right lane. In yet another scenario, two images may be processed together to identify both the left and right lanes.
After the detection of lanes in the received image, at step 105, the status of the traffic signal may be extracted from the image corresponding to the lane. In other words, the received image in which the lanes are already detected may be used to extract the status. In an embodiment, the received image may be processed in a manner such that the processing unit (206), at the same timestamp, determines the status of the traffic signal as well as the vehicles obstructing the lanes that are meant to be free for the following vehicles. For example, if a left lane is detected in the received image, then, by processing this image, it may be determined that the traffic signal corresponding to the left lane is green.
In another embodiment, the traffic signal status may be extracted from another image which is different from the image corresponding to the lane, at a given time. For example, a different image in which lanes are not detected may be used to extract the traffic signal status.
In another embodiment, more than one image may be processed simultaneously to extract the traffic signal status. In an embodiment, image processing techniques may be applied to recognize the color of the traffic signals. Any known image processing technique may be used to recognize the color of the traffic signals. A few non-limiting examples of such image processing techniques are color thresholding, BLOB analysis with morphological filters, spot light detection, etc. In another embodiment, the status of the traffic signal may be extracted using a supervised learning principle.
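Of the techniques listed above, color thresholding may be sketched as follows. This is a non-limiting illustration: the RGB thresholds, the assumption that pixels are sampled from the signal-head region of the image, and the function name are illustrative only.

```python
def signal_status(pixels):
    """Classify the dominant traffic-light color from (R, G, B) pixels
    sampled from the signal-head region; returns 'Red', 'Yellow' or 'Green'."""
    votes = {"Red": 0, "Yellow": 0, "Green": 0}
    for r, g, b in pixels:
        if r > 180 and g > 180 and b < 100:        # bright red + green = yellow
            votes["Yellow"] += 1
        elif r > 180 and g < 100 and b < 100:      # dominant red channel
            votes["Red"] += 1
        elif g > 180 and r < 100 and b < 100:      # dominant green channel
            votes["Green"] += 1
    return max(votes, key=votes.get)

# Ten reddish pixels outvote three greenish ones, so the status is 'Red'.
status = signal_status([(230, 40, 30)] * 10 + [(20, 200, 40)] * 3)
```

The majority vote makes the classification tolerant to a few stray pixels from reflections or neighbouring lights.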
Now, at step 107, the movement of vehicles with respect to the traffic signal status corresponding to the lane may be determined. To elucidate the aforementioned, in the received image it may be determined that the traffic signal for the left lane is green. In this situation, every vehicle present in the left lane should be moving. Now consider a scenario where a vehicle in the left lane is not moving because its driver intends to go right or straight and is waiting for the right moment to make the required turn. This vehicle may cause a traffic blockage in the left lane. In another scenario, a vehicle standing in the right lane or the straight lane may take a turn towards the left lane because its driver intends to move towards the left lane. Such a vehicle may also cause a traffic blockage. The proposed method may determine the movement of vehicles with respect to the traffic signal status corresponding to the lane to identify such vehicles blocking the traffic. To determine the movement of a vehicle, the vehicle's position may be analysed at regular time intervals, and it may be determined whether the vehicle is stationary or moving with respect to a given traffic signal status. For example, if the traffic light for the left lane is green for 30 seconds, the images received in this period, i.e. 30 seconds, may be analysed to determine whether the vehicles have moved from their positions or not. If the vehicles have moved, then they may not be considered as blocking the traffic. On the other hand, if even a single vehicle has not moved, then that vehicle may be considered as blocking the traffic. In another embodiment, the distance of the vehicles from the traffic signals may also be monitored to determine the movement of the vehicles.
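The position comparison described above may be sketched as follows. As a non-limiting illustration, each frame is assumed to carry per-vehicle image coordinates; the identifiers, the pixel threshold and the function name are illustrative and not part of the disclosure.

```python
def blocking_vehicles(frames, signal_status, min_shift=5):
    """Flag vehicles whose position barely changes while the signal is green.

    `frames` is a time-ordered list of {vehicle_id: (x, y)} dictionaries
    captured at regular intervals; a vehicle whose position shifts by less
    than `min_shift` pixels between the first and last frame is a blocker.
    """
    if signal_status != "Green" or len(frames) < 2:
        return set()
    first, last = frames[0], frames[-1]
    blockers = set()
    for vid, (x0, y0) in first.items():
        if vid in last:
            x1, y1 = last[vid]
            if abs(x1 - x0) + abs(y1 - y0) < min_shift:
                blockers.add(vid)
    return blockers

# "KA01" barely moves during the green phase and is flagged; "KA02" drives off.
frames = [{"KA01": (100, 50), "KA02": (120, 50)},
          {"KA01": (100, 51), "KA02": (180, 50)}]
stuck = blocking_vehicles(frames, "Green")
```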
It should be apparent that any other technique known to a person skilled in the art may be used to determine the movement of the vehicles, which will fall within the scope of the present disclosure.
It is to be noted that determination of the movement in vehicle position is only one way of identifying vehicles blocking the traffic. Any other technique that may be used to identify the vehicles blocking the traffic will also fall within the scope of the present disclosure.
After identifying the vehicles blocking the traffic, such vehicles may be notified about deviating from the traffic rules, at step 109. In an embodiment, the owner of the vehicle may be notified in the form of messages. In another embodiment, the owner of the vehicle may be notified in the form of a pre-recorded audio message. In another embodiment, a message may be sent to the traffic police to regulate the traffic at that particular traffic signal and impose an on-the-spot fine on the vehicle. It should be noted that any other means that may be apparent to a person skilled in the art may be used to notify the owner and will fall within the scope of the present disclosure.
To notify the owner of the vehicle, it is necessary to have the details of the owner of the vehicle. To get the said details, the registration data of the one or more vehicles blocking the traffic may be extracted from the received images. In an embodiment, an optical character recognition (OCR) technique may be used to extract the registration data. Any other technique that may be used to extract the registration number will also fall within the scope of the present disclosure. The registration number may be the number which identifies the vehicle and its owner. In an embodiment, the registration number may be alpha-numeric or numeric or alphabetic.
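Raw OCR output is often noisy, so a post-processing step may normalise and validate the extracted registration number before it is sent to the server. As a non-limiting sketch, the pattern below assumes the common Indian registration format (state code, district code, optional series, four-digit number, e.g. "MH12AB1234"); the disclosure itself does not fix any plate format.

```python
import re

# Illustrative pattern only: two state letters, 1-2 district digits,
# up to three series letters, then a four-digit number.
PLATE = re.compile(r"^[A-Z]{2}\d{1,2}[A-Z]{0,3}\d{4}$")

def normalise_plate(ocr_text):
    """Strip spaces/hyphens from the OCR string, upper-case it, and return
    the cleaned registration number, or None if the format is invalid."""
    cleaned = re.sub(r"[\s-]", "", ocr_text.upper())
    return cleaned if PLATE.fullmatch(cleaned) else None

plate = normalise_plate("mh 12 ab 1234")
```

Rejecting malformed strings at this stage avoids querying the server with garbage produced by misread characters.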
The registration data may then be sent to a server (210) storing information/data related to the vehicles. In response to sending the registration data, the processing unit (206) may receive identification data of the one or more vehicles from the server (210). In an embodiment, the identification data may comprise the name and contact details of the owner of the one or more vehicles.
Further, according to an aspect of the present disclosure, the disclosed techniques may be performed periodically.
The present disclosure also describes a system to detect traffic blockers at road intersections and a method thereof. The system (200) is now being described with reference to Fig. 2.
Fig. 2 illustrates a block diagram of a system (200) to detect traffic blockers at road intersections. As shown in fig. 2, the system (200) may comprise a memory (202), a communication unit (204) and a processing unit (206). The communication unit (204) may be coupled to at least one image capturing unit (208). In an embodiment, the communication unit (204) may receive at least one image from the at least one image capturing unit (208) at regular intervals, as explained above in reference to fig. 1. The at least one image may capture traffic signal and vehicles present at the road intersection.
In another embodiment, the communication unit (204) may receive a plurality of images at the same time from one or more image capturing units. The plurality of images may represent views of the traffic and the traffic signal at the road intersection from different angles and positions, as explained above in reference to fig. 1.
In an embodiment, the image capturing units may be located at different places, such as at the traffic signal, in the nearby area of the traffic signals, or on vehicles present at the road intersection. In an embodiment, the image capturing unit may be a surveillance camera placed at the traffic signal or a camera placed at any building/property near the road intersection. In an embodiment, the image capturing unit may be a camera mounted on a vehicle positioned on the road at a given time. It should be apparent to a person skilled in the art that any device capable of capturing an image of the road intersection may constitute an image capturing unit. The communication unit (204) is capable of interacting with the said image capturing units.
In another embodiment, the image capturing unit may comprise a database storing images captured from the cameras. In other words, the images may be received directly from the cameras or from the databases storing the images.
The communication unit (204) and the at least one image capturing unit (208) may be paired with each other to facilitate data transfer. In an embodiment, the pairing between the communication unit (204) and the at least one image capturing unit (208) may be either by means of a wired connection or a wireless connection.
As shown in fig. 2, the processing unit (206) may be coupled to the memory (202) and the communication unit (204). In an embodiment, the processing unit (206) may process the at least one image to detect the lanes on the road. For example, the received image may be processed to determine how many and what type of lanes are present on the road. In a scenario, a particular road intersection may have three lanes, i.e. a left lane, a straight lane and a right lane. This particular information about the lanes may be determined from the said image.
In an embodiment, the lanes may be detected using an edge detection technique, as explained above in reference to fig. 1. In an embodiment, the Hough line transformation may be implemented to detect the lanes to the connecting roads, as explained above in reference to fig. 1. It should be apparent that any other technique known to a person skilled in the art may be used to detect the lanes. In an embodiment, the lanes may be identified using a supervised learning principle. For example, the processing unit (206) may be trained with an exemplary data set to detect the lanes in the image. The processing unit (206) may be trained using techniques known to a person skilled in the art.
In an embodiment, the processing unit (206) may detect the lanes from a single image. In another embodiment, the processing unit (206) may process a plurality of images simultaneously to detect the lanes, as explained above in reference to fig. 1.
The processing unit (206) may process the at least one image corresponding to the lane to extract the traffic signal status. For example, the processing unit (206) may process the same image in which the lanes are detected to extract the status of the traffic signal. In an embodiment, the at least one image represents a view of the traffic signal, and the processing unit (206) may process the image to determine the status of the traffic signal. The processing unit (206) may extract the traffic signal status in a similar manner, as explained above in reference to fig. 1.
In an embodiment, the processing unit (206) may apply image processing techniques to recognize the color of the traffic signals. In other words, the processing unit (206) may recognize the status of the traffic signal relative to the lane. For example, the processing unit (206), at a particular time, may detect that there is a left lane and a right lane and that the traffic signal corresponding to the left lane is green. In another embodiment, the processing unit (206) may extract the traffic signal status from another image which is different from the image corresponding to the lane, at a given time. For example, a different image in which lanes are not detected may be used to extract the traffic signal status. Also, the processing unit (206) may process more than one image simultaneously to extract the traffic signal status.
The processing unit (206) may then determine the movement of the identified vehicles in the image. In an embodiment, the processing unit (206) may analyse the vehicles' positions at regular time intervals and determine whether a vehicle is stationary or moving with respect to a given traffic signal status. For example, the processing unit (206) may analyse a plurality of images at regular intervals to determine the movement in the positions of the vehicles with respect to a given traffic signal status. The processing unit (206) may determine the movement of the vehicles in a similar manner, as explained above in reference to fig. 1.
It is to be noted that determination of the movement in vehicle position is only one way of identifying vehicles blocking the traffic. Any other technique that may be used to identify the vehicles blocking the traffic will also fall within the scope of the present disclosure.
The processing unit (206) may then extract the registration data of the one or more vehicles blocking the traffic from the received images. In an embodiment, the processing unit (206) may use an optical character recognition (OCR) technique to extract the registration data. Any other technique that may be used to extract the registration number will also fall within the scope of the present disclosure. The registration number may be the number which identifies the vehicle and its owner. In an embodiment, the registration number may be alpha-numeric or numeric or alphabetic.
The processing unit (206) may then send the registration data to a server (210) storing information/data related to the vehicles. In an example, the server (210) may be a server related to a regional transport authority or regional transport office (RTO), storing data/information of vehicles registered in a particular region/state. In another embodiment, the server (210) may be a centralized server storing information/data of vehicles registered across a country. In response to sending the registration data, the processing unit (206) may receive identification data of the one or more vehicles from the server (210). In an embodiment, the identification data may comprise the name and contact details of the owner of the one or more vehicles.
The processing unit (206) may then notify the owners of the one or more vehicles that they are deviating from traffic rules with respect to the traffic signal status. In an embodiment, the processing unit (206) may notify the owner in the form of a text message. In another embodiment, the processing unit (206) may notify the owner in the form of a pre-recorded audio message. In yet another embodiment, the processing unit (206) may send a message to the traffic police to regulate the traffic at that particular traffic signal and impose an on-the-spot fine on the vehicle. It should be noted that any other means of notifying the owner that may be apparent to a person skilled in the art will also fall within the scope of the present disclosure.
In an embodiment, the memory (202) may store the received identification data corresponding to the identified vehicles, along with their vehicle registration numbers and the corresponding vehicle owners' contact numbers and names. In an embodiment, the memory (202) may store this data in a look-up table. In this embodiment, the processing unit (206) may first try to retrieve the identification data from the memory (202) before sending the registration number to the server (210).
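The cache-first lookup described above may be sketched as follows. The look-up-table structure and the stubbed server call are illustrative assumptions; a real deployment would issue a network request to the server (210).

```python
# Illustrative sketch of the look-up-table behaviour: consult the in-memory
# table first, and only query the server on a miss, storing the response
# for subsequent requests for the same registration number.

class IdentificationCache:
    def __init__(self, server_lookup):
        # registration number -> identification data (owner name, contact, ...)
        self._table = {}
        self._server_lookup = server_lookup  # callable standing in for server (210)

    def get_identification(self, registration_number):
        if registration_number not in self._table:
            # Cache miss: fall back to the server and remember the answer.
            self._table[registration_number] = self._server_lookup(registration_number)
        return self._table[registration_number]
```

With this arrangement, repeated offences by the same vehicle do not trigger repeated server round-trips, since the second and later lookups are served from the memory-resident table.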
Fig. 3 illustrates a block diagram of the processing unit (206) according to an embodiment of the present disclosure. As shown in fig. 3, the processing unit (206) may comprise a lane detector (301), a traffic signal status detector (303), a vehicle position detector (305) and a data extractor (307). The lane detector (301) may be coupled to the traffic signal status detector (303) and the vehicle position detector (305). The vehicle position detector (305) may be coupled to the traffic signal status detector (303) and the data extractor (307). In an embodiment, the various blocks of the processing unit (206) may be configured to perform the various functions of the processing unit (206). For example, the lane detector (301) may be configured to detect the lanes from the images received from the at least one image capturing unit (208). The traffic signal status detector (303) may be configured to extract the traffic signal status from the image. The vehicle position detector (305) may be configured to determine the movement of the vehicles from the images. The data extractor (307) may be configured to extract the registration numbers of the vehicles and further communicate with the communication unit (204) to notify the owners of the vehicles.
Further, in one implementation, the processing unit (206), the lane detector (301), the traffic signal status detector (303), the vehicle position detector (305) and the data extractor (307) may be implemented using one or more processor(s) or microcontroller(s).
It would be appreciated that any of the foregoing may be implemented in the form of software, hardware and/or firmware. Further, these functionalities may be implemented using an application specific integrated circuit (ASIC), an electronic circuit, a steering assistance controller (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
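The coupling of the four blocks of the processing unit (206) described above may be sketched as a simple pipeline. The interfaces are illustrative assumptions; each block's internal logic is stubbed so that only the data flow between blocks is shown.

```python
# Illustrative composition of the blocks of the processing unit (206):
# lane detector (301) -> traffic signal status detector (303) ->
# vehicle position detector (305) -> data extractor (307).

class ProcessingUnit:
    def __init__(self, lane_detector, signal_detector, position_detector, data_extractor):
        self.lane_detector = lane_detector          # block 301
        self.signal_detector = signal_detector      # block 303
        self.position_detector = position_detector  # block 305
        self.data_extractor = data_extractor        # block 307

    def process(self, images):
        """Run the block pipeline on a sequence of received images."""
        lanes = self.lane_detector(images[-1])
        status = self.signal_detector(images[-1], lanes)
        blockers = self.position_detector(images, status)
        # Extract registration data for each identified blocker.
        return [self.data_extractor(images[-1], vehicle) for vehicle in blockers]
```

Each constructor argument is any callable implementing the corresponding block, which is consistent with the statement above that the blocks may be realised in software, hardware and/or firmware.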
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCES AND ECONOMICAL SIGNIFICANCE
The present disclosure described herein above has several technical advantages
including, but not limited to, the realization of a system for detecting traffic blockers
at road intersections and a method thereof, that:
- sends warnings to, or fines, the traffic blockers;
- ensures an orderly flow of traffic;
- reduces mishaps on the road; and
- reduces the wastage of fuel.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Any discussion of documents, acts, materials, devices, articles or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the
disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by the following claims. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope, which is set forth in the following claims.
LIST OF REFERENCE NUMERALS USED IN DETAILED DESCRIPTION
AND DRAWING
200 – System
202 – memory
204 – communication unit
206 – processing unit
208 – image capturing unit
210 – server
301- lane detector
303- traffic signal status detector
305- vehicle position detector
307- data extractor
We Claim:
1. A system (200) for detecting traffic blockers, comprising:
a memory (202);
a communication unit (204) configured to receive at least one image relating to traffic conditions from at least one image capturing unit (208) at regular time intervals; and
a processing unit (206) coupled to the communication unit (204) and the memory (202), the processing unit (206) is configured to:
detect lanes from the received at least one image by identifying edges or lines present in the road;
extract traffic signal status from the image corresponding to the lane;
determine movement of vehicles with respect to the traffic signal status corresponding to the lane to identify the vehicles blocking the traffic; and
notify one or more vehicles deviating from traffic rules with respect to the traffic signal status corresponding to the lane.
2. The system (200) as claimed in claim 1, wherein the processing unit (206)
while determining the movement of the vehicles, is configured to:
analyse the vehicles' positions at regular time intervals and determine whether a vehicle is stationary or moving with respect to a given traffic signal status.
3. The system (200) as claimed in claim 1, wherein the processing unit (206) is configured to extract the traffic signal status from an image different from the image corresponding to the lane, for a given time.
4. The system (200) as claimed in claim 1, wherein the processing unit (206) is further configured to extract registration data of the one or more vehicles from the received images using an optical character recognition (OCR) technique.
5. The system (200) as claimed in claim 4, wherein the processing unit (206)
is further configured to:
send the registration data to a server (210); and
receive identification data of the one or more vehicles from the server (210), wherein the identification data comprises the name and contact details of the owner of the one or more vehicles.
6. A method (100) for detecting traffic blockers comprising:
receiving (101) at least one image relating to traffic conditions from at least one image capturing unit (208) at regular time intervals;
detecting (103) lanes from the received at least one image by identifying edges or lines present in the road;
extracting (105) traffic signal status from the image corresponding to the lane;
determining (107) movement of vehicles with respect to the traffic signal status corresponding to the lane to identify the vehicles blocking the traffic; and
notifying (109) one or more vehicles deviating from traffic rules with respect to the traffic signal status corresponding to the lane.
7. The method (100) as claimed in claim 6, wherein determining the movement
of the vehicles comprises:
analysing the vehicles' positions at regular time intervals and determining whether a vehicle is stationary or moving with respect to a given traffic signal status.
8. The method (100) as claimed in claim 6, further comprising:
extracting the traffic signal status from an image different from the image corresponding to the lane, for a given time.
9. The method (100) as claimed in claim 6, further comprising:
extracting registration data of the one or more vehicles from the received images using an optical character recognition (OCR) technique.
10. The method (100) as claimed in claim 9, further comprising:
sending the registration data to a server; and
receiving identification data of the one or more vehicles from the server, wherein the identification data comprises the name and contact details of the owner of the one or more vehicles.
| # | Name | Date |
|---|---|---|
| 1 | 201921011323-IntimationOfGrant27-12-2023.pdf | 2023-12-27 |
| 2 | 201921011323-STATEMENT OF UNDERTAKING (FORM 3) [23-03-2019(online)].pdf | 2019-03-23 |
| 3 | 201921011323-PROVISIONAL SPECIFICATION [23-03-2019(online)].pdf | 2019-03-23 |
| 4 | 201921011323-PatentCertificate27-12-2023.pdf | 2023-12-27 |
| 5 | 201921011323-PROOF OF RIGHT [23-03-2019(online)].pdf | 2019-03-23 |
| 6 | 201921011323-CLAIMS [26-11-2021(online)].pdf | 2021-11-26 |
| 7 | 201921011323-POWER OF AUTHORITY [23-03-2019(online)].pdf | 2019-03-23 |
| 8 | 201921011323-DRAWING [26-11-2021(online)].pdf | 2021-11-26 |
| 9 | 201921011323-FORM 1 [23-03-2019(online)].pdf | 2019-03-23 |
| 10 | 201921011323-FER_SER_REPLY [26-11-2021(online)].pdf | 2021-11-26 |
| 11 | 201921011323-FER.pdf | 2021-10-19 |
| 12 | 201921011323-DRAWINGS [23-03-2019(online)].pdf | 2019-03-23 |
| 13 | Abstract1.jpg | 2021-10-19 |
| 14 | 201921011323-DECLARATION OF INVENTORSHIP (FORM 5) [23-03-2019(online)].pdf | 2019-03-23 |
| 15 | 201921011323-Proof of Right (MANDATORY) [07-05-2019(online)].pdf | 2019-05-07 |
| 16 | 201921011323-COMPLETE SPECIFICATION [23-03-2020(online)].pdf | 2020-03-23 |
| 17 | 201921011323-ORIGINAL UR 6(1A) FORM 1-080519.pdf | 2019-12-31 |
| 18 | 201921011323-CORRESPONDENCE-OTHERS [23-03-2020(online)].pdf | 2020-03-23 |
| 19 | 201921011323-DRAWING [23-03-2020(online)].pdf | 2020-03-23 |
| 20 | 201921011323-RELEVANT DOCUMENTS [23-03-2020(online)].pdf | 2020-03-23 |
| 21 | 201921011323-FORM 13 [23-03-2020(online)].pdf | 2020-03-23 |
| 22 | 201921011323-FORM 18 [23-03-2020(online)].pdf | 2020-03-23 |
| 23 | search11323E_05-03-2021.pdf | |