Abstract: An intelligent automated traffic enforcement system is disclosed, adapted to assist the traffic management department in identifying violations, either by traffic department personnel remotely observing, on a computer monitor, the video feeds coming to the control room from the junction, or by automatic detection of the traffic violation with automatic alerting of traffic personnel, without anyone being physically present at the traffic junction or sitting in the control room. Advantageously, the system of the invention does not require any specialized or proprietary camera to detect these violations and is adapted to analyze video feeds from traditional security cameras on a computer to detect the events. The system is adapted to capture and process video of traffic movements to detect violating vehicles and to automatically establish the identity of each vehicle, such as its number plate, shape, size, color, logo and type, and possibly a snapshot of the driver if visible.
Field of the Invention
The present invention relates to an intelligent automated traffic enforcement system adapted to assist the traffic management department in identifying violations, either by traffic department personnel remotely observing, on a computer monitor, the video feeds coming to the control room from the junction, or, the system being further adapted thereto, by automatic detection of the traffic violation with automatic alerting of traffic personnel, without anyone being physically present at the traffic junction or sitting in the control room. Advantageously, the system of the invention does not require any specialized or proprietary camera to detect these violations and is adapted to analyze video feeds from traditional security cameras on a computer to detect the events. The system is adapted to capture and process video of traffic movements to detect violating vehicles and to automatically establish the identity of each vehicle, such as its number plate, shape, size, color, logo and type, and possibly a snapshot of the driver if visible. The system is further adapted to automatically store this information, together with images of the vehicles, in an event log database, so that the traffic inspector can identify possible violations such as red light violation, over-speeding, wrong-way driving, riding without a helmet, driving without a seat belt, using a mobile phone while driving, carrying more than two passengers on a motorcycle, etc., either through the automated video analytics application proposed by the system and/or manually through a computer monitor. Advantageously, the images can be manually tagged with comments by the traffic personnel or automatically tagged with the possible violation type, can be sent manually or automatically to handheld devices of the on-field enforcement team through a communication network for subsequent physical action, and are also kept in the database for future use.
Background of the Invention
Video Management Systems are used for video data acquisition and search processes
using single or multiple servers. They are often loosely coupled with one or more
separate systems for performing operations on the acquired video data such as
analyzing the video content, etc. Servers can record different types of data in storage
media, and the storage media can be directly attached to the servers or accessed
over IP network. This demands a significant amount of network bandwidth to receive data from the sensors (e.g., cameras) and to concurrently transfer or upload the data to the storage media. Owing to the high bandwidth demand of such tasks, especially for video data, a separate high-speed network is often dedicated to transferring data to the storage media. A dedicated high-speed network is costly and often requires costly storage devices as well, which is overkill for low or moderately priced installations.
It is also known that, to guard against server failures, one or more dedicated fail-over (sometimes called mirror) servers are often deployed in the prior art. Dedicated fail-over servers remain unused during normal operations, resulting in wastage of such costly resources. Also, a central server process, installed either in the failover server or in a central server, is required to initiate the back-up service in case a server stops operating. This strategy does not avoid a single point of failure.
Moreover, when the servers and clients reside over different ends in an internet and
the connectivity suffers from low or widely varying bandwidth, transmission of multi-
channel data from one point to another becomes a challenge. Data aggregation
techniques are often applied in such cases which are computationally intensive or
suffer from inter-channel interference, particularly for video, audio or other types of
multimedia data.
As regards analytic servers presently in use, it is well known that there are many video analytics systems in the prior art. Video content analysis is often done on a per-frame basis following a mostly pre-defined schedule, which not only leaves such systems lacking in the desired analytics efficiency but also makes them unnecessarily cost-intensive, with unwanted loss of valuable computing resources.
Added to the above, in the case of presently available video analysis techniques, an unacceptable number of false alarms is reported when the content analysis systems are deployed in a noisy environment for generating alerts in real time. This is because the traditional methods do not automatically adapt to demography-specific environmental conditions, varying illumination levels, varying behavioural and movement patterns of the moving objects in a scene, changes in the appearance of colour under varying lighting conditions, changes in the appearance of colours with global or regional illumination intensity and type of illumination, and similar other factors.
It has therefore been a challenge to identify the appearance of a non-moving foreign object (static object) in a scene in the presence of other moving objects, where the moving objects occasionally occlude the static object. Detection accuracy suffers to various degrees under different demographic conditions.
Extraction of particular types of objects (e.g., the face of a person, but not limited thereto) from images based on fiduciary points is a known technique. However, the computational requirement is often too high for the traditional classifiers used for this purpose in the prior art, e.g., the Haar classifier.
Also, in a distributed system where multiple sites with independent administrative
controls are present, unification of those systems through a central monitoring
station may be required at any later point of time. This necessitates hardware and OS
independence in addition to the backward compatibility of the underlying
computational infrastructure components, and the software architecture should
accommodate such amalgamation as well.
It would be thus clearly apparent from the above state of the art that there is need
for advancement in the art of sensory input/data such as video acquisition cum
recording and /or analytics of such sensory inputs/data such as video feed adapted to
facilitate fail-safe integration and /or optimized utilization of various sensory inputs
for various utility applications including event/alert generation, recording and related
aspects.
Traffic signal violation is a burning traffic enforcement issue throughout the world. Beyond optimistic illusions, the ground realities are too grim to be accepted: fearsome road accidents and traffic jams are its main effects. Seeds of improvement are being planted in all possible arenas, but they are very costly and consume considerable human resources as well.
In particular, some of the regular challenges for the road transportation department
at the different road junctions include the following:
a) Ensuring that the rules and regulations are followed by each and every vehicle that crosses any junction at any point of time.
b) Enhancing road safety for all types of vehicles as well as pedestrians.
c) Providing the road transportation department with an intelligent automatic enforcement system for surveillance at each traffic junction and for the on-field enforcement team, allowing them to book offences and access other Transport Department applications' events in real time.
d) Ensuring smooth traffic flow within the city / country.
Objects of the Invention
It is thus the basic object of the present invention to provide an intelligent traffic enforcement system comprising CCTV IP cameras and Video Analytics Applications, involving virtual loops (as opposed to any physical magnetic loop), for automatic detection of offences like 'red signal violation', 'over speeding' and 'wrong way vehicle movement' at every important junction, integrated with the remote traffic control room.
A further object of the present invention is directed to an intelligent traffic
enforcement system involving Smart phone solution for the on-field enforcement
team allowing traffic personnel to book offences and access other Transport
Department Application's events via GPS / GPRS enabled Mobile / Handheld devices.
Yet another object of the present invention is directed to an intelligent traffic
enforcement system involving setting up of the Control room for backend activities
with complete hardware, software solution and networking.
Another object of the present invention is directed to an intelligent traffic
enforcement system comprising additional data center hardware set-up for Road
Transportation Department to store evidence / archive data for all the relevant
events.
Yet further object of the present invention is directed to an intelligent traffic
enforcement system adapted for connectivity management in real time by data
transfer between the above components to ensure synchronized communication.
Another object of the present invention is directed to advancements in system
discussed above by interconnecting a number of intelligent components consisting of
hardware and software, and involving implementation techniques adapted to make
the system efficient, scalable, cost effective, fail-safe, adaptive to various
demographic conditions, adaptive to various computing and communication
infrastructural facilities.
Summary of the Invention
Thus according to the basic aspect of the present invention there is provided an
intelligent automated traffic enforcement system comprising:
a video surveillance system adapted to localize one or more number plates / license plates of vehicles, stationary or in motion, in the field of view of at least one camera, without requiring the number plate to be fixed at a predetermined location on the car, the license plate being reflective or non-reflective and independent of font and language, using a normal security camera; adapted to filter out other text in the field of view not related to the number plate, enabling the localized number plate region to be processed with any Optical Character Recognition engine; and adapted to generate localized information of the number plate, with or without other relevant composite information of the car (type, possible driver snapshot, shape and contour of the vehicle), in parallel with monitoring traffic; and
an intelligent video analytics application for event detection based on the video feeds.
An intelligent traffic enforcement system as above wherein the process localizes a possible license plate in the field of view of the camera by (a) statistically analysing the correlation and relative contrast between the number plate content region and the background region surrounding this content, (b) a unique signature of the number plate content based on pixel intensity and its vertical and horizontal distribution, and (c) color features of the content and the surrounding background.
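By way of a non-limiting illustration, the following minimal sketch shows how cues (a) and (b) above can be approximated with standard image operations; it is a simplified stand-in using common OpenCV primitives, not the patented localizer, and all thresholds are assumed values.

import cv2
import numpy as np

def candidate_plate_regions(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Cue (b): character strokes give high vertical-edge energy inside the plate
    edges = cv2.Sobel(gray, cv2.CV_8U, dx=1, dy=0, ksize=3)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Cue (a): merge strokes so the high-contrast content region becomes one blob
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Plate-like aspect ratio and size; rejects most unrelated text regions
        if h > 0 and 2.0 < w / h < 6.0 and w * h > 1000:
            boxes.append((x, y, w, h))
    return boxes  # each crop can then be handed to any OCR engine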
An intelligent automated traffic enforcement system as above wherein said video analytic process is carried out in a sequence involving (a) configuration means, (b) incident detection means, (c) incident audit means, (d) report generation means, (e) synchronization means and (f) user management means.
An intelligent automated traffic enforcement system as above wherein said configuration means, adapted to configure parameters for incident detection and management, comprises (i) camera configuration means, (ii) means for providing virtual loops in regions where monitoring is required, (iii) means for setting time limits for the monitoring activity, (iv) means providing a feed indicative of the regular traffic moving directions for each camera, (v) means for setting speed limits to detect over-speeding vehicles, and (vi) means for setting the sensitivity and duration determining traffic abnormality and congestion.
An intelligent automated traffic enforcement system as above wherein said incident detection means is adapted to detect deviations from the set parameters, analyze the appropriate video feed and check for offences, involving (a) recording by way of saving video feeds from various traffic locations of interest, (b) generating alarms, including visual and/or audible alerts and/or notifications, upon detection of any incident involving a traffic violation, and (c) registering the incident against the extracted license plate number of the violating vehicle.
An intelligent automated traffic enforcement system as above wherein said incident audit means comprises:
filter means adapted to reach the incident if the incident is an archived incident, and, in the case of a live incident, means for viewing its details;
means for generating details of the incident, a link to the incident video and a link to the license plate image of the vehicle;
means for verifying the incident by playing the video and verifying the vehicle's registration number by viewing the license plate image, with means to enter the correct vehicle number against the incident image if the extracted license plate number is incorrect;
means for updating the incident status from "Pending"/"Acknowledged" to "Audit" and saving it into the database; and
means to enter a remark about the action taken while auditing the incident, the remark finally being saved in the database, with possible re-verification, for future reference.
An intelligent automated traffic enforcement system as above wherein said incident reporting means comprises means for automated generation of incident detail reports, incident summary reports and offence reports.
An intelligent automated traffic enforcement system as above wherein said
synchronization means includes means adapted for synchronization with handheld
device applications.
An intelligent automated traffic enforcement system as above wherein said user management means includes an interface for administrative functions including (a) user creation and management, (b) privilege assignment and (c) master data management.
The above disclosed invention thus includes an advancement based on bandwidth-adaptive data transfer with predicted optimal bandwidth sharing among multiple data transfer processes for low or moderately priced systems. During data upload to the central storage system, each server monitors not only the available bandwidth but also the in-flow rate of each channel into the server separately, and accordingly adjusts the upload rate for any particular channel, without compromising the subjective fidelity of the data and without affecting the speed and performance of the other channels being processed, whether by multiple networked servers or by a single server. The data stream is segmented into variable-sized smaller chunks or clips, and the rate of uploading the clips to the central storage is adjusted depending on the available network bandwidth and the data inflow rate for that particular channel, which in turn depends on the scene activity or content characteristics. Calculating the data upload rate as a function of both system capacity and the incoming data accumulation rate is novel and unique, and utilizes the system resources in an optimal way. Moreover, the whole architecture is protected from any single-point failure of any component in the network (server, storage, and others), as explained below.
An advancement is proposed under the present invention wherein the fail-safe mechanism is designed without a central server and without support from any dedicated failover or mirror server. Instead of allocating a particular data source (e.g., a camera or other sensor) to a particular server for recording of data (e.g., video or other data types), it is allocated to a 'Server group' containing multiple servers. The members of the group continuously and mutually exchange their capacity information amongst themselves and automatically share the load according to their capacity. In case of breakdown of one or more servers, the remaining members automatically detect it and share the load of the failed server(s), without any central control and without support from any fail-over or mirror server. This eliminates the need for a costly failover or mirror server, and the load is always evenly distributed as per the capacity of the individual server hardware. This unique advancement serves as an example of cooperative social networking implemented at machine level.
Also disclosed is an enhanced multi-channel data aggregation technique for data transmission over low and variable bandwidth communication networks, which also avoids inter-channel interference. While transmitting multi-channel video over a low and variable bandwidth network link, the channels are combined into a single-channel video, frame by frame, and the transmission bit rate is then controlled to avoid jittery video at the other end and interference between individual channels. It also avoids starvation of any single channel. In this process, the underlying data compression algorithm is intelligently handled without affecting the decoding process with a standard equivalent decoder. For example, in the case of video, the motion vector generation step of the underlying MPEG-type compression is intelligently controlled so that no motion vector crosses over the intra-frame boundary in the combined frame. This eliminates interference between any two channels' data frames within the combined frame. This technique of bandwidth-adaptive multi-channel data transfer without inter-channel interference is also a clear advancement in the related art achieved by the present invention.
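A minimal sketch of the 'join' half of this mechanism is given below; the 2x2 layout and tile dimensions are assumed values, and the patented motion-vector containment, which happens inside the encoder, is only indicated in the comments.

import numpy as np

def join_frames(frames, tile_h=240, tile_w=320, grid=(2, 2)):
    # Assumes each frame has already been scaled to (tile_h, tile_w, 3)
    rows, cols = grid
    combined = np.zeros((rows * tile_h, cols * tile_w, 3), dtype=np.uint8)
    for idx, frame in enumerate(frames[: rows * cols]):
        r, c = divmod(idx, cols)
        # Each channel occupies its own non-overlapping tile; keeping motion
        # vectors inside a tile (done in the encoder) is what prevents any
        # interference between channels in the combined frame
        combined[r*tile_h:(r+1)*tile_h, c*tile_w:(c+1)*tile_w] = frame
    return combined

A standard decoder sees one ordinary video stream; the receiver splits the tiles back out simply by cropping the known tile rectangles.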
The invention also proposes a monolithic architecture by integrating video analytics functionalities as an integral part of the proposed Video Management System architecture, with the same architectural and design philosophy; hence the overall architecture is called a truly Intelligent Video Management System architecture. In this architecture a Controller module controls the rate at which video frames are supplied to the different analytics engines. The Controller uses a novel technique to control the rate of decoding the video frames and sending them to the Analytics engine for content analysis, based on the computational bandwidth available and also on the current scene complexity measure as received from the Analytics engines themselves. Hence, the number of frames decoded and sent per second for each video channel is individually and automatically controlled depending on the requirement of the Analytics engine and also on the computational bandwidth available in the system at any given point of time. This adaptive frame rate control mechanism for analytics processing based on scene complexity is unique and a clear advancement in the related art.
The present invention further discloses an advancement in the process for analyzing moving image sequences, which comprises applying an automatic, adaptive, unified framework for accurate predictive colour background estimation, using neighbouring coherent colour and inter-frame colour appearance correlation, under severe natural conditions such as shadow, glare, colour changes due to varying illumination, the effect of lighting conditions on colour appearance, and electronically induced noise (e.g. shot noise, but not limited thereto), to obtain more accurate object shape, contour and spatial position. With the present invention, the object detection and analysis process can be accelerated and the foreground selection accuracy can be improved. Using this advanced method, detected objects can be characterized, classified, tracked and correlated to identify different events in any natural video sequence under various demographic and environmental conditions.
The invention further enables advancements in the Static Foreground Pixel estimation technique, using multi-layer hierarchical estimation to identify static objects in a video by aggregation of static pixels in parallel with other moving colour objects in the scene. The process involves background scene estimation, foreground-background segmentation, short-time still background estimation, static foreground pixel estimation and then static object generation. The proposed technique is thus an advancement in the related art, and it gives much more control over the process of distinguishing foreground pixels (of the static object) from the background pixels.
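The following is a simplified two-layer stand-in for this multi-layer scheme, assuming OpenCV's stock MOG2 background subtractor for both layers; it illustrates only the idea of aggregating pixels that are foreground yet no longer moving, not the patented estimator.

import cv2
import numpy as np

# Slow model keeps the original background; fast model absorbs anything that
# stops moving, so their disagreement marks stopped (static) foreground
slow = cv2.createBackgroundSubtractorMOG2(history=5000, detectShadows=False)
fast = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
evidence = None  # per-pixel age counter accumulating static-foreground evidence

def static_foreground(frame, frames_to_confirm=150):
    global evidence
    fg_slow = slow.apply(frame)   # moving plus stopped objects
    fg_fast = fast.apply(frame)   # moving objects only
    static = cv2.bitwise_and(fg_slow, cv2.bitwise_not(fg_fast))
    if evidence is None:
        evidence = np.zeros(static.shape, dtype=np.int32)
    # Aggregate static pixels over time; reset wherever motion reappears
    evidence = np.where(static > 0, evidence + 1, 0)
    # Pixels static long enough can then be grouped into static-object blobs
    return (evidence >= frames_to_confirm).astype(np.uint8) * 255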
The present invention is also directed to a method to enhance the efficiency of extracting face regions from a sequence of video frames. Depending on the availability of computational bandwidth, the number of iterations and pixel shifts required in the proposed technique is controlled with the help of a look-up table. This helps in striking a balance between the computational requirement and the accuracy of face detection. In a multi-channel, multiple-analysis-process system, this advanced technique can be used as a cooperative process coexisting with other compute-intensive processes. In the proposed technique, the search space is reduced by considering the motion vector and sliding the window only in the blob regions where motion is detected. First, the average time t to analyze an image on the host machine is calculated, and for subsequent frames the pixel shifts and number of iterations are calculated from two look-up tables to suit the computational bandwidth. To increase the accuracy, a second pass over the probable face regions detected by the first pass is performed. This concept of automatically increasing the accuracy of data analysis depending on the available computational bandwidth is novel and unique.
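A hedged sketch of these two ideas follows: searching only inside motion blobs, and trading accuracy for speed via a look-up table keyed on the measured analysis time. The cascade file (as shipped with the opencv-python package) and the table values are illustrative assumptions, not values taken from the specification.

import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# time-budget cap (ms) -> (scaleFactor, minNeighbors): coarser search when slow
BUDGET_TABLE = [(15, (1.05, 5)), (40, (1.15, 4)), (10**9, (1.30, 3))]

def faces_in_motion_blobs(frame, motion_boxes, last_ms):
    scale, neighbours = next(p for cap, p in BUDGET_TABLE if last_ms <= cap)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, t0 = [], time.perf_counter()
    for (x, y, w, h) in motion_boxes:   # slide the window only where motion is
        roi = gray[y:y+h, x:x+w]
        for (fx, fy, fw, fh) in cascade.detectMultiScale(
                roi, scaleFactor=scale, minNeighbors=neighbours):
            found.append((x + fx, y + fy, fw, fh))
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    # A second, finer pass over the probable face regions would go here
    return found, elapsed_ms  # feed elapsed_ms back as the next call's last_ms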
The framework disclosed herein can be used for such situations, and also for
integrating multiple heterogeneous systems in a distributed environment. The
proposed architecture is versatile enough to interface and scale it to many other
management systems. By way of a non-limiting example, the disclosure made herein illustrates how the system's architectural advancements can be advantageously employed in an Intelligent Automated Traffic Enforcement System.
The details of the invention and its objects and advantages are explained hereunder
in greater detail in relation to the following non-limiting exemplary illustrations as per
the following accompanying figures:
Brief Description of the Drawings
Fig 1: is a schematic layout of an illustrative embodiment showing an integrated
intelligent server based system of the invention having sensory input/data
acquisition cum recording server group and /or analytics server group adapted to
facilitate fail-safe integration and /or optimized utilization of various sensory inputs
for various utility applications;
Fig 2 : is an illustrative top level view of intelligent video management system with
framework for multiple autonomous system integration;
Fig 3: is an illustration of fail-safe bandwidth optimized recording without any failover support server in accordance with the present invention;
Fig 4: is an illustration of the dataflow diagram from a single video source through the recording server;
Figs. 4A to 4J: illustrate an exemplary "Intelligent Home Security" box involving the system of the invention;
Fig.5 : is an illustration of the single channel data flow in video analytical engine in
accordance with the present invention;
Fig.6: is an illustration of intelligent video analytics server in accordance with the
present invention;
Fig.7 : is an illustration of video management interface functionalities in accordance
with the present invention;
Fig.8: is an illustration of the number plate recognition engine components in
accordance with the present invention;
Fig.9: is an illustration of the localized multiple number plate regions in video images
in accordance with the present invention;
Fig.10: is an illustration of top level system diagram in accordance with the present
invention;
Fig.11: is an illustration of the flow diagram of the surveillance system in accordance with the present invention;
Fig.12: is an illustration of the video analytics application breakdown structure in
accordance with the present invention;
Fig.13: is an illustration of the junction camera set up in accordance with the present
invention;
Fig.14: is an illustration of the junction layout in accordance with the present
invention;
Fig.15: is an illustration of the video recording during working hours in accordance
with the present invention;
Fig.16 : is an illustration of the transition traffic light status in accordance with the
present invention;
Fig.17: is an illustration of the captured number plate in accordance with the
invention;
Fig.18: is an illustration of the incident audit view in accordance with the present
invention.
Detailed Description of the Invention:
Reference is first invited to accompanying figure 1 which shows the broad overview
of an illustrative embodiment showing an integrated intelligent server based system
having sensory input/data acquisition cum recording server group and /or analytics
server group adapted to facilitate fail-safe integration and /or optimized utilization
of various sensory inputs for various utility applications. More specifically, the system
involves the method for bandwidth adaptive data transfer to central storage cluster in
accordance with the present invention. The following description in relation to figures
1 to 7 deals with the utilities of the advancement in an integrated intelligent server
based system and further in relation to figures 8 to 18 further illustrates the manner
of effecting the stated intelligent automated traffic enforcement system in accordance
with the present invention.
As would be apparent from figure 1, the system basically involves a self-reliant group of recording servers (101), a group of analytical servers (102) and an intelligent interface (103). Importantly, said recording servers, apart from being mutually cooperative and self-reliant to continuously monitor and distribute the operative load based on the number of active servers in the group, are also adapted for bandwidth-optimized fail-safe recording (104) and a join-split mechanism for multi-channel video streaming (105).
The analytical servers (102) are also adapted to cater to at least one or more of background estimation (106), identifying moving, static and quasi-static objects (107), enhanced object tracking (108), content-aware resource scheduling (109), a join-split mechanism for sensory data streaming (110) and resource-dependent accuracy control (111).
The various components of the above system adapted to carry out the above advanced functionalities in accordance with the present invention are further outlined and schematically described in Fig 2:
1. Intelligent Video Management System (204)
1.1. Video Recording Server (201)
1.2. Video Management Interface (203)
1.2.1. User management and Client access controller
1.2.2. Event concentrator and Handler (206)
1.2.3. Event distributor
2. Intelligent Video Analytics Server (202)
3. Surveillance Client (207)
4. Web client (207)
5. Mobile device Client (207)
6. Remote Event Receiver (206)
As is clearly apparent from Figure 2, the present system enables seamless and intelligent interconnection of multiple Autonomous Systems (210-01, 210-02, ..., 210-0n). Thus, multiple such Autonomous Systems can simultaneously be used as building blocks for a distributed system spanning wide geographical regions under different local administrative controls, with a centralized view of the whole system from a single point. An Autonomous System (210-01) is considered a system capable of implementing the functionalities and services involving sensory data and/or its analysis.
Also, the system is capable of handling any sensory data/input, and it is only by way of illustration, and not by way of any limitation of the present system, that the various exemplary illustrations hereunder are discussed with reference to video sensory data. The underlying system architecture/methodology is applicable to other sensory data types for a true Intelligent Sensor Management System.
A number of machine vision products spanning the domains of security and surveillance, law enforcement, data acquisition and analysis, transmission of multimedia contents, etc. can be adapted to one or more, or the whole, of the system components of the present invention.
Reference is now invited to accompanying figure 3 which shows, by way of an embodiment, fail-safe bandwidth-optimized recording without any failover support server. As apparent from said figure, the input from the pool of sensors (305) is fed not to any single server but to a group of servers (301). Importantly, a communication channel (303) is provided to carry the inter-VRS communication that forms the team providing failover support without any central management or failover server, while the communication channel (302) is provided to carry data to the central storage, involving the intelligent bandwidth sharing technique of the invention.
The implementation of the Recording System :
The Recording system essentially implements the functionalities and services as
hereunder:
1. Collecting Data real time: Collect data from various images, video and
other sensory sources, both on-line and off-line, archiving and indexing
them to seamlessly map in any relational or networked database in a
fail-safe way making optimal usage of computing, communication and
storage resources, facilitate efficient search, transcoding,
retransmission, authentication of data, rendering and viewing of
archived data at any point of time.
2. Streaming data real time or on Demand: Streaming video and other
sensory content in multiple formats to multiple devices for purposes like
live view in different matrix layout, relay of the content, local archiving,
rendering of the sensory data in multiple forms and formats, etc. by a
fail-safe mechanism without affecting speed and performance of on-
going operations and services.
The Video Recording system is implemented using hardware and software, where the hardware can be any standard computing platform operated under the control of various operating systems like Windows, Linux, MacOS, Unix, etc. Dependence on the hardware computing platform and operating system has been avoided, and no dedicated hardware or communication protocol has been used to implement the system. The Recording server implements an open interface both for input and output (including standard initiatives by various industry consortia such as ONVIF, PSIA, etc.), and can input video feeds from multiple and different types of video sources in parallel, with varying formats including MPEG4, H.264, MJPEG, etc. OEM-specific SDKs to receive video can also be used. The internal operating principle of the Recording server is outlined below:
Recording Server operating principle is adapted for the following:
1. Auto register itself to the IVMS system so that other components like VMS,
Surveillance Clients, other VRSes can automatically find and connect it even
when its IP-address changes automatically or manually.
2. Form a group with other VRS in the system to implement a failover support
without any central control and without support from any dedicated failover
server.
3. Accept request from VMI to add and delete data sources including video
sources like cameras, receive data from those input sources over IP-network
or USB or other connectivity, wired or wireless, using open protocols or SDKs
as applicable for a particular data source.
4. Record the video and other sensory data in local storage, either continuously, or on trigger from external devices (including the data source itself), or on trigger from other components of the Video management system, or on user request, or on a combination of some of the above cases.
5. Intelligently upload the video or other sensory data to a cluster of storage devices, where a cluster consists of one or more network-accessible storages, in an efficient way, giving a fair share to individual data sources, utilizing optimal bandwidth and operating in a cooperative way.
6. Insert information in database so that the data including video data can be
searched easily by any component in the system.
7. Stream the video or other sensory data in their original format or in some
other transcoded format to other devices including the Surveillance clients
when the surveillance client connects it using defined protocol.
Auto registration of servers:
All the servers in the system, including the Recording servers, auto-register themselves by requesting and then obtaining a unique identification number (ID) from the VMI. All the configuration data related to the server, including the identification of the data sources (such as the video sources) it caters to, the storage devices it uses, etc., are stored in the database against this ID. This scheme has the advantage that with only one static IP address (that of the VMI), one can access any component of the Autonomous System (AS), while the IP addresses of the individual hardware components may keep varying.
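A minimal sketch of this registration scheme, with all names assumed, is given below; the point is only that configuration is keyed by the VMI-issued ID rather than by any server's IP address.

import itertools

class VMIRegistry:
    """Stands in for the VMI's registration service (assumed name)."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.config = {}  # server_id -> configuration record

    def register(self, descriptor):
        server_id = next(self._ids)
        self.config[server_id] = {"descriptor": descriptor,
                                  "channels": [], "storage": []}
        return server_id

vmi = VMIRegistry()
vrs_id = vmi.register({"role": "VRS", "ip": "192.168.1.20"})  # IP may change later
# All later lookups use vrs_id, never the (possibly changing) IP address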
Recording Video or other sensory data in local storage and streaming the data to
Client machine:
The cameras, other video sources or sources generating streaming data (henceforth called Channels) can be auto-detected or manually added to the VRS. The details of the channels are stored in the Central Database. Once done, one or more channels can be added to the Recording System. The Recording system thus comprises one or more Recording servers (VRS) and the Central Database Management System. The VRS-es consult the database, learn the details of the system, and record the channel streaming data either continuously or on trigger from any external or internal service, as configured by the user.
The data stream is first segmented into small granular clips or segments of programmable and variable length (usually of 2 to 10 minutes duration), and the clips are stored in the local storage of the server, the clip metadata being stored in the local database, as sketched below.
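The segmentation step can be pictured with the following minimal sketch, in which all names and the persistence call are assumed:

import time

CLIP_SECONDS = 120     # programmable, typically 2 to 10 minutes
local_clip_table = []  # stands in for the local metadata database

def segment_stream(frame_source, channel_id):
    clip, clip_start = [], time.time()
    for frame in frame_source:
        clip.append(frame)
        if time.time() - clip_start >= CLIP_SECONDS:
            path = f"/local_storage/{channel_id}_{int(clip_start)}.clip"
            # write_clip(path, clip) would encode and persist the segment here
            local_clip_table.append({"channel": channel_id,
                                     "start": clip_start, "path": path})
            clip, clip_start = [], time.time()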
Reference is invited to accompanying figure 4 which shows the dataflow mechanism, in accordance with the invention, from a single video source through the recording server. As apparent from Figure 4, the sensory data stream, viz. video (405), is fed to a data segment generator (401), stored in segments in local storage (402/403) and thereafter uploaded through the data upload module (404) to the central storage (406/407).
Any external component of the system can query the VRS to learn the details of the channels it is using and obtain the data streams for purposes like live view, relaying to other devices, etc., using a networked mutual client-server communication protocol.
Bandwidth adaptive data uploading to central storage system
In the system of the invention, an efficient technique has been designed to transfer video or other sensory data received from the channels to the central storage system via the local storage. Instead of allocating a particular data source (e.g., a camera) to a particular server (dedicated point-to-point) for recording of data (e.g., video), it is allocated to a 'Server group' with multiple servers in the group [Fig 3]. The members of the group exchange their capacity information amongst themselves and share the load according to their capacity. In case of breakdown of one or more servers, the team members share the load of the failed server(s), without any central control and without support from any dedicated fail-over server. For data uploading, each server monitors not only the available bandwidth but also the data inflow rate of each channel into the server, and accordingly adjusts the upload rate for each individual channel. For this purpose the data stream is segmented into variable-sized clips, and the rate of uploading the clips to the central storage is adjusted depending on the available network bandwidth and the data inflow rate for that particular channel [Fig 4]. As shown in the figure, the sensor data stream (405) is segmented in the data segment generator (401), stored in local storage (402, 403) and thereafter sent via the data upload module (404) to the central storages (406/407).
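The rate rule itself is not quoted in the specification; the following sketch shows one plausible reading, in which each channel's upload rate follows both the spare network bandwidth and that channel's own inflow rate, so that a busy channel is drained faster without starving the others and no backlog grows:

def upload_rate_bps(available_bps, channels):
    """channels: {channel_id: inflow_bps measured at the server}."""
    total_inflow = sum(channels.values()) or 1
    rates = {}
    for ch, inflow in channels.items():
        share = available_bps * inflow / total_inflow  # proportional share
        # Never upload slower than data accumulates, or the backlog grows
        rates[ch] = max(share, inflow)
    return rates

# e.g. 50 Mbps spare capacity shared between a quiet and a busy camera:
print(upload_rate_bps(50e6, {"cam1": 2e6, "cam2": 8e6}))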
Implementing fail-over support without any dedicated failover or mirror server and without central control
The system of the invention is further adapted to provide backup support in case of server failure without the involvement of any special, independent, standby support server. Traditionally (in the prior art), dedicated fail-over servers are used which sense the heartbeat signals broadcast by the regular servers. Once the heartbeat is found missing, the failover server takes up the task of the failed server. This technique is inefficient, as it not only blocks resources as dedicated failover servers but also cannot utilize the remaining capacity of the existing servers for backup support. Also, failure of the failover server itself jeopardizes the overall failover support system.
In the proposed system the recording servers exchange information amongst themselves so that each server knows the leftover capacity and the channel information of every other server. In case of server failure, the remaining active servers distribute the load amongst themselves, as sketched below.
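A minimal sketch of such a redistribution, with all names assumed, is given below; because every surviving member runs the same deterministic computation on the same mutually exchanged data, no central controller is needed.

import heapq

def redistribute(orphan_channels, peers):
    """peers: {server_id: leftover_capacity}; orphan_channels: {id: load}."""
    heap = [(-cap, sid) for sid, cap in peers.items()]
    heapq.heapify(heap)  # survivor with most headroom comes out first
    assignment = {}
    for ch, load in sorted(orphan_channels.items(), key=lambda kv: -kv[1]):
        cap, sid = heapq.heappop(heap)
        assignment[ch] = sid               # assign to most spare capacity
        heapq.heappush(heap, (cap + load, sid))  # headroom shrinks by the load
    return assignment

# Two channels orphaned by a failed server, shared by two survivors:
print(redistribute({"cam7": 3, "cam8": 2}, {"vrs1": 10, "vrs2": 6}))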
The Implementation of the Video Analytics System
The Video Analytics System essentially implements the functionalities as hereunder:
1. Data Content Analysis: Intelligently analysing the data, on-line or off-line, to extract the meaningful content of the data: identifying the activities of foreground human and other inanimate objects in the scene from the sensor-generated data, establishing correlation among various objects (living or non-living) in the scene, establishing correlation amongst multiple types of sensory data, and identifying events of interest based on the detected activities; all either automatically or in a user-interactive way, under various demographic and natural real-life situations. Several novelties are described in the relevant sections detailing the data content analysis techniques.
2. Automatic Alert Generation: Generating Alerts, signals, video clips, other
sensory data segments, covering the events automatically as and when
detected.
The Video Analytics system comprises hardware and software, where the hardware
can be any standard computing platform operated under control of various operating
systems like Microsoft Windows, Linux, MacOS, Unix, RTOS for embedded hardware,
etc.
Dependence on hardware computing platforms and operating systems has been
avoided and no dedicated closed hardware needs to be used to implement the
system. At the same time, part or whole of the system can be embedded into other
products with some existing services, without affecting those services.
An example is provided in the form of the "Intelligent Home Security" box shown in Figures 4A to 4J, where specially built hardware is used to provide several services, viz. digital photo-frame, perimeter security, mobile camera FOV recording & relay, live view of cameras, etc.
Referring to FIG. 4A, a schematic diagram of a Networked Intelligent
Villa/Home/Property Monitoring System is shown. All of the intelligent video
management server and intelligent monitoring applications that are described in
previous sections have been embedded into the Videonetics Box. The Box has an
easy to use GUI using touch-screen so that any home/villa/property owner can easily
operate it with minimum button pressing using visual display based instructions only.
The top level systems architecture for the embedded hardware and details of the
components in the hardware system is shown in FIG. 4B.
The following is a micro-architectural component summary for an example of a multi-channel IP-camera solution. Video from the IP-cameras is fed directly to the computer without the requirement of any encoder. There are three options: one, no network switch is required, in which case the motherboard should have multiple Ethernet ports; two, the motherboard has only one Ethernet port and all the cameras are wireless IP-cameras, in which case the motherboard should have 1 x Ethernet port and 1 x WiFi interface; and three, the motherboard has only one Ethernet port and the cameras are wired, in which case a network switch is required as external hardware.
On detection of events the following tasks are performed:
a siren blows;
an SMS/MMS is sent;
the event clip is archived; and
the event clip is also streamed to any designated device over the Internet.
The following Interfaces are required to handle the above tasks: at least one RELAY
O/P for siren drive or DIO for Transmitter interface; and a 3G interface for SMS/MMS
or sending event clip to Cell Phone. Other usual hardware includes:
a. USB;
b. Touch Screen Interface;
c. external storage;
d. 3G dongle, if 3G is not embedded into motherboard;
e. keyboard, if touch screen is not attached; and
f. DVI port for display.
The following is a micro-architectural component summary for an example of a multi-channel analog camera solution. Video from the analog cameras is received by encoder hardware. The encoded RAW image is fed to the computer for processing. The system hardware should be capable of handling the following activities:
1. multi-channel encoding, each channel at 15 - 30 fps for D1 size, but not limited to this; higher frame rates and resolutions may be used as long as the computing bandwidth supports video data of that frame rate and resolution
a. Input to encoder: Analog video in NTSC or PAL
b. Output from encoder: YUV or RGB
There are two options:
a. The encoder could be a separate module connected to motherboard
through PCIE
b. The encoder circuitry may be embedded in the mother board
2. On detection of events following tasks are performed:
a. A siren blows
b. An SMS/MMS is sent
c. Event clip is archived
d. Event clip is also streamed to any designated device over Internet
The following hardware Interfaces are required to handle the above tasks:
a. At least one RELAY O/P for siren drive or External Transmitter interface
(DIO)
b. 3G interface for SMS/MMS or sending event clip to Cell Phone.
c. Ethernet for remote access to the system
3. Other usual hardware:
1. USB :
a. Touch Screen Interface
b. External Storage
c. 3G dongle, if 3G is not embedded into motherboard
d. keyboard if touch screen is not attached
e. DVI port: for Display
Referring to FIG. 4C, a top level heterogeneous system architecture (both IP and
analog cameras) is illustrated. Referring additionally to FIGS. 4D-4J an operational
flow by a user and representative GUI using a touch panel display of the intelligent
monitoring system is detailed in a step-by-step flow.
Thus, a new and improved intelligent video surveillance system is illustrated and described. The improved intelligent video surveillance system is highly adaptable, can be used in a large variety of applications and can be conveniently adapted to a variety of customer-specific requirements. Also, the intelligent video surveillance system is automated, intelligent, and requires minimal or no human intervention. Various changes and modifications to the embodiment herein chosen for purposes of illustration will readily occur to those skilled in the art. To the extent that such modifications and variations do not depart from the spirit of the invention, they are intended to be included within the scope thereof.
The Analytics Engine
Various rule sets for inferencing the dynamics of the data (interpretation of Events)
are defined inherently in the system or they can be defined by the users. An Analytics
engine detects various activities in the video or other sensory data stream and on
detection of said activities conforming to one or more Events, sends notification
messages with relevant details to the recipients. The recipients can be the VMI, the
central VMS or Surveillance Clients or any other registered devices. To perform the
above tasks, the scene is analyzed and the type of analysis depends on the type of
events to be detected.
The data flow within the Analytics Engine for a single channel, taking a video stream as the channel data, is schematized below [Fig. 5]. The functionalities of the various internal modules of the Analytics Engine and other components are described below, taking a video channel as an example of a sensory data source.
(A) Scene Analyzer (501): The Scene analyzer is the primary module of the Analytics engine, and of the IVAS as well. Depending on the Events to be detected, various techniques have been developed to analyze the video and sensory data content and extract the objects of interest in the scene or the multi-sensory acquired data. Importantly, the scene analyzer is adapted to analyze the content of the media (e.g., video) based on an intelligent scene-adaptive colour-coherent object analysis framework and method. The implementation is adaptive to the availability of computational bandwidth and memory, and the processing steps are dynamically reconfigured. For example, as described further in detail hereunder, a trade-off is made automatically by the Analytics engine to strike a balance between the accuracy of face capture and the CPU clock cycles available for processing.
The Scene Analyzer generates meta-data against each frame supplied to it for analysis. It also computes the complexity of the scene using a novel technique and dynamically reconfigures the processing steps in order to achieve optimal analysis results depending upon the availability of the computational and other resources for on-line and real-time detection of events and follow-up actions. It feeds the metadata, along with the scene complexity measure, to the Controller, so that the Controller can decide the optimal rate at which the frames of that particular video channel should be sent to the Analytics engine for processing. This technique is unique and saves computational and memory bandwidth for decoding and analysis of the video frames.
(B) Rule Engine (502): The Rule Engine keeps a history of the metadata and correlates the data across multiple frames to decide the behavioural patterns of the objects in the scene. Based on the rules, various applications can be defined; for example, it is possible to detect whether a person is jumping a fence, whether there is a formation of crowd, or whether a vehicle is exceeding the speed limit, as sketched below.
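By way of an assumed illustration relevant to the traffic application (the geometry and calibration values are not taken from the specification), the following sketch correlates an object's tracked positions across frames to estimate its speed between two virtual lines a known distance apart:

def over_speed_event(track, line_a_y, line_b_y, metres_between, fps, limit_kmph):
    """track: list of (frame_no, y_pixel) centroids for one tracked vehicle."""
    t_a = next((f for f, y in track if y >= line_a_y), None)  # crosses line A
    t_b = next((f for f, y in track if y >= line_b_y), None)  # crosses line B
    if t_a is None or t_b is None or t_b <= t_a:
        return None
    seconds = (t_b - t_a) / fps
    kmph = metres_between / seconds * 3.6
    return kmph if kmph > limit_kmph else None  # flag only over-speeding

# e.g. 40 frames at 25 fps to cover 20 m -> 45 km/h, flagged over a 40 limit:
print(over_speed_event([(i, i * 10) for i in range(100)], 100, 500, 20.0, 25, 40))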
(C) Event Decider (503): The behavioural patterns, as detected by the Rule Engine, are analyzed by this module to detect various events in parallel. The Events can be inherently defined or configured by the user. For example, if there is crowd formation only in a specific zone whereas other areas are not crowded, that may be defined to be an Event. Once an Event is detected, a message is generated describing the type of the event, the time of occurrence of the Event, the location of occurrence of the Event, the video clip URL, etc.
The Event decider can also control any external device, including a PTZ camera controller, which can focus on the region where the event has taken place for better viewing of the activities around that region or for recording the scene in a close-up view. One such advanced framework is detailed hereunder as enhanced object tracking, where the utility of an Object tracking system is enhanced using a novel technique employing a PTZ camera along with the Object tracking system.
The Analytics Engine Controller
A Controller module (602), as shown in Figure 6, has been designed which can receive multiple video channels, possibly in some compressed form (e.g., MJPEG, Motion JPEG2000, MPEG, H.264, etc. for video, and a relevant format for other sensory data, such as MP4 for audio, for example but not limited to these), and feeds the decoded video frames to the Analytics engine. The Controller uses an advanced technique to decide the rate of decoding of the frames and to feed the decoded video frames of multiple channels to the Analytics engine in an optimal way, so that the number of frames sent per second for each video channel is individually and automatically controlled depending on the requirement of the Analytics engine and also on the computational bandwidth available in the system at any point of time. The technique has been described in detail in relation to video-content-driven resource allocation for analytical processing.
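The specification does not give the control law; the following sketch shows one plausible heuristic (all constants assumed) in which the decode rate for a channel rises with the scene-complexity measure reported by its Analytics engine and falls as CPU headroom shrinks:

def decode_fps(base_fps, scene_complexity, cpu_headroom, min_fps=1.0):
    """scene_complexity and cpu_headroom are both normalised to [0, 1]."""
    # Busy scenes deserve more frames; a starved CPU throttles every channel
    target = base_fps * (0.25 + 0.75 * scene_complexity) * cpu_headroom
    return max(min_fps, min(base_fps, target))

# Quiet corridor on a loaded host vs. busy junction on an idle host:
print(decode_fps(25, scene_complexity=0.1, cpu_headroom=0.4))  # ~3.25 fps
print(decode_fps(25, scene_complexity=0.9, cpu_headroom=1.0))  # ~23 fps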
The Controller also streams the video along with all the Video Analytics data (existing
configuration for Events, Event Information, video clip URL etc), either as individual
streams for each channel, or as a joined single stream of video data for all or user
requested channels. A novel technique for joining the video channels and
transmitting the resulting combined single channel over IP network has been
deployed to adapt to varying and low bandwidth network connectivity. The technique
is described in detail in relation to video channel join-split mechanism for low
bandwidth communications.
The Controller can also generate Events on its own for the cases where Events can be generated without the help of the Video Analytics engine (e.g., loss of video, camera tampering as triggered by the camera itself, motion detection as intimated by the camera itself, and so on).
The implementation of Video Management Interface (VMI)
The Video Management Interface (702) is shown in figure 7; it interfaces between an individual Autonomous System and the rest of the world. It also acts as the coordinator among the various other components within a single Autonomous System, viz. the Video Recording System (703), the Intelligent Video Analytical Server (704), the Surveillance Clients (701), the Remote Event Receiver (705), etc. It essentially implements the functionalities including:
1. Filtering and need-based transmission of data: Distribution of whole or part of the collected sensory data, including the video and other sensory data segments generated as a result of detection of an Event by the Analytical engine, to the right recipient at the right point of time, automatically or on user interaction.
2. Directed distribution of Alerts: Distributing Event information in various
digital forms (SMS, MMS, emails, Audio alerts, animation video, Text,
illustrations, etc. but not limited to) with or without received data segments
(viz, video clips) to the right recipient at the right point of time
automatically or on user interaction.
3. Providing a common gateway for heterogeneous entities: Providing a unified
gateway for users to access the rest of the system for configuration,
management and monitoring of system components.
The Interface operating principle involved in the system is discussed hereunder:
1. Auto register itself to the IVMS system so that other components like
Surveillance Clients (including Web Clients and Mobile Clients), Remote Event
Receivers, can find and connect it even when its IP-address changes;
2. Accept request from Surveillance clients to add and delete data sources like
cameras to the VRSes and IVASes and relay the same to the corresponding
VRSes and IVASes.
3. Receive configuration data from the Surveillance clients and feed it to the intended components (viz. VRS, IVAS, DBMS, camera, etc.) of the system. For the VRS, the configuration data includes recording parameters, database paths, retention period of recording, etc. For the IVAS, it is the Event and Application settings, the Event clip prologue, after-event and lifetime durations, etc.
4. Receive Event information from IVAS on-line and transmit it to various recipients including Remote Event Receivers. Fetch outstanding Event clips, if any, from IVAS; outstanding clips may remain inside IVAS in case of a temporary network connectivity failure to IVAS.
5. Periodically receive heartbeat signals along with status information from all
the active devices, and relay that to other devices in the same or in other
networks.
6. Serve the Web clients and Mobile embedded clients by streaming Live video,
Recorded Video or Event Alerts at the right time.
7. Join multiple channel video into a single combined stream to adapt to variable
and low bandwidth network. A novel technique for joining the video channels
and transmitting the resulting combined single channel over IP network has
been deployed to adapt to varying and low bandwidth network connectivity.
The technique is described in relation to the video channel join-split mechanism for low bandwidth communication.
8. Enable the user to search for the recorded video and the Event clips based on various criteria, including date, time, Event types and video channels.
9. Enable the user to perform a user-interactive Smart search to filter out a desired segment of video from the video database.
In essence, once the Interface (702) is installed, the VRS (703), IVAS (704) and other components of the system can be configured, and the user can connect to the System. However, at run time all the VRS and IVAS can operate on their own and do not require any service from the VMI, unless some System configuration data has been changed.
The independence of the servers from any central controller for their routine operation gives unprecedented scalability with respect to an increase in the number of servers, because it does not add any extra load to any component other than the server itself. This is a unique advancement whereby the Video Management Server Interface acts only as a unified gateway to the services being executed in other hardware devices, and only for configuration and status-updating tasks. This opens up the possibility of keeping the user interface software unchanged while integrating new types of devices. The devices themselves can supply their configuration pages when the VMI connects to them for configuration. Similarly, the messages generated by the servers can also be shown on the VMI panel seamlessly.
The Video Management Client (701), Web client (707), Mobile device embedded client (708)
All the above client modules in essence implement the functionalities including:
Providing Live view or recorded view of the data stream: Enabling user to view
camera captured video in different matrix layouts, view other sensory data in a
presentable form, recorded video and other data search and replay, Event clips
search and replay, providing easy navigation across camera views with help of
sitemaps, PTZ control, and configuring the system as per intended use.
The VMS system can be accessed through the standalone surveillance client or any
standard Internet browser can be used to access the system. Handheld devices like
Android enabled cell phone or tablet PCs can also be used as a Client to the system
for the purposes (wholly or partially) as mentioned above.
The Remote Event receiver (705)
The RER (705) shown in Figure 7 is a software module which can be integrated into any other module of the IVMS. The Remote Event Receiver is meant to receive and display messages and ALERTs from other components, which are multicast or broadcast. These messages include Event ALERTs, ERROR status from VRS or IVAS, operator-generated messages, etc. The messages can be in video as well as audio form, or any other form as transmitted by the Video management system components, and the resulting response from the RER depends on the capability and configuration of the hardware where the RER is installed. When integrated with the Surveillance clients (IVMC), the IVMC can switch to RER mode and will then respond to ALERTs and messages only.
The Central VMS system
The Central VMS System (204 in Figure 2) is adapted to serve as a gateway to the components of any Autonomous System (210-01...210-0n). It also stores the configuration data for all ASes in its centralized database. It is possible to integrate otherwise independently running VMS systems into a single unified system by including the Central VMS in a server and configuring it accordingly.
The Sitemap Server
A Sitemap server is included within each Autonomous System (210-01...210-0n) and
also within the Centralized VMS (204 in Figure 2). The Sitemap server listens to
requests from any authorized component of the system and responds with positional
data corresponding to any component (camera, server, user, etc.) which is linked
to the site map. The site map is multilayered, and components can be linked to any
spatial position of the map in any layer.
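A minimal sketch of this request / response behaviour follows, assuming an in-memory positional store. The record fields and component identifiers are hypothetical; the specification states only that a linked component has a layer and a spatial position within the multilayered map.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MapPosition:
    layer: int  # layer of the multilayered site map
    x: float    # spatial position within that layer
    y: float

# Hypothetical in-memory store keyed by component id (camera, server, user).
_positions = {
    "camera-07": MapPosition(layer=2, x=0.41, y=0.63),
    "vrs-01": MapPosition(layer=1, x=0.10, y=0.20),
}

def lookup_position(component_id: str) -> Optional[MapPosition]:
    """Respond to an authorized request with the component's map position."""
    return _positions.get(component_id)
```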
The above describes the framework, architecture and system-level components of
the intelligent system of the invention. The technology involved in the
development of the system can be used to integrate various other types of
components not shown or discussed above. For example, an Access Control System or
a Fire Detection System can be integrated similarly to a VRS or IVAS, configured
using the IVMC and VMI; their responses or messages can be received, displayed and
responded to by the IVMC or RER, stored as is done for event clips or video
segments, and searched on various criteria.
The system of the invention detailed above is further versatile enough to
interface with and scale to many other management systems, such as the intelligent
automated traffic enforcement system discussed in later sections.
Reference is now invited to accompanying Figures 8 to 18, which illustrate in
detail an intelligent and automatic traffic enforcement system built in accordance
with the advancement of the present invention, including components/features/
stages 2401 to 2409 in Figure 8, 2501 to 2512 in Figure 9, 2601 to 2605 in Figure
10, 2701 to 2704 in Figure 11, and 2801 to 2818 in Figure 12.
Traffic signal violation is a burning traffic enforcement issue throughout the
world. Ground realities are harsh: road accidents and traffic jams are its
principal effects. Improvements are being attempted in all possible arenas, but
they are very costly and consume considerable human resources. The proposed system
describes an Intelligent Automated Traffic Enforcement System.
Following are the regular challenges for the road transportation department at the
different road junctions:
Ensuring that the rules and regulations are followed by each and every vehicle
crossing any junction at any point of time.
Enhancing road safety for all types of vehicles as well as for pedestrians.
Providing an intelligent automatic enforcement system for surveillance at each
traffic junction and for the on-field enforcement team, allowing them to book
offences and access other Transport Department applications' events in real time.
Maintaining smooth traffic flow within the city / country.
The present advancement is targeted at the following:
CCTV IP cameras and video analytics applications using virtual loops (as opposed
to any physical magnetic loop) for automatic detection of offences like 'red
signal violation', 'over speeding' and 'wrong way vehicle movement' at every
important junction, integrated with the remote traffic control room.
A smartphone solution for the on-field enforcement team allowing them to book
offences and access other Transport Department applications' events via GPS / GPRS
enabled mobile / handheld devices.
Setting up of the control room for backend activities with complete hardware,
software and networking.
Additional data center hardware for the Road Transportation Department to store
evidence / archive data for all the relevant events.
Connectivity management in real time by data transfer between the above
components to ensure synchronized communication.
The proposed intelligent automated traffic enforcement system of the present
invention can help the traffic management department to identify violations by
traffic department personnel remotely observing, on a computer monitor, the video
feeds coming to the control room from the junction. Alternately, a violation can
be detected automatically by the proposed system, which then alerts traffic
personnel without their being physically present at the traffic junction or
sitting in the control room. The proposed system does not require any specialized
or proprietary camera to detect these violations; it analyzes video feed from
traditional security cameras in a computer to detect the events. Security cameras
are installed at strategic locations around the traffic junction in such a way
that the video analytic engine can capture and process the video to detect the
violating vehicles and automatically find the identity of the vehicle, such as
number plate, shape, size, color, logo and type of the vehicle, and possibly a
snapshot of the driver if visible. The engine then automatically stores this
information and the images of those vehicles in the event log database. The
traffic inspector can identify possible violations like red light violation,
over-speeding, wrong-way movement, riding without a helmet, driving without a seat
belt, using a mobile phone while driving, a motorcycle with more than two
passengers, etc., either through the automated video analytic application or
manually through a computer monitor. Images can be manually tagged with comments
by traffic personnel or automatically tagged with the possible violation type, and
can be manually or automatically sent to the handheld devices of the on-field
enforcement team through a communication network for subsequent physical action;
they are also kept in the database for future use.
Exemplary illustrative components of the proposed solution:
The proposed solution consists of the following major components.
Number plate recognition engine (NPR - Engine)
Object presence detection engine (OPD - Engine)
Control Room setup and handheld devices.
Installation of 'CCTV IP Cameras' for the Video Surveillance System.
Synchronized Communication of Traffic Junction to Control Room and / or Traffic
Junction to handheld device.
Automatic event detection by intelligent Video Analytical Application software.
Detected Event Recording as evidence for future use.
Communication between event server and peripheral devices of the system.
The top level Number Plate Recognition (NPR) Engine flow chart is provided in
accompanying Figure 8. The method to localize multiple number plate regions in
video images is shown in accompanying Figure 9.
As would be apparent, the localization technique shown in Figure 9 basically
proceeds as hereunder:
Find the average height (h) and width (w) of a typical character in the field of view.
Compute a gray image G, where each value G(x,y) is derived from A(x,y), the
average of all the pixels in a 2-dimensional window of size (h, w) centred at
(x,y).
Binarize gray image G to a binary image B.
Extract possible characters in the image plane and group them to construct number
plates as follows.
Find all the connected components in B and remove the components significantly
smaller than a typical character of size (h, w). This removes a significant amount
of non-character regions, selecting connected components representing possible
characters.
Compute the standard deviation (σ) of the grey values of the pixels in a region of
G representing a possible character. Ignore connected components with too small a
σ value, to further remove non-character regions.
Depending upon the quality of the image, a single character can sometimes be split
into multiple subcomponents. Merge such possible subcomponents: two subcomponents
are merged if their centre points fall on a vertical line and the centre distance
is small.
Discard possible isolated characters: in a true number plate region there will be
a number of contiguous characters.
Group the characters whose centre points belong to the same horizontal line. Find
all such groups, and discard the groups which have significantly fewer characters
than a typical number plate.
Check the previously deleted list of possible isolated characters and determine
whether the inclusion of any such character in a nearby group can form a possible
number plate.
Depending upon the type of font and the number plate writing style, grouped
characters can sometimes be split into multiple sub-groups. Merge such possible
sub-groups: two sub-groups are merged if they fall on a horizontal line (the case
of a split group) or a vertical line (the case of a multi-line number plate).
Compute the color feature of each character in a group and of the overall group.
By comparison of the color features, validate all inner characters of the group.
Depending on the validity of the majority of the characters, finally validate the
possibility of the group being a number plate.
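The following Python / OpenCV sketch condenses the core of the above steps: local-average comparison, character-sized connected components, σ filtering and horizontal grouping. The window size and thresholds are illustrative only, and the merging of split characters, recovery of isolated characters and colour validation described above are omitted for brevity; this is a sketch of the technique, not the full method.

```python
import cv2
import numpy as np

def localize_plates(frame_bgr, h=14, w=10, min_chars=4, min_sigma=12.0):
    """Sketch of the localization steps; h, w and thresholds are illustrative."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # A(x, y): average of all pixels in an (h, w) window centred at (x, y).
    avg = cv2.blur(gray, (w, h))
    # Binarize G by comparing each pixel against its local average.
    binary = (gray.astype(np.int16) - avg.astype(np.int16) > 10).astype(np.uint8)
    # Connected components as candidate characters.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        x, y, cw, ch, _ = stats[i]
        if cw < w // 2 or ch < h // 2:
            continue  # significantly smaller than a typical character
        if gray[y:y + ch, x:x + cw].std() < min_sigma:
            continue  # too small a sigma: likely a non-character region
        candidates.append((centroids[i][1], (x, y, cw, ch)))
    # Group characters whose centre points lie on the same horizontal line.
    candidates.sort(key=lambda c: c[0])
    groups, current = [], []
    for cy, box in candidates:
        if current and abs(cy - current[-1][0]) > h / 2:
            groups.append(current)
            current = []
        current.append((cy, box))
    if current:
        groups.append(current)
    # Discard groups with significantly fewer characters than a plate.
    plates = []
    for g in [g for g in groups if len(g) >= min_chars]:
        pts = np.array([[x, y] for _, (x, y, cw, ch) in g] +
                       [[x + cw, y + ch] for _, (x, y, cw, ch) in g],
                       dtype=np.int32)
        plates.append(cv2.boundingRect(pts))  # (x, y, width, height)
    return plates
```

Each returned rectangle can then be handed, as stated in point 14 below, to any OCR device or algorithm.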
The advancement residing in the above method of localization is further discussed
hereunder:
1. Real-time detection of multiple type of traffic enforcement violation in
a single unified architecture.
2. Novel Number Plate localization Algorithm to localize appearance of a
number plate in any part of the video.
3. Filters out other textual and alphanumeric type information from the
video using a unique signature representing number plate regions.
4. Novel Number Plate localization Algorithm to localize appearance of
multiple number plates in different parts of the image for multiple
vehicles at a time.
5. Effective with English alphanumeric characters, independent of the
font, size, style, and color of the characters.
6. A general localization technique, without particularly forcing the
requirement of any reflective coating on the license plate.
7. Detection is performed completely by image processing techniques in
software; does not require any specialized camera particularly built
for number plate recognition.
8. The technique works with any off-the-shelf security camera, analog
or IP.
9. On-line and off-line processing
10. Independent of the speed of the vehicle
11. Lighting condition independent - works in day and night conditions with
sufficient illumination of any type of light (neon, fluorescent, IR, etc.)
12. Does not depend upon color characteristics of the image or video
13. Low computational and memory footprint for real-time implementation
and embedded processing.
14. OCR algorithm independent - The localized number plate region can be
processed by any OCR device or algorithm.
15. Automatic skew detection and correction
16. Processing of the type of vehicle, color of vehicle, logo, make of
vehicle, silhouette of the vehicle, possible driver snapshot, all can be
processed in real time.
An illustrative top level system overview of such a traffic surveillance system is
shown in accompanying Figure 10.
The proposed system thus comprises two main modules, viz. the Video Surveillance
System and the Intelligent Video Analytical Application for event detection. The
Video Surveillance System facilitates monitoring using security cameras at traffic
junctions. The video feeds can be displayed in the control room for monitoring,
and are continuously and automatically recorded, indexed, and properly archived in
databases. The time of recording is configurable at administrator level; it is
typically configured in line with the operation shift / day shift. The Video
Analytics Application supports various functions as shown in the accompanying
figures. Each function consists of various use cases of incident detection and
management. The video analytical process flows in a sequence starting from
Configuration - Incident Detection - Incident Audit - Reporting - Synchronization
- User Management.
Figure 11 illustrates a schematic diagram of the various features in such traffic
surveillance system of the invention.
Figure 12 is a detailed breakdown illustration of the video analytics application
for the purposes of traffic surveillance, violation detection and registration,
and follow-up actions.
Advantageously, the system and method of traffic surveillance, violation detection
and action is adapted to facilitate configuring the parameters for incident
detection and management in the following manner.
Camera configuration: Cameras are added to the configuration server with a
high-resolution image for detailed information, and the applicable application is
started with its event configuration.
Virtual Loop: For each camera at the junction / freeway, the zone to be monitored
is defined using this parameter. It is configured once, before starting system
operations; the rights of modification are available at administrator user level.
The camera is always focused on the zone and keeps capturing video of the "marked"
zone. The zone is marked so as to capture the maximum of the traffic in one
direction. A zone is defined separately for each camera. A typical configuration
is shown in Figure 13.
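A minimal sketch of how such a marked zone might be tested at run time follows, assuming the virtual loop is stored as a polygon on the image plane; the coordinates below are placeholders, not values from the specification.

```python
import cv2
import numpy as np

# Hypothetical virtual loop for one camera: a polygon marked once on the
# image plane over the monitored zone (coordinates are placeholders).
VIRTUAL_LOOP = np.array([[120, 300], [520, 300], [560, 470], [80, 470]],
                        dtype=np.int32)

def vehicle_in_loop(centroid_xy) -> bool:
    """True when a detected vehicle's centroid falls inside the marked zone."""
    # pointPolygonTest: > 0 inside, 0 on the edge, < 0 outside.
    cx, cy = float(centroid_xy[0]), float(centroid_xy[1])
    return cv2.pointPolygonTest(VIRTUAL_LOOP, (cx, cy), False) >= 0
```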
Time Limit: The application facilitates defining the working hours and / or
non-working hours for the purpose of recording the videos. The rights of
modification in these time limits are available at administrator level. The system
captures and records all the videos from the junction / freeway cameras during
working hours; during non-working hours it captures all the videos and archives
the offences detected.
Traffic Direction: To detect vehicle(s) moving in the wrong direction, the
application facilitates defining the regular traffic-moving direction for each
camera, with a minimum frame rate of 10 FPS.
Speed Limit: To detect over-speeding vehicles crossing the zone, the application
facilitates defining the maximum allowable speed limit for vehicles. An incident
is generated on detecting a vehicle crossing the speed limit (not clubbed with the
red light camera).
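One plausible realization of this check is sketched below, assuming the virtual loop's entry and exit lines are a calibrated ground distance apart. Both constants are assumptions for illustration; the specification does not state how speed is measured.

```python
LOOP_LENGTH_M = 25.0     # assumed calibrated ground distance between lines
SPEED_LIMIT_KMPH = 60.0  # assumed configured maximum allowable speed

def is_over_speed(t_enter: float, t_exit: float) -> bool:
    """Flag a vehicle whose transit time across the loop implies over-speed."""
    dt = t_exit - t_enter          # seconds between the two line crossings
    if dt <= 0:
        return False               # invalid timestamps; no incident raised
    speed_kmph = (LOOP_LENGTH_M / dt) * 3.6  # m/s converted to km/h
    return speed_kmph > SPEED_LIMIT_KMPH
```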
Sensitivity & Duration: To detect traffic congestion or vehicle presence in the
zone (virtual loop), the application facilitates defining the maximum allowable
vehicle presence in percentage and the duration (time) below which it should not
be considered traffic congestion or vehicle presence in a zone (not clubbed with
the red light violation detection or speed violation detection camera).
Incident Detection: Each junction has junction cameras capturing the junction
videos lane-wise and an I/O module monitoring the status of the traffic signal.
The videos from the junction cameras and the status of the traffic signal are sent
to the control room via a dedicated link. The analytical application in the
control room monitors the change in status of the traffic signal; on detecting a
change, it starts analyzing the appropriate video and checks for an offence
happening at the junction. The scenario is explained below for a typical layout of
a 4-way junction. The system can operate on multiple lanes / roads that have a red
signal. A junction layout is shown in Figure 14.
Recording: When system operation starts, the junction cameras start capturing the
video feeds. These videos are saved in the server with a unique serial number,
i.e. a video ID. The serial number is generated using junction ID, camera ID, date
& time and a sequence number. Example: a video coming from junction 1, from the
camera installed in the south direction, on 22 March 2011 from 10:00 a.m., will
have the serial number J01CS20110810600000025. This is interpreted as
J01 - junction with ID number 01
CS - camera installed in the "S" (south) direction
2011 - running year, i.e. 2011
081 - 81st day of the running year, i.e. 22nd March
0600 - time of day in minutes, i.e. 10:00 a.m.
000025 - sequence number
The next consecutive video, starting from 10:06 a.m. on the same day, will have
the video ID J01CS20110810606000025, as an example. However, the format is
customizable as required.
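The illustrated layout can be reproduced as below. This sketch follows the worked example exactly and is only one instance of the customizable format described above.

```python
from datetime import datetime

def make_video_id(junction: int, direction: str,
                  start: datetime, seq: int) -> str:
    """Compose the serial number illustrated above:
    J<junction> C<direction> <year> <day-of-year> <minutes-into-day> <seq>."""
    return (f"J{junction:02d}"
            f"C{direction}"                          # e.g. 'S' = south-facing
            f"{start.year:04d}"
            f"{start.timetuple().tm_yday:03d}"       # 22 March 2011 -> 081
            f"{start.hour * 60 + start.minute:04d}"  # 10:00 a.m. -> 0600
            f"{seq:06d}")

# Reproduces the example from the text.
assert make_video_id(1, "S", datetime(2011, 3, 22, 10, 0), 25) == \
    "J01CS20110810600000025"
```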
An illustrative manner of video recording is shown in figure 15.
The recording module is adapted to also display a message in case any error is
found while playing the video or receiving the video from the camera. Connectivity
errors are also detected, displayed on the screen and stored in the database.
Trigger: The application monitors the status of the traffic lights continuously.
Whenever the traffic light status changes, the change is reported to the control
room. Figure 16 illustrates a traffic light status transition.
Incident Detection: On receiving a trigger from the I/O module, the application
starts analyzing the videos. For example, when TN is green, the traffic moves from
S - N, S - E or S - W, while the traffic in the other directions is standstill as
the traffic signal is red. The application checks for the following events to
detect incidents:
Vehicles violating traffic signals
Traffic congestion
Vehicles crossing defined speed limits
Traffic presence (vehicle density)
On detecting any one of these phenomena, the application raises an alarm and an
incident is generated. The analysis process is activated as sketched below.
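A schematic of this trigger-driven dispatch might look as follows. The checker callables and their wiring are assumptions standing in for the individual analytics; the specification only requires that each lane facing red is analyzed for the events listed above.

```python
from typing import Callable, Dict

def on_signal_change(red_lane_clips: Dict[str, object],
                     checks: Dict[str, Callable[[object], bool]],
                     raise_incident: Callable[[str, str], None]) -> None:
    """On an I/O-module trigger, analyze the video of each lane now facing
    red and generate an incident for every phenomenon detected."""
    for lane, clip in red_lane_clips.items():
        for incident_type, check in checks.items():
            if check(clip):
                raise_incident(incident_type, lane)  # alarm + incident record

# Hypothetical wiring: each checker wraps one analytic from the list above.
# checks = {"SIGNAL_VIOLATION": detect_signal_violation,
#           "CONGESTION": detect_congestion,
#           "OVER_SPEED": detect_over_speed,
#           "VEHICLE_DENSITY": detect_density}
```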
Incident Display: Once an incident is detected, an alarm with visual and sound
effects is generated at the operator's workstation or handheld device. The alerts
and notifications are recorded and stored in the operator's inbox. An alert is
generated when an incident is detected, and a notification is generated after the
alert is detected. The notification gives details of the incident: incident type,
date and time of the incident, junction name (i.e. location of the incident),
camera IP, and a link to the incident image / video for verification. The
notification is shown on the screen and flashes continuously until it is
acknowledged by the operator. The operator can accept or deny the notification
after verifying the video; on denial, the alert / notification is archived and can
be reviewed later.
License Plate Recognition: To register an incident, the application requests the
NPR Engine to extract the license plate number (text) of the violating vehicle.
Figure 17 shows an exemplary captured number plate.
Incident Audit
Incident audit ensures correct enforcement by verifying the incidents and vehicle
numbers. The application keeps raising alarms for incidents; the operator, sitting
in the control room or using a handheld device, audits these incidents by
verifying them against the video / images. The audit is carried out in the
following sequence:
The operator selects an incident, applying suitable filters if it is an archived
incident; for a live incident he double-clicks on the record to view the details.
The system shows the details of the incident, a link to the incident video and a
link to the license plate image of the vehicle.
The operator verifies the incident by playing the video, and the vehicle's
registration number by viewing the license plate image.
If the license plate number is incorrect, the operator enters the correct vehicle
number from the incident image.
The incident status is changed from "Pending" / "Acknowledged" to "Audit" and is
saved into the database.
The operator enters a remark about the action taken while auditing the incident.
The remark is saved in the database for future reference.
Before saving the changes, the operator is warned to re-verify his inputs. He
previews the video and the license plate number and saves the audited transaction
in the database.
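The status transition described above might be captured as follows. The record and its field names are assumptions made for illustration, and persistence of the audited record is left to the caller.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

VALID_BEFORE_AUDIT = {"Pending", "Acknowledged"}

@dataclass
class Incident:                 # hypothetical record; field names assumed
    incident_id: int
    plate_text: str
    status: str = "Pending"
    remark: str = ""
    audited_at: Optional[datetime] = None

def audit_incident(incident: Incident, corrected_plate: str,
                   remark: str, operator_confirmed: bool) -> Incident:
    """Apply the operator's audit: correct the plate if needed, record the
    remark, and move the status to "Audit" only after re-verification."""
    if incident.status not in VALID_BEFORE_AUDIT:
        raise ValueError(f"cannot audit incident in state {incident.status!r}")
    if not operator_confirmed:
        return incident          # operator must re-verify before saving
    if corrected_plate:
        incident.plate_text = corrected_plate
    incident.remark = remark
    incident.status = "Audit"
    incident.audited_at = datetime.now()
    return incident              # caller persists the record to the database
```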
Figure 18 is an illustration of an incident audit view generated by the system of the
invention.
Reports
The traffic surveillance system application in accordance with the invention
further facilitates generating various reports, including the following:
Incident Details Report: The report shows details of all incidents that occurred
during the selected time slot for the selected junction, portraying various
details about the incidents including junction name, type of incident, offending
vehicle, date & time of occurrence, etc. The report can also be generated on an
hourly, daily, weekly or monthly basis.
Incident Summary Report: The report shows the incident count for the selected time
and junction, with the count provided for each type of incident. The report can
also be generated on an hourly, daily, weekly or monthly basis.
Offence Report: The report shows the details of a particular incident, with the
license plate image. The report is generated by providing the vehicle number, the
date and time details and the junction name.
External Application Interface
Synchronization with the handheld device application:
The analytical software stores the data in the database and provides access for
external applications (such as the mobile application) to pull the required data.
Through this facility, the mobile application checks for duplicate records and
avoids them.
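A minimal sketch of this duplicate check on the pulling side follows, assuming incidents are identified by an integer id; the id field and local store are assumptions, as the specification states only that duplication of records is checked and avoided.

```python
from typing import Iterable, List, Set

def pull_new_incident_ids(server_ids: Iterable[int],
                          local_ids: Set[int]) -> List[int]:
    """Return only the incident ids not already synced to the handheld,
    so previously pulled records are never duplicated."""
    return [i for i in server_ids if i not in local_ids]
```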
Administrative Functions
User Creation and Management: Access to the application is restricted using a user
name and password for each system user. User names and information are registered
in the system, and each registered user is provided with a unique user name and
password. Users are created under defined categories such as operator, supervisor,
administrator, etc. Access levels for each user category are pre-defined and are
also customizable as per requirements. When starting system operations, the user
logs into the system, and all the operations that he performs are logged against
his login name.
Privilege Assignment: Customization of access level is done using this functionality.
An administrator can modify the privileges assigned for a particular user category.
Master Data Management: This includes entering into the system the data that
defines the system boundaries, e.g. junction details, number of cameras per
junction, etc.
We Claim:
1. An intelligent automated traffic enforcement system comprising:
a video surveillance system adapted to localize one or more number plates /
license plates of vehicles, stationary or in motion, in the field of view of at
least one camera, without requiring the number plate to be fixed in a fixed
location of the car, the license plate being reflective or non-reflective,
independent of font and language, using a normal security camera, and filtering
out other texts in the field of view not related to the number plate, enabling
processing of the localized number plate region with any Optical Character
Recognition, and generating localized information of the number plate with or
without other relevant composite information of the car (type, possible driver
snapshot, shape and contour of the vehicle), in parallel to monitoring traffic;
and an intelligent video analytical application for event detection based on the
video feeds.
2. An intelligent traffic enforcement system as claimed in claim 1, wherein the
process localizes a possible license plate in the field of view of the
camera by (a) statistically analysing the correlation and relative contrast
between the number plate content region and the background region surrounding
this content, (b) a unique signature of number plate content based on pixel
intensity and vertical and horizontal distribution, and (c) colour features of
the content and the surrounding background.
3. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said video analytic process is carried out in a
sequence involving (a) configuration means, (b) incident detection means,
(c) incident audit means, (d) report generation means, (e) synchronization
means and (f) user management means.
4. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said configuration means, adapted to configure
parameters for incident detection and management, comprises (i) camera
configuration means, (ii) means for providing virtual loops in regions where
monitoring is required, (iii) means for setting time limits for the monitoring
activity, (iv) means providing feed indicative of regular traffic moving
directions for each camera, (v) means for setting speed limits to detect
over-speeding vehicles, and (vi) means for setting the sensitivity and
duration determining traffic abnormality and congestion.
5. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said incident detection means is adapted to
detect deviations from set parameters, analyze the appropriate video feed and
check for offences, involving (a) recording by way of saving video feeds from
various traffic locations of interest, (b) generating alarms, including visual
and/or sound alerts and/or notifications, on any incident detection involving
a traffic violation, and (c) registering the incident against the extracted
corresponding license plate number of the violating vehicle.
6. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said incident audit means comprises:
filter means adapted to reach the incident if the incident is an archived
incident, and, in the case of a live incident, means for viewing the details;
means for generating details of the incident, a link to the incident video and
a link to the license plate image of the vehicle;
means for verification of the incident by playing the video, and of the
vehicle's registration number by viewing the license plate image, and, if the
license plate number is incorrect, means to enter the correct vehicle number
from the incident image;
means for updating the incident status from "Pending"/"Acknowledged" to
"Audit" and saving it into the database; and
means to enter a remark about the action taken while auditing the incident,
the remark finally being saved in the database, with possible re-verification,
for future reference.
7. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said incident reporting means comprises means
for automated generation of incident detail reports, incident summary reports
and offence reports.
8. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said synchronization means includes means
adapted for synchronization with handheld device applications.
9. An intelligent automated traffic enforcement system as claimed in any one of
the preceding claims, wherein said user management means includes an interface
for administrative functions including (a) user creation and management,
(b) privilege assignment and (c) master data management.
ABSTRACT
An intelligent automated traffic enforcement system is disclosed, adapted to
assist the traffic management department in identifying violations, by traffic
department personnel remotely observing, on a computer monitor, the video feeds
coming to the control room from the junction. Alternately, a traffic violation can
be automatically detected by the system, which automatically alerts traffic
personnel without their being physically present at the traffic junction or
sitting in the control room. Advantageously, the system of the invention does not
require any specialized or proprietary camera to detect these violations and is
adapted to analyze video feed from traditional security cameras in a computer to
detect the events. The system is adapted such that it can capture and process
video of traffic movements to detect the violating vehicles and automatically find
the identity of the vehicle, such as number plate, shape, size, color, logo and
type of the vehicle, and possibly a snapshot of the driver if visible.