
A Camera View Tampering Detection System.

Abstract: A camera view tampering detection system in order to detect tampering events of camera using video stream that said camera captures, said system comprising: a frame decoding mechanism; a contrast detection mechanism; a frame blurring detection mechanism; a down sampling mechanism; an illumination noise removal mechanism; a background modeling mechanism; a foreground extraction mechanism; a difference and thresholding mechanism; an illumination noise removal mechanism; a color coherence estimation mechanism; and a foreground density at frame boundary estimation mechanism; characterized, in that, said system further comprising: at least a connected foreground density estimation mechanism in order to eliminate false positives detected by said color coherence estimation mechanism and said foreground density at frame boundary estimation mechanism, said connected foreground density estimation mechanism comprising a filter bank mechanism in order to perform blob pattern analysis on frames and to generate a noise model; and at least an output generator mechanism in order to generate positive outputs in relation to camera view tampering identification based on a plurality of parameters as determined by each of said mechanisms.


Patent Information

Application #
Filing Date
11 March 2014
Publication Number
39/2015
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

CROMPTON GREAVES LIMITED
CROMPTON GREAVES LIMITED, CG HOUSE, 6TH FLOOR, DR. ANNIE BESANT ROAD, WORLI, MUMBAI-400030, MAHARASHTRA, INDIA

Inventors

1. PIRATLA ADITYA
CROMPTON GREAVES LIMITED, CG GLOBAL R&D, BHASKARA BUILDING, KANJUR MARG (EAST) MUMBAI - 400042, MAHARASHTRA, INDIA.
2. SURENDRAN JAYALAKSHMI
CROMPTON GREAVES LIMITED, CG GLOBAL R&D, BHASKARA BUILDING, KANJUR MARG (EAST) MUMBAI - 400042, MAHARASHTRA, INDIA.
3. DAS MONOTOSH
CROMPTON GREAVES LIMITED, CG GLOBAL R&D, BHASKARA BUILDING, KANJUR MARG (EAST) MUMBAI - 400042, MAHARASHTRA, INDIA.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
As amended by the Patents (Amendment) Act, 2005
AND
The Patents Rules, 2003
As amended by the Patents (Amendment) Rules, 2006
COMPLETE SPECIFICATION
(See section 10 and rule 13)
TITLE OF THE INVENTION
A camera view tampering detection system.
APPLICANT(S):
Crompton Greaves Limited, CG House, 6th Floor, Dr. Annie Besant Road, Worli, Mumbai - 400030, Maharashtra, India; an Indian Company
INVENTOR(S):
Piratla Aditya, Surendran Jayalakshmi and Das Monotosh of Crompton Greaves Limited, CG Global R&D, Bhaskara Building, Kanjur Marg (East), Mumbai - 400042, Maharashtra, India; all Indian Nationals.
PREAMBLE TO THE DESCRIPTION:
The following specification particularly describes the nature of this invention and the manner in which it is to be performed:

FIELD OF THE INVENTION:
This invention relates to the field of computational systems, image processing systems, and signal processing.
Particularly, this invention relates to a camera view tampering detection system.
BACKGROUND OF THE INVENTION:
Surveillance generally relates to the acts of observing or listening to persons, places, or activities, usually in a secretive or unobtrusive manner. Electronic surveillance specifically relates to the use of electronic aids or surveillance equipment to achieve a surveillance objective. One such piece of surveillance equipment is a camera.
Typically, cameras are today installed at a variety of locations, specifically for the purpose of enhancing security. Such places may be houses, shops, building complexes, housing societies, offices, parking facilities, banks, malls, toll booths, places of interest, roads, and the like. Video surveillance is performed by conspicuous or hidden cameras that transmit and record visual images that may be watched simultaneously or reviewed later on tape.
Electronic surveillance serves several purposes: (1) enhancement of security for persons and property; (2) detection and prevention of criminal, wrongful, or impermissible activity; and (3) interception, protection, or appropriation of valuable, useful, scandalous, embarrassing, and discrediting information.
Electronic surveillance of areas and locations to detect changes in the background has a variety of applications, the most basic of which is to notify when there is any unauthorized change in the preset view of a camera, which effectively renders the information from the camera of no value. There could be a variety of ways by which an unauthorized change could be carried out:
1) Manually moving the camera.
2) Partially or completely covering the camera view.
3) Reducing the contrast levels of the scene.
Most systems of electronic surveillance are either isolated or unmonitored, or include a human operator monitoring various areas from a control room. Events of interest in any surveillance related applications may be rare. It is very difficult for a human operator to maintain focus for very long durations to accurately monitor these rare events. Even for post analysis of events, it is essential that the video stream from the camera is reliable. To ensure reliability of the camera stream, automation of electronic surveillance systems is a must.
Systems employing automation face a big challenge in extracting relevant information from very large amounts of data. Automated surveillance systems are thus generally characterized by the number of false alarms that they generate in detecting relevant information. In any given camera view these false alarms may be triggered by a host of unimportant events, like a transient object in a particular scene pausing for short periods of time, a person or a small group of people moving within the scene, movement of tree leaves, etc. These events do not sufficiently qualify as a change in camera view or background and hence should not generate any alarms.
Some of the identified camera tampers are as follows:
1. Rotation of Camera
2. Obstruction
3. Covering by Cloth
4. Scene Blurring
5. Low Contrast
Unattended camera devices are increasingly being used in various intelligent transportation systems for applications such as surveillance, toll collection, and photo enforcement. In these fielded systems, a variety of factors can cause camera obstructions and persistent view changes that may adversely affect their performance. Examples include: camera misalignment, intentional blockage resulting from vandalism, and natural elements causing obstruction, such as foliage growing into the scene and ice forming on the porthole. In addition, other persistent view changes resulting from new scene elements of interest being captured, such as stalled cars, suspicious packages, etc. might warrant alarms. Since these systems are often unattended, it is often important to automatically detect such incidents early.
Automated detection of such incidents can help operators monitoring these cameras manage a larger fleet of camera devices without accidentally missing key events.
Furthermore, at least two important classes of problems affect such fixed camera systems: obstructions and persistent view changes from causes such as undesirable tilting of the camera. Obstructions can cause the scene of interest to be partially blocked or out of view. In addition, for fixed cameras, maintaining orientation is important to ensure that the scene of interest is framed correctly. Oftentimes, subtle unintentional tilting of the camera can cause certain scene elements of interest to go out of view. Such tilting from the nominal orientation can be caused by factors such as technicians cleaning the device-viewing porthole periodically, intentional vandalism, and accidental collisions with vehicles.
On some occasions, new scene elements of interest might appear and cause persistent view changes. Examples include stalled cars, suspicious packages, or an accident. These might not be problems affecting the camera function per se, but might be cause for alerting operators, particularly in surveillance applications. In order to implement automated checks for the occurrence of the above-mentioned problems and incidents, it is desirable to use algorithms that can operate without the need for the full reference image of the same scene without the problem/incident.

There is a need for a system which provides authenticated, fool-proof, camera view tampering detection without false positives.
PRIOR ART:
US20130051613 discloses a method for using a region-level adaptive background model for tracking humans and objects, which involves relating the number of static pixels in the foreground region of a tracked object bounding box to the total number of pixels of the foreground region.
The prior art document WO2012081969 discloses a system for detecting an intrusion event in a region of interest. The system has a blob segmentation subcomponent which processes the current image and extracts motion pixels by subtracting the image from a reference or background image.
US7227893 relates to a video detection apparatus for, e.g., intruder tracking, which has an object tracking module that tracks the location of a segment within the current frame, determines the segment's projected location in a subsequent frame, and provides the location in that frame. This patent focuses more on improving blob segmentation than on global blob distribution analysis.
US20130243254 discloses a method for performing foreground analysis for detecting, e.g., a car in a video scene, which involves performing foreground analysis using tracking information and pruning false alarms to determine whether a foreground object is abandoned or removed. A blob, in general, is a cluster of data. A foreground blob is obtained by subtracting the background from the current frame. That patent is directed at finding a blob which remains static at a place, so a temporal analysis might be performed on the detected blob just to determine whether a particular blob is present continuously, which is needed in static object detection.
US20050036658 discloses a method for detecting changes in the background of image sequences for electronic surveillance. The method consists in capturing images of the scene, generating two probabilistic measures of the scene to indicate the relatively static background and the relatively dynamic transient objects entering and leaving the scene, and detecting whether the second measure deviates from the first to indicate that an object entering a scene has not left it. The probabilistic measure comprises a mode of a probability density function of pixel values and relative time periods. Regions of motion are removed from further consideration and a check is made as to whether detected objects satisfy a predetermined object recognition criterion. However, it uses a histogram method for detecting the scene change. A histogram is cumulative in nature, so multiple disjoint motions (which do not corrupt the background) will cause a histogram change and hence an alarm will be raised. Relying on a histogram is therefore not accurate for eliminating false positives.

There is a need for a system and method which focuses on global blob pattern analysis. In view of the above observations, a need clearly exists for image analysis techniques that at least address the deficiencies of visual motion surveillance systems.
OBJECTS OF THE INVENTION:
An object of the invention is to detect camera tampering events.
Another object of the invention is to remove false positives in cases of generic motion.
Yet another object of the invention is to detect camera tampering events based on color coherence vector.
Still another object of the invention is to detect camera tampering events based on background frame generation.
An additional object of the invention is to detect camera tampering events based on foreground frame generation.
Yet an additional object of the invention is to detect camera tampering events based on frame boundary.
SUMMARY OF THE INVENTION:
According to this invention, there is provided a camera view tampering detection system in order to detect tampering events of camera using video stream that said camera captures, said system comprises:
i. at least a frame decoding mechanism in order to decode frames from said video stream;
ii. at least a contrast detection mechanism in order to detect low contrast in said decoded frames;
iii. at least a frame blurring detection mechanism in order to detect blurring of each of said decoded frames;
iv. at least a down sampling mechanism in order to down sample said decoded frames;
v. at least an illumination noise removal mechanism in order to remove changes or noise caused by a change in lighting conditions, to obtain at least a current frame;
vi. at least a background modeling mechanism in order to extract a background frame or model from a continuous stream of decoded frames;
vii. at least a foreground extraction mechanism in order to extract foreground from said identified background frame;
viii. at least a difference and thresholding mechanism in order to compute the difference of said background frame and said current frame and thresholding said difference frame;
ix. at least an illumination noise removal mechanism in order to remove noise and obtaining a de-noised current frame after removal of noise;
x. at least a color coherence estimation mechanism in order to classify colors in a frame into coherent colors and incoherent colors; and
xi. at least a foreground density at frame boundary estimation mechanism in order to determine changes at edge(s) in frames in order to determine changes in frame boundary by computing normalized area of blobs in said foreground frame at a foreground frame boundary;
xii. characterized, in that, said system further comprises:
xiii. at least a connected foreground density estimation mechanism in order to eliminate false positives detected by said color coherence estimation mechanism and said foreground density at frame boundary estimation mechanism, said connected foreground density estimation mechanism comprising a filter bank mechanism in order to perform blob pattern analysis on frames; and
xiv. at least an output generator mechanism in order to generate positive outputs in relation to camera view tampering identification based on a plurality of parameters as determined by each of said mechanisms.
Typically, said contrast detection mechanism is adapted to detect low contrast in said decoded frames, said mechanism adapted to activate a contrast alarm generation mechanism upon detection of low contrast.
Typically, said system comprises a blurring alarm generation mechanism in order to generate a blurring alarm upon detection of blurring by said frame blurring detection mechanism.
Preferably, said frame blurring detection mechanism is a Markov Random fields based frame blurring detection mechanism.
Preferably, said illumination noise removal mechanism is a Retinex model based illumination noise removal mechanism.
Typically, said background modeling mechanism is an exponential updating background modeling mechanism, said exponential updating technique being performed only on those background frames which do not show any motion when analyzed by a block based motion-vector estimation technique.
Typically, said background modeling mechanism comprises a training mechanism wherein a preset number of frames are used for training in order to confirm a background as a reference image.
Typically, said color coherence estimation mechanism is a color coherence vector technique based color coherence estimation mechanism adapted to operate on said current frame and said background frame, wherein pixels from said current frame and said background frame are classified as being coherent or incoherent based on their grey level and neighborhood pixel connectivity.

Typically, said color coherence estimation mechanism is a color coherence vector technique wherein in case of a scene change, if the color profile throughout the frame has a pre-determined bias, then an alarm generation mechanism determines a tampering event such as specifying partial or complete obstruction.
Typically, said filter bank mechanism is a global blob pattern analysis filter bank mechanism.
Typically, said system comprises a weight assignment mechanism, with user configurable weights, in order to assign weights to outputs of said color coherence estimation mechanism, said foreground density at background estimation mechanism, and said connected foreground density estimation mechanism in order to take a weighted average of outputs from said color coherence estimation mechanism, said foreground density at background estimation mechanism, and said connected foreground density estimation mechanism, correspondingly.
Typically, said filter bank mechanism comprising at least a foreground frame receiving mechanism and wherein stage of operation is initialized to a first stage.
Typically, said filter bank mechanism comprises:
• at least a first blob area calculator mechanism wherein said foreground frame is provided to said blob area calculator mechanism in order to compute the total area of blobs in said foreground frame; and
• at least a soft thresholding mechanism wherein output of said first blob area calculator mechanism is subjected to said soft thresholding mechanism to generate an output T1 and wherein said soft thresholding mechanism takes the total blob area and checks it for at least the following three conditions:

(a) if the output of said soft thresholding mechanism is less than a pre-defined minimum area limit, then the soft thresholding gives an output of 0;
(b) if the output of said soft thresholding mechanism is greater than the pre-defined minimum area but less than a pre-defined maximum area limit, then the output is a ramp with a pre-defined slope which is dependent on the scaling required on the blob area output. Value of T1 is the output;
(c) if the output of said soft thresholding mechanism is greater than the pre-defined maximum area limit, then the output from the soft thresholding block is 1;
Wherein, in cases (a) and (c), the system exits with output equal to T1.
Typically, said filter bank mechanism comprises:
• at least a first determination mechanism in order to determine if stage is a first stage and also if output is equal to T1; and
• at least a frame divider mechanism in order to divide the frame into two equal halves, spatially when output of said first determination mechanism is positive.

Typically, said filter bank mechanism comprises at least a second determination mechanism in order to determine if stage is over by a stage-over determination mechanism.
Typically, said filter bank mechanism comprises:
A. at least a first determination mechanism in order to determine if stage is a first stage and also if output is equal to T1;
B. at least a second determination mechanism in order to determine if stage is over by a stage-over determination mechanism;
C. characterized, in that, said filter bank mechanism comprising:
D. at least a third determination mechanism in order to determine truth value of said first determination mechanism and said stage-over determination mechanism such that:
E. if output of said first determination mechanism is true, then said filter bank mechanism takes said foreground frame as data input; or
F. if output of said second determination mechanism is true, then said filter bank mechanism takes data from already divided frames.
Typically, said filter bank mechanism comprises at least a stage increment mechanism adapted to increment stage after division of frames.
Typically, said filter bank mechanism comprises at least a first blob area calculator mechanism in order to calculate blob area for each half of two equally divided said foreground frame, in a first loop, in order to find out whether the foreground is equally distributed throughout said foreground frame or whether it is concentrated in a few partitions, since being concentrated in only a few partitions is a sign of generic motion.
Typically, said filter bank mechanism comprises at least a second blob area calculator mechanism in order to calculate blob area for further equally divided blocks, of each of earlier divided blocks, independently, in subsequent loops, in order to find out whether the foreground is equally distributed throughout said foreground frame or whether it is concentrated in a few partitions, since being concentrated in only a few partitions is a sign of generic motion.
Typically, said filter bank mechanism comprising at least a distribution match filter mechanism in order to compare blob areas of divided frames in order to find out distribution of blobs in divided frames, wherein, if the distribution match criterion is a ratio, an output of 1 signifies the best equal distribution, and if the distribution match criterion is a difference, then 0 signifies the best equal distribution.
Typically, said filter bank mechanism comprises at least a distribution match filter mechanism co-operating with a weight assignment mechanism in order to give different weights for each stage such that:

(a) as the size of data becomes smaller and smaller, there is an increased risk of data being corrupted by noise; and
(b) there might be more blocks without any data when the division becomes smaller - these blocks cannot be given weights equal to their parent blocks.
Typically, said filter bank mechanism comprises:
I. at least an accumulator mechanism in order to receive output(s) from a distribution match filter mechanism till stage becomes equal to the minimum number of stages specified in said system;
II. at least a percentage computation mechanism in order to turn output of said accumulator mechanism into a percentage with respect to minimum number of stages;
III. at least a soft thresholding mechanism wherein output of said first blob area calculator mechanism is subjected to said soft thresholding mechanism to generate an output T1;
IV. at least a multiplier mechanism in order to multiply output of said percentage computation mechanism with output T1 of said soft thresholding mechanism to obtain a final output;
V. at least a mean and variance computation mechanism in order to compute mean and variance of output of said multiplier mechanism over a fixed number of non-motion frames;
VI. at least a difference computation mechanism in order to compute a difference between mean from said mean and variance computation mechanism and the output for current frame to form output difference, iteratively;
VII. at least a first comparator mechanism in order to compare if said computed difference falls within the variance of mean blob area (as computed by said mean and variance computation mechanism) or if it is negative and setting value to zero, if either is true, or analysing difference for different cases if false; and
VIII. at least a second comparator mechanism in order to compare if the output difference is less than a preset minimum threshold and if the blob filter output is also less than a preset threshold in order to confirm motion in scene, and if output difference is less and blob filter output is high in order to confirm likelihood of scene change.
Typically, said filter bank mechanism comprises at least a noise model computation mechanism in order to compute mean and variance output to form a noise model, said noise model being a reference for further analyses for use by said system.
Typically, said filter bank mechanism comprises an updation mechanism in order to continuously update said noise model over time, said noise model being obtained by an initialization phase in order to account for gradual illumination change, said initialization phase being restricted to only those frames where a high foreground is not detected.
Typically, said filter bank mechanism comprises at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises.

Typically, said filter bank mechanism comprises at least a mean change computation mechanism in order to compute mean change of blob filter output difference.
Typically, said filter bank mechanism comprises at least a plotting mechanism in order to plot blob filter output difference over time and in order to compute mean change.
Typically, said filter bank mechanism comprises at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least a gradient determination mechanism in order to determine a gradient which crosses a preset gradient threshold.
Typically, said filter bank mechanism comprises at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least a plot division mechanism in order to divide the plot into two parts: 1) before the time of determined high gradient; and 2) after the time of determined high gradient.
Typically, said filter bank mechanism comprises at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least an additional mean computation mechanism in order to compute mean, separately, for the two divided parts of the plot division mechanism.
Typically, said filter bank mechanism comprises at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least a mean change determination mechanism in order to determine if the mean change for the two divided plots is less than a preset threshold in order to suppress output from blob filter, if true.
Typically, said filter bank mechanism comprising a value determination mechanism wherein a positive value from said filter bank mechanism is indicative of camera tampering and a negative value from said filter bank mechanism is indicative of motion.
Typically, said system comprises a noise model computation mechanism in order to build a noise model and further comprising a color coherence vector output post-processing mechanism in order to calculate color coherence vector match measure for each and every decoded frame and to further compute a positive output of camera tampering in case of a change or compute a zero output (ideal) when there is no change.

Typically, said system comprising a frame boundary definition mechanism in order to assess if its foreground area is below or above a predefined threshold as defined by a user in order to provide a corresponding negative or positive output, respectively, to the output generator of camera tampering.
Typically, said output generator mechanism comprises an alarm generation mechanism in order to generate an alarm upon detection of an event of camera tampering.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
The invention will now be described in relation to the accompanying drawings, in which:
Figure 1 shows the entire flow of the method followed by the system, of this invention, which is a camera view tampering detection system; and
Figure 2 illustrates a flow of the method followed by a filter bank mechanism of the system of this invention.
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
According to this invention, there is provided a camera view tampering detection system.
Figure 1 shows the entire flow of the method followed by the system, of this invention, which is a camera view tampering detection system.
In accordance with an embodiment of this invention, there is provided a frame decoding mechanism in order to decode frames from a video stream being received by a camera. These decoded frames are further used by the system and method of this invention. This is represented by step 101.
In accordance with another embodiment of this invention, there is provided a contrast detection mechanism in order to detect low contrast. Typically, this contrast detection mechanism is set to low contrast detection. This is represented by step 102. The contrast detection mechanism requires a full resolution frame. If low contrast is detected in decoded frames, then the system activates a contrast alarm generation mechanism in order to generate a low contrast alarm and the other mechanisms of this invention do not get activated. In other words, the other mechanisms of this invention get bypassed. If there is no low contrast detected by the contrast detection mechanism, then the system and method of this invention proceeds towards further mechanisms and steps of this invention. In at least one embodiment, state of the art contrast detection mechanisms may be employed.
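The specification does not fix a particular contrast measure, so the following is only a minimal sketch: it treats a frame as low contrast when the spread of its grey levels is small. The percentile choice and the spread_threshold parameter are assumptions for illustration, not values taken from the invention.

    import numpy as np

    def is_low_contrast(gray_frame: np.ndarray, spread_threshold: float = 30.0) -> bool:
        # Robust spread: difference between the 95th and 5th percentile grey levels.
        lo, hi = np.percentile(gray_frame, [5, 95])
        return (hi - lo) < spread_threshold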
In accordance with yet another embodiment of this invention, there is provided a frame blurring detection mechanism in order to detect blurring of a frame (image). This is represented by step 103. The blurring detection mechanism requires a full resolution frame. If blurring is detected in decoded frames, then the system activates a blurring alarm generation mechanism in order to generate a blurring alarm and the other mechanisms of this invention do not get activated. In other words, the other mechanisms of this invention get bypassed. If there is no blurring detected by the blurring detection mechanism, then the system and method of this invention proceeds towards further mechanisms and steps of this invention. In at least one embodiment, state of the art blurring detection mechanisms may be employed. In at least one embodiment, frame blur analysis is performed using Markov Random fields on frame edges.
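The specification performs blur analysis with Markov Random Fields on frame edges; the sketch below instead uses the common variance-of-Laplacian focus measure as a simpler stand-in, so it should not be read as the MRF method itself. The sharpness_threshold parameter is an assumed tuning value.

    import cv2
    import numpy as np

    def is_blurred(gray_frame: np.ndarray, sharpness_threshold: float = 100.0) -> bool:
        # A sharp frame has strong edges, hence a high Laplacian variance.
        lap = cv2.Laplacian(gray_frame, cv2.CV_64F)
        return float(lap.var()) < sharpness_threshold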
In accordance with yet another embodiment of this invention, there is provided a down sampling mechanism in order to down sample decoded frames from a camera. This is represented by step 104. Further mechanisms and steps of this invention do not require full resolution frames. Down sampling of frames at this step, which reduces the frame size, provides an obvious reduction in computational resources (and expense).
In accordance with still another embodiment of this invention, there is provided an illumination noise removal mechanism in order to remove changes or noise caused by a change in lighting conditions. This is represented by step 105. In at least one embodiment, the illumination noise removal mechanism employs state of the art noise removal mechanisms. In at least one embodiment, the illumination noise removal mechanism employs the Retinex model. This is a pre-processing mechanism and step.
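As an illustration of the Retinex-based pre-processing, the sketch below applies a single-scale Retinex: the log of a heavily blurred copy of the frame (an estimate of the illumination) is subtracted from the log of the frame itself, leaving mostly reflectance. The smoothing scale sigma and the final rescaling to 0-255 are assumptions.

    import cv2
    import numpy as np

    def single_scale_retinex(gray_frame: np.ndarray, sigma: float = 80.0) -> np.ndarray:
        img = gray_frame.astype(np.float64) + 1.0            # avoid log(0)
        illumination = cv2.GaussianBlur(img, (0, 0), sigma)  # smooth illumination estimate
        retinex = np.log(img) - np.log(illumination)         # remove illumination component
        retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX)
        return retinex.astype(np.uint8)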
In accordance with an additional embodiment of this invention, there is provided a background modeling mechanism in order to extract a background frame or model from a continuous stream of frames from the camera view system. A background model acts as a reference against which subsequent frames are compared to draw some decisions. The background is continuously updated using an exponential updating technique. In at least one embodiment, the exponential updating is performed only on frame blocks which do not show any motion when analyzed by a block based motion-vector estimation technique. This is represented by step 106. Typically, this comprises a training mechanism where a preset number of frames are used for training in order to confirm a background as a reference image.
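A minimal sketch of the exponential (running-average) background update is given below. It assumes a boolean motion_mask supplied by a block-based motion estimator, as the text suggests, and an assumed learning rate alpha; pixels flagged as moving keep their previous background value.

    import numpy as np

    def update_background(background: np.ndarray, current: np.ndarray,
                          motion_mask: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        # Blend the current frame into the background only where no motion is detected.
        blended = (1.0 - alpha) * background + alpha * current.astype(np.float64)
        return np.where(motion_mask, background, blended)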
In accordance with yet an additional embodiment of this invention, there is provided a foreground extraction mechanism in order to extract foreground from the identified background frame or model by a difference and thresholding mechanism. This is represented by step 107. The result of this computation is the foreground frame as represented by step 108.
Step 109 represents obtaining current frame after removal of noise by the illumination noise removal mechanism at step 105. Step 110 represents the background frame obtained from background model at step 106.
In accordance with another additional embodiment of this invention, there is provided at least a color coherence estimation mechanism in order to classify colors in a frame into two parts - coherent colors and incoherent colors. This is represented by step 111. In at least one embodiment, the color coherence estimation mechanism employs a color coherence vector technique. This mechanism operates on both the current frame and the background frame. Pixels from both frames, i.e. current frame and background frame, are classified as being coherent or incoherent based on their grey level and neighborhood pixel connectivity. If a particular grey level is present and there are more than a specified number of pixels of the same grey level in its neighborhood (8-neighborhood is common), then the pixel is considered a coherent pixel, otherwise it is considered an incoherent pixel. Based on the counts of coherent and incoherent pixels, two bins are formed for each grey level. The two bins together represent a color vector. Based on the color vector difference of the current and background frame, a measure is calculated, which represents the difference between the two frames. In at least one embodiment, state of the art color coherence estimation techniques are employed. This estimation is superior compared to a histogram method. Post processing of the match measure to remove impulse and environmental noise is described in a separate section titled post processing on color coherence vector output. In addition to the difference measure calculated using CCV, the system and method of this invention can use CCV on a single frame to understand the color distribution of a frame. In case of a scene change, if the color profile throughout the frame has some specific bias, then it is possible to generate an alarm specifying partial or complete obstruction.
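A condensed sketch of the colour coherence computation is shown below. It quantises grey levels, calls a pixel coherent when enough of its 8 neighbours share its level, builds the (coherent, incoherent) bins per level, and compares two frames by a normalised L1 distance. The number of levels, the min_neighbors count and the distance measure are assumptions for illustration.

    import numpy as np

    def color_coherence_vector(gray: np.ndarray, levels: int = 16, min_neighbors: int = 4) -> np.ndarray:
        q = (gray.astype(np.int32) * levels) // 256           # quantised grey levels
        padded = np.pad(q, 1, mode='edge')
        same = np.zeros_like(q)
        for dy in (-1, 0, 1):                                 # count equal 8-neighbours
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = padded[1 + dy:1 + dy + q.shape[0], 1 + dx:1 + dx + q.shape[1]]
                same += (shifted == q)
        coherent = same >= min_neighbors
        ccv = np.zeros((levels, 2), dtype=np.int64)           # two bins per grey level
        for lvl in range(levels):
            mask = (q == lvl)
            ccv[lvl, 0] = np.count_nonzero(mask & coherent)
            ccv[lvl, 1] = np.count_nonzero(mask & ~coherent)
        return ccv

    def ccv_match_measure(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
        # Normalised L1 distance between the two colour vectors (0 means identical frames).
        a, b = color_coherence_vector(frame_a), color_coherence_vector(frame_b)
        return float(np.abs(a - b).sum()) / float(a.sum() + b.sum())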
In accordance with yet another additional embodiment of this invention, there is provided a foreground density at frame boundary estimation mechanism in order to determine changes at boundaries of frames. This is represented by step 112. In a series of consecutive frames, in a camera view detection system or a surveillance system, the foreground may change continuously, but the edge remains constant. Typically, frames are focused in a manner that features of interest are present well within the frame boundaries, while boundaries are characterized by relatively unimportant information. When there is a tampering (of camera) event, especially when the camera is physically turned, it is highly likely that there is a more predominant change at frame boundaries. So, the system and method of this invention calculates foreground area at frame boundary by computing the normalized area of blobs in the foreground frame at the foreground frame boundary. This is analogous to blob filter output in blob pattern analysis. The post processing for removal of noise is described in a separate section under the title of post processing on foreground density at boundary output.
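A minimal sketch of the boundary measure follows: it restricts the foreground frame to a border strip and returns the fraction of that strip occupied by blobs. The strip width is an assumed parameter; the specification leaves the boundary region user-configurable.

    import numpy as np

    def boundary_foreground_density(foreground: np.ndarray, border: int = 10) -> float:
        fg = foreground > 0
        strip = np.zeros_like(fg)
        strip[:border, :] = True                   # top strip
        strip[-border:, :] = True                  # bottom strip
        strip[:, :border] = True                   # left strip
        strip[:, -border:] = True                  # right strip
        # Normalised blob area restricted to the frame boundary.
        return np.count_nonzero(fg & strip) / float(np.count_nonzero(strip))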
In accordance with still another additional embodiment of this invention, there is provided a connected foreground density estimation mechanism in order to eliminate false positives detected by the color coherence estimation mechanism and the foreground density at frame boundary estimation mechanism. This is represented by step 113. Both the color coherence estimation mechanism and the foreground density at frame boundary estimation mechanism are ineffective in handling false positives due to generic motion. In general, a camera view may encounter motion due to movement of people, animals, vehicles, and the like, which may not block a significant amount of the camera view and so should not cause an alarm event. This type of motion is referred to as generic motion throughout this specification. To remove the false positives caused by generic motion, a novel filter bank mechanism, called the global blob pattern analysis filter bank mechanism, is used, which is explained herein below.
In accordance with another embodiment of this invention, there is provided an output generator mechanism in order to generate positive outputs in relation to camera view tampering identification based on a plurality of parameters as determined by the aforementioned mechanisms. This is represented by step 114. In case of low contrast or scene blur detection, the output generator does not perform any further computation. It checks for low contrast first and if it is true, then it directly generates a low contrast alarm, otherwise if the scene blurred output is true, it generates a scene blurred alarm. In at least one embodiment, a weight assignment mechanism is used in order to assign weights to outputs of a) color coherence estimation mechanism, b) foreground density at background estimation mechanism, and c) connected foreground density estimation mechanism.
If both these conditions are false, then it takes a weighted average of the outputs from: a) color coherence estimation mechanism, b) foreground density at background estimation mechanism, and c) connected foreground density estimation mechanism; all mentioned above. The weights are user configurable to allow the user to customize the method according to the camera view. In a camera view which has an excess of generic motion, the user should give more weight to the connected foreground density estimation mechanism. In case of a view in which a large portion of the focus area is covered by a tree, the user should give more weight to the color coherence estimation mechanism and the foreground density at background estimation mechanism. If there is expected movement at the boundaries of a frame, then the user has the flexibility of either not considering that part of the boundary or, if there is a lot of expected movement at the entire boundary area, giving less weight to the foreground density at background estimation mechanism.
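The weighted combination described above can be sketched as below. The three scores, the default weights and the alarm threshold are placeholders; the specification leaves all weights user configurable per camera view.

    def fuse_tamper_scores(ccv_score: float, boundary_score: float, blob_filter_score: float,
                           weights=(0.4, 0.3, 0.3), alarm_threshold: float = 0.5) -> bool:
        w_ccv, w_boundary, w_blob = weights
        # Weighted average of the three scene-change measures.
        score = (w_ccv * ccv_score + w_boundary * boundary_score
                 + w_blob * blob_filter_score) / (w_ccv + w_boundary + w_blob)
        return score > alarm_threshold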
Figure 2 illustrates a flow of the method followed by a filter bank mechanism of the system of this invention.
In accordance with yet another embodiment of this invention, there is provided a global blob pattern analysis filter bank mechanism in order to generate a noise model.
The global blob pattern analysis filter bank mechanism works with the connected foreground density estimation mechanism (step 113 of Figure 1). This mechanism works in two different phases: 1) initializing phase; and 2) computation phase. In its computation phase, this mechanism performs all necessary computations required to make an analysis and then passes on to a stage for minimizing the effect of impulse noise before giving an input to the output generator mechanism (step 114 of Figure 1).
In its initialization phase, this mechanism performs the steps by the mechanisms as described below.
In at least one embodiment of the initialization phase, there is provided a global blob pattern analysis filter bank mechanism in order to perform blob pattern analysis on frames.

In at least an embodiment of the global blob pattern analysis filter bank mechanism, there is provided a foreground frame receiving mechanism. The foreground frame extracted at step 108 of Figure 1 is used. This step of receiving is represented by step 201. Stage is initialized to 1 at this instance (stage = 1).
In at least another embodiment of the global blob pattern analysis filter bank mechanism, there is provided a first blob area calculator mechanism. The foreground frame that has been received at step 201 is provided to the blob area calculator mechanism. The first blob area calculator mechanism computes the total area of blobs in the entire foreground frame that has been received. This is represented by step 202.
In at least yet another embodiment of the global blob pattern analysis filter bank mechanism, there is provided a soft thresholding mechanism. The output of the first blob area calculator mechanism of step 202 is subjected to the soft thresholding mechanism to generate an output T1 (reference numeral 216). This is represented by step 203. The soft thresholding mechanism takes the total blob area and checks it for at least the following three conditions:
(a) If the output of the soft thresholding mechanism is less than a pre-defined minimum area limit, then the soft thresholding gives an output of 0.
(b) If the output of the soft thresholding mechanism is greater than the pre-defined minimum area but less than a pre-defined maximum area limit, then the output is a ramp with a pre-defined slope which is dependent on the scaling required on the blob area output. Value of T1 is the output.
(c) If the output of the soft thresholding mechanism is greater than the pre-defined maximum area limit, then the output from the soft thresholding block is 1.
In cases (a) and (c) the entire process exits with output equal to T1.
The pre-defined area is a user defined parameter. This can be configured depending on the scene under surveillance. The value is dependent on how crowded the scene is.
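A minimal sketch of the soft thresholding is given below: 0 below the minimum area, 1 above the maximum area, and a linear ramp in between. The ramp slope used here, 1 / (max_area - min_area), is only one possible choice; the specification leaves the slope as a configurable scaling.

    def soft_threshold(blob_area: float, min_area: float, max_area: float) -> float:
        # Output T1: piece-wise soft threshold on the total blob area.
        if blob_area <= min_area:
            return 0.0
        if blob_area >= max_area:
            return 1.0
        return (blob_area - min_area) / (max_area - min_area)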
In at least still another embodiment of the global blob pattern analysis filter bank mechanism, there is provided a first determination mechanism in order to determine if stage is 1 (as per step 201) and also if output T1 (as per step 203) is not equal to 1. This first determination is represented by step 204. If these conditions are satisfied, then the foreground frame, extracted at step 108 of Figure 1, is received at a frame divider mechanism in order to divide the frame into two equal halves, spatially. This step of frame division is represented by step 205. The divided frames are received at step 207.
In at least an additional embodiment of the global blob pattern analysis filter bank mechanism, there is provided a second determination mechanism in order to determine if stage is over by a stage-over determination mechanism. This is represented by step 206. This number, typically, is always greater than 1.

In at least another additional embodiment of the global blob pattern analysis filter bank mechanism, there is provided a third determination mechanism in order to determine if output of step 203 is true. This is represented by step 208. If the outputs of steps 204 and 206 are true, then the global blob pattern analysis filter bank mechanism takes the foreground frame, extracted at step 108 of Figure 1, as data input. This is represented by step 205. If the output of only step 206 is true and that of 204 is false, then the global blob pattern analysis filter bank mechanism takes data from the already divided frames of step 205. The foreground frame is taken as data input in the first loop of the global blob pattern analysis filter bank mechanism.
In at least yet another additional embodiment of the global blob pattern analysis filter bank mechanism, there is provided a stage increment mechanism adapted to increment stage after division of frames (as per step 207). This is represented by step 210. The loop proceeds towards step 206 in order to determine if it is the last stage, else the steps are repeated. If it is the last stage, the method proceeds towards step 213.
When stage is 1, the foreground frame, extracted at step 108 of Figure 1, is divided into two equal halves (as per step 205) and blob area is calculated for each half by a first blob area calculator mechanism. In subsequent loops, the same operation is performed but on divided blocks by a second blob area calculator mechanism. This is represented by step 209. For example, the system and method of this invention already comprises two equal spatial partitions of the received foreground frame. Now, each of the two equal partitions is again divided into two equal halves and blob area is again calculated (as per steps 207 and 209). The idea is to find out whether the foreground is equally distributed throughout the foreground frame or whether it is concentrated in a few partitions. Being concentrated in only a few partitions is taken as a sign of generic motion, as generic motion is caused by movement of concentrated objects.
In at least still another additional embodiment of the global blob pattern analysis filter bank mechanism, there is provided a distribution match filter mechanism in order to compare blob areas of divided frames further in order to find out the distribution of blobs in divided frames. For example, when stage is 1, the frame divider mechanism divides the foreground frame into two equal halves (as per steps 205 and 206). The distribution match filter compares the two areas of the two halves of the foreground frame in order to find their similarity. This is represented by step 211. This similarity can be compared using different techniques. In simple scenarios, a ratio or a difference of the blob areas may be found. In the case of a ratio, an output of 1 signifies the best equal distribution while, in the case of a difference, 0 signifies the best equal distribution. In at least another embodiment, other comparison techniques may be employed based on the need of the application. In addition to finding the distribution measure of area, the distribution match filter mechanism co-operates with a weight assignment mechanism in order to give different weights for each loop. There are two important reasons for assigning different weights for different loops:
(a) as the size of data becomes smaller and smaller, there is an increased risk of data being corrupted by noise; and

(b) there might be more blocks without any data when the division becomes smaller - these blocks cannot be given weights equal to their parent blocks. Due to these two reasons, the weights assigned are progressively decreased as the value of stage increases.
In another embodiment, other comparison techniques may be employed based on the need of the application.
In at least still another additional embodiment of the global blob pattern analysis filter bank mechanism, there is provided an accumulator mechanism in order to receive output(s) from the distribution match filter mechanism till stage becomes equal to the minimum number of stages specified in step 206. This is represented by step 212. When the output of the stage-over determination mechanism (step 206) is true, then the method steps of the global blob pattern analysis filter bank mechanism stop and the accumulator mechanism output is turned into a percentage with respect to the minimum number of stages. This is done by a percentage computation mechanism as represented by step 213. The output of the percentage computation mechanism is provided to a multiplier mechanism, where it is multiplied by output T1 of the soft thresholding mechanism (step 203) to obtain a final output. The step of the multiplier mechanism is represented by step 214. This output is represented by step 215.
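A condensed sketch of this loop is given below. It splits the foreground frame into progressively finer halves, compares blob areas at every stage with a ratio-based distribution match, decreases the stage weights, turns the accumulated match into a percentage over the stages, and scales it by the soft-threshold output T1. The stage count, the weights and the alternating split direction are assumptions made for illustration.

    import numpy as np

    def blob_area(block: np.ndarray) -> float:
        # Total blob area of a foreground block, normalised by block size.
        return np.count_nonzero(block > 0) / float(block.size)

    def distribution_match(areas, stage_weight: float) -> float:
        # Ratio criterion: 1 when the blob area is spread equally over the partitions.
        areas = np.asarray(areas, dtype=np.float64)
        if areas.max() <= 0.0:
            return 0.0
        return stage_weight * float(areas.min() / areas.max())

    def global_blob_pattern_filter(foreground: np.ndarray, t1: float,
                                   num_stages: int = 3, stage_weights=(1.0, 0.7, 0.4)) -> float:
        blocks = [foreground]
        accumulated = 0.0
        for stage in range(num_stages):
            new_blocks = []
            for block in blocks:                   # split every block into two equal halves
                axis = stage % 2                   # alternate horizontal / vertical splits
                half = block.shape[axis] // 2
                if axis == 0:
                    new_blocks += [block[:half, :], block[half:, :]]
                else:
                    new_blocks += [block[:, :half], block[:, half:]]
            blocks = new_blocks
            areas = [blob_area(b) for b in blocks]
            accumulated += distribution_match(areas, stage_weights[stage])
        percentage = accumulated / num_stages      # accumulator output as a percentage of stages
        return percentage * t1                     # multiplied by the soft-threshold output T1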
In at least one embodiment of the initialization phase, there is provided a mean and variance computation mechanism in order to compute mean and variance of the output (of step 215) over a fixed number of non-motion frames. Ideally, non-motion frames should have a 'zero' output, but a variety of noise sources (camera, illumination, etc.) cause it to be non-zero.
In at least one embodiment of the initialization phase, there is provided a noise model computation mechanism. This computed mean and variance output is used to form a noise model. The noise model becomes a reference for further analyses for use by the system and method of this invention.
In its computation phase, the steps of the mechanism are as described below. After completion of the initialization phase, which generates a noise model, the computation phase is started.
In at least an embodiment of the computation phase, there is provided an updation mechanism in order to continuously update the noise model over time, the noise model being obtained by the initialization phase, mentioned above, in order to mainly account for gradual illumination change. This is however, restricted to only those frames where a high foreground is not detected. Else, this will create a wrong noise model.
The steps as per the global blob pattern analysis filter bank mechanism of the initializing phase are repeated for each and every frame, and the mean from the mean and variance computation mechanism is subtracted from the output for the current frame to form the output difference. There is provided a difference computation mechanism which performs this difference. Further, a first comparator mechanism is employed to compare if the output difference falls within the variance of mean blob area (as computed by the mean and variance computation mechanism) or if it is negative. If either of these is true, then the difference value (Diff) is set to zero. Else, the difference is analyzed for different cases. The value Diff is an indication of the changes in the frame after noise removal since the mean noise value is already subtracted. Further, a second comparator mechanism is employed to compare if the output difference is less than a preset minimum threshold and if the blob filter output is also less than a preset threshold. If this is true, then the difference value indicates motion in the scene. If the difference value is less than the preset threshold and the blob filter output is higher than a preset threshold, then the difference value indicates a likelihood of scene change. An alarm signal is provided to the output generator.
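The noise model and the two comparator stages can be sketched as follows. The mean and variance are collected over non-motion frames during initialization; at run time the mean is subtracted from the filter output and the result is classified. The two thresholds and the handling of the unspecified large-difference case are assumptions.

    import numpy as np

    class BlobFilterNoiseModel:
        def __init__(self, min_diff_threshold: float = 0.05, blob_output_threshold: float = 0.3):
            self.samples = []                              # filter outputs on non-motion frames
            self.min_diff_threshold = min_diff_threshold
            self.blob_output_threshold = blob_output_threshold

        def add_non_motion_sample(self, filter_output: float) -> None:
            self.samples.append(filter_output)

        def classify(self, filter_output: float) -> str:
            if not self.samples:
                return "no_change"                         # noise model not initialised yet
            mean = float(np.mean(self.samples))
            var = float(np.var(self.samples))
            diff = filter_output - mean                    # subtract the mean noise level
            if diff < 0 or diff <= var:                    # inside the noise band
                return "no_change"
            if diff < self.min_diff_threshold:
                if filter_output < self.blob_output_threshold:
                    return "motion"                        # small difference, small blob output
                return "likely_scene_change"               # small difference, high blob output
            return "likely_scene_change"                   # assumed handling of a large difference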
In at least an additional embodiment, there is provided an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises.
In at least one embodiment of the impulse noise effects' removal mechanism, there is provided a mean change computation mechanism in order to compute the mean change of the blob filter output difference.
In at least one embodiment of the impulse noise effects' removal mechanism, there is provided a plotting mechanism in order to plot blob filter output difference over time further in order to compute mean change. In at least one embodiment of the impulse noise effects' removal mechanism, there is provided a gradient determination mechanism in order to determine a gradient which crosses a preset gradient threshold.
In at least one embodiment of the impulse noise effects' removal mechanism, there is provided a plot division mechanism in order to divide the plot into two parts: 1) before the time of determined high gradient; and 2) after the time of determined high gradient.
In at least one embodiment of the impulse noise effects' removal mechanism, there is provided an additional mean computation mechanism in order to compute mean, separately, for the two divided parts of the plot division mechanism.
In at least one embodiment of the impulse noise effects' removal mechanism, there is provided a mean change determination mechanism in order to determine if the mean change for the two divided plots is less than a preset threshold. If true, then the output (step 215) is suppressed.
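A minimal sketch of this impulse-noise suppression is shown below: it looks for the latest high gradient in the history of the blob-filter output difference, splits the plot at that point, and suppresses the output when the mean change between the two parts is small. Both thresholds are assumed configuration values.

    import numpy as np

    def suppress_impulse_noise(diff_history, gradient_threshold: float = 0.2,
                               mean_change_threshold: float = 0.05) -> bool:
        diffs = np.asarray(diff_history, dtype=np.float64)
        gradients = np.diff(diffs)                          # frame-to-frame change of the plot
        spikes = np.nonzero(np.abs(gradients) > gradient_threshold)[0]
        if spikes.size == 0:
            return False                                    # no high gradient found
        split = spikes[-1] + 1                              # time of the latest high gradient
        if split >= diffs.size:
            return False
        before, after = diffs[:split], diffs[split:]        # divide the plot into two parts
        mean_change = abs(float(after.mean()) - float(before.mean()))
        return mean_change < mean_change_threshold          # small mean change => impulse noise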
Similar activity is carried out for foreground at frame boundary (using the foreground extraction mechanism) and color coherence features (using the color coherence estimation mechanism). The output from these features may take values between [-1, 1]. A negative value is indicative of motion (the higher the negative value, the higher the probability of motion) while a positive value is indicative of scene change (the higher the positive value, the higher the possibility of scene change).

In accordance with an additional embodiment of this invention, there is provided a color coherence vector output post-processing mechanism. Color Coherence Vector match measure as explained above is calculated for each and every frame and the analysis carried out after that is similar to the analysis carried on the blob filter mechanism. At first, noise model computation mechanism is employed with the color coherence vector to build a noise model. After that, computation phase is carried out. The only difference being that there is no subsequent action on the difference color coherence vector output except for scaling. This mechanism always gives a positive output in case of a change or zero output (approximately) when there is no change.
In accordance with an additional embodiment of this invention, there is provided a foreground density at boundary output post-processing mechanism.
As described above, a blob area output is calculated at the frame boundary. Afterwards, the method followed is similar to that performed in blob pattern analysis and color coherence vector analysis. The main purpose of this mechanism is to act as a belief multiplier. If the foreground area does not cross some predefined threshold, as set by a user, then the output is highly penalized so that a positive output towards scene change is discouraged. The user has the freedom to choose the frame boundary, by using a frame boundary definition mechanism, so that the application can be customized as per requirement.
In accordance with an embodiment of this invention, there is provided an alarm generation mechanism in order to generate an alarm upon detection of an event of camera tampering.
Outputs from all these mechanisms are passed to the output generator mechanism. If the output is frame blurring, low contrast, or partial or complete obstruction, then the output generator mechanism passes the alarm messages directly to the display. These cases can be investigated with a relatively high accuracy; hence, no noise model is formed, nor is a combination of features used. Scene change is the most difficult scenario and hence needs special attention. The system and method of this invention uses the Global Blob Analysis Filter mechanism, the color coherence estimation mechanism, and the foreground extraction mechanism, together through configurable weights, to generate a combined output. The output so obtained will be more robust against generic motion, tree leaf movements, and impulse noises affecting the performance of such automated electronic surveillance systems, hence resulting in fewer false positives/alarms.
While this detailed description has disclosed certain specific embodiments of the present invention for illustrative purposes, various modifications will be apparent to those skilled in the art which do not constitute departures from the spirit and scope of the invention as defined in the following claims, and it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.

We claim,
1) A camera view tampering detection system in order to detect tampering events of camera using video stream that said camera captures, said system comprises:
i. at least a frame decoding mechanism in order to decode frames from said video stream;
ii. at least a contrast detection mechanism in order to detect low contrast in said decoded frames;
iii. at least a frame blurring detection mechanism in order to detect blurring of each of said decoded frames;
iv. at least a down sampling mechanism in order to down sample said decoded frames;
v. at least an illumination noise removal mechanism in order to remove changes or noise caused by a change in lighting conditions, to obtain at least a current frame;
vi. at least a background modeling mechanism in order to extract a background frame or model from a continuous stream of decoded frames;
vii. at least a foreground extraction mechanism in order to extract foreground from said identified background frame;
viii. at least a difference and thresholding mechanism in order to compute the difference of said background frame and said current frame and thresholding said difference frame;
ix. at least an illumination noise removal mechanism in order to remove noise and obtaining a de-noised current frame after removal of noise;
x. at least a color coherence estimation mechanism in order to classify colors in a frame into coherent colors and incoherent colors; and
xi. at least a foreground density at frame boundary estimation mechanism in order to determine changes at edge(s) in frames in order to determine changes in frame boundary by computing normalized area of blobs in said foreground frame at a foreground frame boundary;
xii. characterized, in that, said system further comprises:
xiii. at least a connected foreground density estimation mechanism in order to eliminate false positives detected by said color coherence estimation mechanism and said foreground density at frame boundary estimation mechanism, said connected foreground density estimation mechanism comprising a filter bank mechanism in order to perform blob pattern analysis on frames; and
xiv. at least an output generator mechanism in order to generate positive outputs in relation to camera view tampering identification based on a plurality of parameters as determined by each of said mechanisms.
2) The camera view tampering detection system as claimed in claim 1 wherein, said contrast detection mechanism is a low contrast detection mechanism in order to detect a low contrast in said decoded frames, said mechanism adapted to activate a contrast alarm generation mechanism upon detection of low contrast.
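Claim 2 does not specify how low contrast is measured; a minimal illustrative sketch, assuming that a small grey-level standard deviation (hypothetical threshold) indicates low contrast, could be:

    import numpy as np

    def is_low_contrast(gray_frame, std_threshold=10.0):
        # Treat the frame as low contrast when the grey-level spread is small
        # (the standard-deviation test and threshold are assumptions).
        return float(np.std(gray_frame)) < std_threshold

    # Example: a nearly uniform frame would activate the contrast alarm.
    flat = np.full((120, 160), 128, dtype=np.uint8)
    print(is_low_contrast(flat))  # True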
3) The camera view tampering detection system as claimed in claim 1 wherein, said system comprises a blurring alarm generation mechanism in order to generate a blurring alarm upon detection of blurring by said frame blurring detection mechanism.

4) The camera view tampering detection system as claimed in claim 1 wherein, said frame blurring detection mechanism is a Markov Random Field based frame blurring detection mechanism.
5) The camera view tampering detection system as claimed in claim 1 wherein, said illumination noise removal mechanism is a Retinex model based illumination noise removal mechanism.
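As a non-authoritative illustration of a Retinex-style illumination correction, a single-scale form is sketched below; the sigma value, the rescaling convention, and the SciPy dependency are assumptions of this sketch, not part of the claim.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(gray_frame, sigma=30.0):
        # R(x, y) = log(I(x, y) + 1) - log((G_sigma * I)(x, y) + 1)
        img = gray_frame.astype(np.float64)
        illumination = gaussian_filter(img, sigma)   # estimated illumination
        retinex = np.log1p(img) - np.log1p(illumination)
        # Rescale to [0, 255] for further processing (assumed convention).
        retinex -= retinex.min()
        if retinex.max() > 0:
            retinex = 255.0 * retinex / retinex.max()
        return retinex.astype(np.uint8)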
6) The camera view tampering detection system as claimed in claim 1 wherein, said background modeling mechanism is an exponential updating background modeling mechanism, said exponential updating technique being performed only on those background frames which do not show any motion when analyzed by a block based motion-vector estimation technique.
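A minimal sketch of an exponential background update applied only to non-motion frames is given below; the mean-absolute-difference motion test merely stands in for the block based motion-vector estimation named in claim 6, and the constants are hypothetical.

    import numpy as np

    def update_background(background, frame, prev_frame,
                          alpha=0.05, motion_threshold=8.0):
        # Fold the current frame into the background model exponentially,
        # but only when the simple motion test says the scene is static.
        motion = np.mean(np.abs(frame.astype(np.float64) -
                                prev_frame.astype(np.float64)))
        if motion < motion_threshold:
            background = (1.0 - alpha) * background + alpha * frame
        return background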
7) The camera view tampering detection system as claimed in claim 1 wherein, said background modeling mechanism comprising a training mechanism wherein a preset number of frames are used for training in order to confirm a background as a reference image.
8) The camera view tampering detection system as claimed in claim 1 wherein, said color coherence estimation mechanism being a color coherence vector technique based color coherence estimation mechanism adapted to operate on said current frame and said background frame, wherein pixels from said current frame and said background frame are classified as being coherent or incoherent based on their grey level and neighborhood pixel connectivity.
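A colour coherence vector of this general kind can be illustrated as follows; the quantisation into 16 levels, the coherence size threshold tau, and the use of 4-connectivity are assumptions of this sketch, not limitations of the claim.

    import numpy as np
    from scipy.ndimage import label

    def color_coherence_vector(gray_frame, levels=16, tau=50):
        # Returns {level: (coherent_pixels, incoherent_pixels)}. A pixel is
        # coherent when its connected region (of the same quantised grey
        # level) has at least tau pixels; tau and levels are assumed values.
        quantised = (gray_frame.astype(np.int32) * levels) // 256
        ccv = {}
        for lv in range(levels):
            mask = quantised == lv
            labelled, n = label(mask)  # 4-connectivity by default
            coherent = incoherent = 0
            for comp in range(1, n + 1):
                size = int(np.sum(labelled == comp))
                if size >= tau:
                    coherent += size
                else:
                    incoherent += size
            ccv[lv] = (coherent, incoherent)
        return ccv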
9) The camera view tampering detection system as claimed in claim 1 wherein, said color coherence estimation mechanism being a color coherence vector technique wherein, in case of a scene change, if the color profile throughout the frame has a pre-determined bias, then an alarm generation mechanism determines a tampering event, such as a partial or complete obstruction.
10) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism is a global blob pattern analysis filter bank mechanism.
11) The camera view tampering detection system as claimed in claim 1 wherein, said system comprising a weight assignment mechanism, with user configurable weights, in order to assign weights to outputs of said color coherence estimation mechanism, said foreground density at frame boundary estimation mechanism, and said connected foreground density estimation mechanism, in order to take a weighted average of outputs from said color coherence estimation mechanism, said foreground density at frame boundary estimation mechanism, and said connected foreground density estimation mechanism, correspondingly.

12) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism
comprising at least a foreground frame receiving mechanism and wherein stage of operation is initialized
to a first stage.
13) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism
comprising:
• at least a first blob area calculator mechanism wherein said foreground frame is provided to said blob area calculator mechanism in order to compute the total area of blobs in said foreground frame; and
• at least a soft thresholding mechanism wherein output of said first blob area calculator mechanism is subjected to said soft thresholding mechanism to generate an output T1 and wherein said soft thresholding mechanism takes the total blob area and checks it against at least the following three conditions:

(a) if said total blob area is less than a pre-defined minimum area limit, then said soft thresholding mechanism gives an output of 0;
(b) if said total blob area is greater than the pre-defined minimum area limit but less than a pre-defined maximum area limit, then the output is a ramp with a pre-defined slope, which is dependent on the scaling required on the blob area output, and the value of T1 is this output;
(c) if said total blob area is greater than the pre-defined maximum area limit, then the output from the soft thresholding block is 1;
wherein, in cases (a) and (c), the system exits with output equal to T1.
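An illustrative sketch of a soft threshold with the three conditions of claim 13, using hypothetical area limits and a simple linear ramp between them:

    def soft_threshold(total_blob_area, min_area, max_area):
        # 0 below min_area, 1 above max_area, linear ramp in between
        # (the specific limits and ramp slope are assumptions).
        if total_blob_area < min_area:
            return 0.0
        if total_blob_area > max_area:
            return 1.0
        return (total_blob_area - min_area) / float(max_area - min_area)

    # Hypothetical usage
    print(soft_threshold(1500, min_area=1000, max_area=3000))  # 0.25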
14) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism
comprising:
• at least a first determination mechanism in order to determine if stage is a first stage and also if output is equal to T1; and
• at least a frame divider mechanism in order to divide the frame into two equal halves, spatially, when output of said first determination mechanism is positive.
15) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism
comprising at least a second determination mechanism in order to determine if stage is over by a stage-
over determination mechanism.
16) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising:
A. at least a first determination mechanism in order to determine if stage is a first stage and also if output is equal to T1;
B. at least a second determination mechanism in order to determine if stage is over by a stage-over determination mechanism;
C. characterized, in that, said filter bank mechanism comprising:
D. at least a third determination mechanism in order to determine truth value of said first determination mechanism and said stage-over determination mechanism such that:
E. if output of said first determination mechanism is true, then said filter bank mechanism takes said foreground frame as data input; or
F. if output of said second determination mechanism is true, then said filter bank mechanism takes data from already divided frames.
17) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a stage increment mechanism adapted to increment stage after division of frames.
18) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a first blob area calculator mechanism in order to calculate blob area for each half of the two equally divided said foreground frame, in a first loop, in order to find out whether the foreground is equally distributed throughout said foreground frame or whether it is concentrated in a few partitions, since being concentrated in only a few partitions is a sign of generic motion.
19) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a second blob area calculator mechanism in order to calculate blob area for further equally divided blocks, of each of earlier divided blocks, independently, in subsequent loops, in order to find out whether the foreground is equally distributed throughout said foreground frame or whether it is concentrated in a few partitions, since being concentrated in only a few partitions is a sign of generic motion.
20) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a distribution match filter mechanism in order to compare blob areas of divided frames in order to find out the distribution of blobs in the divided frames, wherein, if the distribution match criterion is a ratio, an output of 1 signifies the best equal distribution, and if the distribution match criterion is a difference, then an output of 0 signifies the best equal distribution.
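The distribution match of claim 20 could be sketched as below for a single split into two halves; the normalisation of the difference criterion is an assumed convention.

    import numpy as np

    def blob_area(foreground_half):
        # Total foreground (blob) area of one half, as a pixel count.
        return int(np.count_nonzero(foreground_half))

    def distribution_match(foreground_frame, criterion='ratio'):
        # Split the binary foreground frame into two equal halves and compare
        # their blob areas: ratio -> 1 is the best equal distribution,
        # difference -> 0 is the best equal distribution.
        h = foreground_frame.shape[0] // 2
        top, bottom = foreground_frame[:h], foreground_frame[h:2 * h]
        a, b = blob_area(top), blob_area(bottom)
        if criterion == 'ratio':
            return min(a, b) / max(a, b) if max(a, b) > 0 else 1.0
        return abs(a - b) / float(a + b) if (a + b) > 0 else 0.0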
21) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism
comprising at least a distribution match filter mechanism co-operating with a weight assignment mechanism
in order to give different weights for each stage, since:
(a) as the size of data becomes smaller and smaller, there is an increased risk of data being corrupted by noise; and
(b) there might be more blocks without any data when the division becomes smaller - these blocks cannot be given weights equal to their parent blocks.

22) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism
comprising:
I. at least an accumulator mechanism in order to receive output(s) from a distribution match filter mechanism till stage becomes equal to the minimum number of stages specified in said system;
II. at least a percentage computation mechanism in order to turn output of said accumulator mechanism into a percentage with respect to minimum number of stages;
III. at least a soft thresholding mechanism wherein output of said first blob area calculator mechanism is subjected to said soft thresholding mechanism to generate an output T1;
IV. at least a multiplier mechanism in order to multiply the output of said percentage computation mechanism with the output T1 of said soft thresholding mechanism to obtain a final output;
V. at least a mean and variance computation mechanism in order to compute mean and variance of the output of said multiplier mechanism over a fixed number of non-motion frames;
VI. at least a difference computation mechanism in order to compute a difference between the mean from said mean and variance computation mechanism and the output for the current frame to form an output difference, iteratively;
VII. at least a first comparator mechanism in order to compare if said computed difference falls within the variance of the mean blob area (as computed by said mean and variance computation mechanism) or if it is negative, and to set the value to zero if either is true, or to analyse the difference for different cases if false; and
VIII. at least a second comparator mechanism in order to compare if the output difference is less than a preset minimum threshold and if the blob filter output is also less than a preset threshold, in order to confirm motion in the scene, and, if the output difference is less and the blob filter output is high, in order to confirm likelihood of a scene change.
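A compressed, non-limiting sketch of items I to VIII of claim 22 follows; the thresholds, the exact form of the variance comparison, and the behaviour when the difference is large are assumptions of this sketch.

    def combine_stages(match_outputs, t1, noise_mean, noise_var,
                       min_diff=0.1, min_blob=0.3):
        # match_outputs: one distribution-match value per stage; t1: the
        # soft-threshold output; noise_mean / noise_var: from the mean and
        # variance computation mechanism. Threshold values are hypothetical.
        accumulated = sum(match_outputs)                      # I.  accumulator
        percentage = accumulated / float(len(match_outputs))  # II. percentage over stages
        final_output = percentage * t1                        # IV. multiplier
        difference = final_output - noise_mean                # VI. output difference
        # VII. first comparator: negative, or within the noise-model variance
        # (the exact comparison used here is an assumption).
        if difference < 0 or difference <= noise_var:
            return 0.0, 'no_event'
        # VIII. second comparator: small difference with a small blob output
        # confirms motion; small difference with a high blob output points to
        # a likely scene change. The final branch is an assumption.
        if difference < min_diff:
            return final_output, ('motion' if final_output < min_blob
                                  else 'possible_scene_change')
        return final_output, 'scene_change'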
23) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a noise model computation mechanism in order to compute mean and variance output to form a noise model, said noise model being a reference for further analyses for use by said system.
24) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising an updation mechanism in order to continuously update said noise model over time, said noise model being obtained by an initialization phase in order to account for gradual illumination change, said initialization phase being restricted to only those frames where a high foreground is not detected.
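A running noise model of the kind described in claims 23 and 24, updated only on frames with little foreground, might look as follows; the update rate and the foreground limit are hypothetical.

    class NoiseModel:
        # Exponentially updated mean/variance of the blob filter output,
        # skipped on frames where a high foreground is detected.
        def __init__(self, alpha=0.02, max_foreground=0.05):
            self.alpha = alpha
            self.max_foreground = max_foreground
            self.mean = 0.0
            self.var = 0.0

        def update(self, blob_output, foreground_fraction):
            if foreground_fraction > self.max_foreground:
                return  # skip frames with high foreground (per claim 24)
            delta = blob_output - self.mean
            self.mean += self.alpha * delta
            # Exponentially weighted variance update.
            self.var = (1.0 - self.alpha) * (self.var + self.alpha * delta * delta)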
25) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises.

26) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a mean change computation mechanism in order to compute mean change of blob filter output difference.
27) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least a plotting mechanism in order to plot blob filter output difference over time and in order to compute mean change.
28) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least a gradient determination mechanism in order to determine a gradient which crosses a preset gradient threshold.
29) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least a plot division mechanism in order to divide the plot into two parts: 1) before the time of determined high gradient; and 2) after the time of determined high gradient.
30) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least an additional mean computation mechanism in order to compute mean, separately, for the two divided parts of the plot division mechanism.
31) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising at least an impulse noise effects' removal mechanism in order to aid removal of false positives due to impulse noises characterized in that said impulse noise effects' removal mechanism comprising at least a mean change determination mechanism in order to determine if the mean change for the two divided plots is less than a preset threshold in order to suppress output from blob filter, if true.
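Claims 28 to 31 can be read together as the following sketch, in which the gradient and mean-change thresholds are hypothetical.

    import numpy as np

    def suppress_impulse(differences, gradient_threshold=0.5,
                         mean_change_threshold=0.05):
        # differences: blob filter output difference plotted over time.
        # Returns True when the output should be suppressed as an impulse
        # rather than reported as a tampering event.
        series = np.asarray(differences, dtype=np.float64)
        gradient = np.abs(np.diff(series))
        spikes = np.where(gradient > gradient_threshold)[0]
        if spikes.size == 0:
            return False  # no high gradient found, nothing to suppress
        t = int(spikes[0]) + 1  # time of the first high gradient
        before, after = series[:t], series[t:]
        if before.size == 0 or after.size == 0:
            return False
        mean_change = abs(float(after.mean()) - float(before.mean()))
        # A small mean change across the spike indicates an impulse.
        return mean_change < mean_change_threshold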
32) The camera view tampering detection system as claimed in claim 1 wherein, said filter bank mechanism comprising a value determination mechanism wherein a positive value from said filter bank mechanism is indicative of camera tampering and a negative value from said filter bank mechanism is indicative of motion.

33) The camera view tampering detection system as claimed in claim 1 wherein, said system comprising a noise model computation mechanism in order to build a noise model and further comprising a color coherence vector output post-processing mechanism in order to calculate a color coherence vector match measure for each and every decoded frame and to further compute a positive output of camera tampering in case of a change, or compute a zero output (ideal) when there is no change.
34) The camera view tampering detection system as claimed in claim 1 wherein, said system comprising a frame boundary definition mechanism in order to assess whether the foreground area at said frame boundary is below or above a pre-defined threshold, as defined by a user, in order to provide a corresponding negative or positive output, respectively, to the output generator of camera tampering.
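An illustrative check of foreground density near the frame boundary is sketched below; the border width and the threshold are user-defined in the claim and merely assumed here.

    import numpy as np

    def boundary_density_positive(foreground_frame, border=8, threshold=0.25):
        # Normalised foreground (blob) area within a border strip of the
        # binary foreground frame; True is a positive output towards the
        # output generator. Border width and threshold are assumptions.
        mask = np.zeros(foreground_frame.shape, dtype=bool)
        mask[:border, :] = True
        mask[-border:, :] = True
        mask[:, :border] = True
        mask[:, -border:] = True
        boundary_pixels = foreground_frame[mask]
        density = np.count_nonzero(boundary_pixels) / float(boundary_pixels.size)
        return density > threshold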
35) The camera view tampering detection system as claimed in claim 1 wherein, said output generator mechanism comprising an alarm generation mechanism in order to generate an alarm upon detection of an event of camera tampering.

Documents

Application Documents

# Name Date
1 ABSTRACT1.jpg 2018-08-11
2 807-MUM-2014-FORM 3.pdf 2018-08-11
3 807-MUM-2014-FORM 26(30-5-2014).pdf 2018-08-11
4 807-MUM-2014-FORM 2.pdf 2018-08-11
5 807-MUM-2014-FORM 2(TITLE PAGE).pdf 2018-08-11
6 807-MUM-2014-FORM 1.pdf 2018-08-11
7 807-MUM-2014-FORM 1(10-9-2014).pdf 2018-08-11
8 807-MUM-2014-DRAWING.pdf 2018-08-11
9 807-MUM-2014-DESCRIPTION(COMPLETE).pdf 2018-08-11
10 807-MUM-2014-CORRESPONDENCE.pdf 2018-08-11
11 807-MUM-2014-CORRESPONDENCE(30-5-2014).pdf 2018-08-11
12 807-MUM-2014-CORRESPONDENCE(10-9-2014).pdf 2018-08-11
13 807-MUM-2014-CLAIMS.pdf 2018-08-11
14 807-MUM-2014-ABSTRACT.pdf 2018-08-11