A Surveillance Camera And System For Environments With Power And Network Disruptions

Abstract: A surveillance system and surveillance camera designed for operating in environments with power and network disruptions are disclosed. The surveillance camera, in the case of a power disruption, reduces the power consumption and, in the case of a network disruption, reduces the storage requirements and the network bandwidth requirements through novel methods, enabling it to operate in environments with power and network disruptions.


Patent Information

Application #: 4402/CHE/2011
Filing Date: 15 December 2011
Publication Number: 25/2013
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2021-06-30
Renewal Date:

Applicants

Silvan Innovation Labs
No. 7, 2nd Floor, 10th Main, Jeevan Bhima Nagar Main Road, Bangalore

Inventors

1. Nandakumar Raghavan
Ramaneeyam, Palath (PO), Calicut-673611, Kerala
2. Mohan Gopalkrishna
22A, 2nd Cross, Judicial Officers Layout, Sanjay Nagar, Bangalore-560094

Specification

This application claims priority from the Indian provisional patent application No. 3853/CHE/2010 filed on Dec. 16, 2010.

FIELD OF THE INVENTION

This disclosure relates to surveillance systems and in particular to surveillance cameras operating in environments with power and network disruptions.

BACKGROUND OF THE INVENTION

Surveillance systems are widely used to observe, track and provide alarms/inputs for taking timely action in facilities, public places and important locations such as factories, malls, banks, stadiums, airports, bus and rail stations, roads, etc. One of the important devices in a surveillance system is the camera, operating in the visual or non-visual spectrum; these are referred to as surveillance cameras. With advancements in network technology, the surveillance system components such as cameras, fire and smoke detectors, recorders, remote viewers, etc. are interconnected through (data) networks, for example an IP (Internet Protocol) based network.

All such surveillance systems are critically dependent on the availability of power and network resources. In environments where these resources are prone to disruptions (non-availability of the specified resource) due to various outages, the effectiveness of the surveillance system suffers. If there is no power (blackouts) or if the power is incorrect (brownouts), the limited backup that is normally available forces the surveillance system to shut down within a short period of time. If there are network disruptions, the effectiveness of the system suffers, and total loss of surveillance capabilities can even result. The power and network disruptions may be caused intentionally (for example, by criminal acts) or unintentionally (such as power cuts, failure of parts such as breakers, fuses and cables in the power system, failure of devices such as switches, routers, cables and access points in the network, or network disruption caused by factors such as congestion).
In all these cases, even when there are battery backups, within a limited period of the outage all the data that should have been available from the cameras (and other sensors such as fire, smoke, etc.) is lost, because the cameras and network equipment stop operating (in the case of a power disruption) or the data is not transferred from the cameras (in the case of a network disruption).

SUMMARY OF THE INVENTION

The surveillance camera and system in this disclosure overcome these drawbacks in surveillance systems. The surveillance camera in this disclosure has built-in power backup (for example, rechargeable batteries) and power management which ensures that the camera continues to operate for longer periods in case of power failure. The power management in the camera identifies and shuts down, in stages, various less critical functions within the camera to reduce the power consumption. If the power disruption continues beyond a predetermined period, the power consumption is brought down to a bare minimum by operating only a low resolution content capture at a low frame rate, which is stored in the camera. Thus surveillance continues for extended periods, much longer than the period that would have been available otherwise.

The camera stores the content locally and transfers the stored content to the system as and when the required resources (power and network resources) are available. The frame rate of the video in the stored content may be varied to increase the duration for which the images may be stored. This helps in making the surveillance content available for the period during which there are power or network disruptions.

Similarly, in the case of a reduction in network bandwidth, the bit rate of the data transfer between the surveillance camera and the system is reduced progressively to ensure that real time content transfer continues (albeit at a lower resolution). The change in the video bit rate is done on the fly, i.e. without any need to stop the camera operations to reconfigure the bit rate. While the real time data transfer takes place at reduced bit rates, the content of the desired resolution is stored in the camera and transferred to the system when bandwidth becomes available, as detailed below.

Thus the surveillance camera is enabled to operate in environments with power and network disruptions.

BRIEF DESCRIPTION OF DRAWINGS

Example embodiments will be described with reference to the following accompanying drawings, which are described briefly below.

Figure 1 is an example environment in which several aspects of the present invention may be implemented.

Figure 2 is a block diagram of a surveillance camera in an embodiment.

Figure 3 is a flowchart of a surveillance camera operating in environments with network and power disruptions.

Figure 4 is a flowchart of an example approach for analysing power status of the surveillance camera.

Figure 5 is a flowchart of an example approach for analysing storage status of the surveillance camera.

Figure 6 is a flowchart of an example approach for analysing network status of the surveillance camera.

Figure 7 is a flowchart of an example approach for analysing network status in the VMS.

Figure 8 is a portion of the configuration data sent from the VMS to the surveillance camera to change the stream data rate through DCR.

DETAILED DESCRIPTION OF THE DRAWINGS

In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. The description is continued with an example implementation, with references to the accompanying drawings.

Several aspects of the invention are described below with reference to examples for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. One skilled in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details, or with other methods, etc. In other instances, well known structures or operations are not shown in detail to avoid obscuring the features of the invention.

Figure 1 is a block diagram of an example environment in which several aspects of the present invention may be implemented. The environment is shown containing cameras 101-103, Access Point 110, Local Terminal 115, Switch 125, Router 130, Video Server 135, Analog sensors 140, VMS (Video Management System) 145, Internet 160 and Remote Terminal 170. Each block is described in further detail below.

The block diagram is shown containing only representative systems for illustration. However, real-world environments may contain more/fewer/different systems/components/blocks, both in number and type, depending on the purpose for which the environment is designed, as will be apparent to one skilled in the relevant arts. For example, though only three cameras are shown, a surveillance system may contain tens or hundreds of cameras. Though two cameras are shown communicating wirelessly with the switch (through Access Point 110) while one camera is shown communicating over a wired medium, all cameras in a surveillance system may communicate wirelessly, whereas in another surveillance system all cameras may communicate over a wired medium. The local terminal is shown communicating through a wireless medium, though in other systems the Local Terminal may communicate through a wired medium.

Access Point 110, Switch 125, and Router 130 constitute the well known components of a network for the surveillance system. Access Point 110 connects to wireless devices such as cameras 101-102 and Local Terminal 115 over a wireless network, for example a wireless network based on well known wireless Ethernet technology standards such as IEEE 802.11. Surveillance camera 103 is connected to the surveillance system through Switch 125 over a wired network, for example based on Ethernet technology standards such as IEEE 802.3. While Switch 125, Router 130, Video Server 135, Analog sensors 140 and VMS 145 are shown connected to the surveillance network using cables, it may be appreciated that they may also be connected using wireless or a combination of wired and wireless networking technologies, as is well known in the arts.

Video Server 135 is used to interface analog video cameras (not shown) to the surveillance system. The analog video is digitized and converted into a format compatible with the network protocols being used in the surveillance network so that the converted video is available to the surveillance network.

Analog sensors 140 represent various other well known sensors, such as fire and smoke detectors, flood alarms, etc., which form part of the surveillance system. It may be noted that many surveillance systems may also use such sensors providing digital outputs.

VMS 145 accepts content from surveillance cameras and inputs from other sensors, and manages access to the surveillance data (content and other inputs). VMS may also provide services such as recording the surveillance data, analyzing the surveillance data (online or offline), etc.

VMS 145 may receive content from surveillance cameras 101-103 over a network comprising wired and wireless components (such as Access Point 110, Switch 125, Router 130 and cables). If one or more of the VMS, the network resources, etc. become disrupted due to various causes such as a power outage, equipment fault, criminal activity, etc., VMS 145 may not receive the content from the surveillance cameras 101-103. On ending of the disruption and resumption of working of the affected devices or resources, VMS 145 may connect to the surveillance camera and download the content covering the duration for which the disruption occurred, as described below.

At other times, the network resources may become degraded (due to the factors listed above or other reasons), resulting in reduction of the available network bandwidth and hence the ability to receive the content in full. At pre-determined intervals (for example, every minute) VMS 145 may measure and inform the connected surveillance camera(s) of the available bandwidth. If the available bandwidth has reduced, the affected surveillance camera(s) may reduce the content resolution so as to restrict the bandwidth requirement for content transfer to the available bandwidth as determined by VMS 145, and thus maintain the content transfer to the surveillance system. Meanwhile, the surveillance camera may store the content in the desired resolution (the resolution set when there is no reduction in the available network bandwidth due to any disruption) in the local storage associated with the surveillance camera. When the network bandwidth is restored, this stored video may be transferred to VMS 145.

Internet 160 represents a conglomeration of one or more constituent networks providing connectivity between cameras 101-103, Local Terminal 115, Video Server 135, Analog sensors 140, VMS (Video Management System) 145 and Remote Terminal 170. Internet 160 may be implemented using protocols such as Internet Protocol (IP), well known in the relevant arts, with each of the systems also potentially operating consistent with IP.

Remote Terminal 170 may be used to access the surveillance system from a remote location. The system may be accessed to monitor, control and configure the surveillance system or its constituent components, for example using a web browser. Cameras 101-103 may also be accessed and controlled from the Remote Terminal. Local Terminal 115 performs locally the functions that Remote Terminal 170 performs remotely over the Internet. Remote Terminal 170 and Local Terminal 115 may be any device which may be connected to a network over a wired connection or a wireless connection, such as a Personal Computer, PDA, smart phone, iPad, etc.

Cameras 101-103 are surveillance cameras. Cameras 101 and 102 are connected to the surveillance system over a wireless network. Camera 103 is connected to the surveillance system over a wired network connection. Several cameras may be positioned in various strategic locations of the area under surveillance, and the content generated by them may be transferred to a central location for viewing, archiving and further processing of the video images. In this document, the term camera is used to refer to a surveillance camera (a camera used in a surveillance system).

The description is continued with the block diagram of a surveillance camera in an embodiment of the present invention.

Figure 2 is a block diagram of a surveillance camera (such as cameras 101-103), illustrating an example embodiment in which several aspects of the present invention may be implemented. Camera 101 is shown containing lens enclosure 205, lens assembly 206, image sensor array 207, Mic 210, Audio I/F (Interface) 211, Processor 201, RAM 215, Video Amp 220, Storage 225, Network I/F 230, Other I/F 235, Power Control 260, Power Supply 265 and Battery 270. For conciseness and ease of comprehension, only those components which are relevant to the understanding of the operation of the example embodiment are included and described. Each component of Figure 2 is described below in detail.

Lens enclosure 205, denoted by dotted lines, is shown housing lens assembly 206 and image sensor array 207. The lens enclosure prevents extraneous light (i.e., light other than that which passes through the lens assembly) from falling on sensor array 207 (for example CCD, CMOS, IR/thermal, etc.). Lens assembly 206 may contain one or more lenses, which may be configured to focus light from a scene onto image sensor array 207.

Sensor array 207 may contain an array of sensors, with each sensor generating an output value representing the corresponding point (pixel) of the image, proportionate to the amount of light that is allowed to fall on the sensor. The output of each sensor is converted to a corresponding digital value (for example, in RGB format). The digital values produced by the sensors are forwarded to processor 201 for further processing. Mic 210 and Audio I/F 211 together capture the audio signals produced in the area under surveillance by camera 101, convert the captured audio signal into corresponding digital values and forward the digital values to processor 201 for further processing. It may be noted that surveillance camera 101 may be built without Mic 210 and Audio I/F 211 where capture of audio signals is not required.

RAM 215 stores program (instructions) and/or data used by processor 201. Pixel values received from sensor array 207 for processing may be stored in RAM 215 by processor 201.

Video Amp 220 converts the digital values of pixels into an image in analog form (such as RGB). The analog image is provided as an output to a connector such as a BNC connector to which devices which accept analog video (such as an analog monitor) may be connected.

Network I/F 230 provides connectivity to a network using various protocols (e.g., Internet Protocol, IP), and may be used to receive/transmit images and commands. The network interface may be designed to work wirelessly (e.g., cameras 101 and 102 above) or over a wired medium (e.g., camera 103 above). Other I/F 235 may consist of interfaces for various other sensors such as fire detectors, smoke detectors, etc. The data from these sensors may also be integrated into the content provided by the camera.

Storage 225 may contain one or more non-volatile memories and may store content, which may include video, audio, events and other sensor output (surveillance cameras may aggregate the output of other sensors such as fire and smoke detectors, flood alarms, etc.) received from processor 201. In an embodiment, storage 225 is implemented as a flash memory. Alternatively, storage 225 may be implemented as one or more removable plug-in cards (e.g. SD, SDHC, microSD, etc.) or Hard Disk Drives, etc., well known in the arts.

Storage 225 may also contain additional memory units (e.g. ROM, EEPROM, HDD, etc.), which store various instructions which, when executed by processor 201, provide various features of the invention described herein. RAM 215 and storage 225 (together or individually) represent examples of memory units from which Processor 201 may access data (images) and instructions to provide various features of the present invention.

Power Supply 265 takes the mains power from the grid and converts it into the voltages (generally low voltage DC) required for the camera, as well as for charging Battery 270 (wherever rechargeable batteries are used). Power Supply 265 may be implemented in a well known manner using technologies such as Switched Mode Power Supply (SMPS), transformer/rectifier, etc. The power may also be supplied through other means such as PoE (Power over Ethernet), etc., well known in the arts.

Battery 270 provides the power for the camera when the power from the normal source (grid power, PoE, etc.) is not available. Battery 270 may be rechargeable (for example, NiCd, NiMH, Lead Acid, Li-ion, Li-polymer, etc., well known in the arts).

Power Control 260 may perform the power control functions: making available the battery power when grid power is not available, maintaining the charge in the battery when grid power is available, and maintaining a database of grid power failures and their durations so that this information may be used to decide on the subsystems in the camera that may be run on battery power (when grid power is not available) to maintain the camera output (images) for the longest possible duration in the absence of grid power.

Processor 201 may execute instructions stored in memory (such as RAM 215 or storage 225) to provide several features of the present invention. Processor 201 may contain multiple processing units, with each processing unit potentially being designed for a specific task such as video streaming, video compression, etc. Alternatively, Processor 201 may contain only a single general purpose processing unit.

Processor 201 may process the content, such as encoding the video using well known techniques such as [M]JPEG, MPEG-4, H.264, etc. Processor 201 may also analyse the content to detect events of interest (such as no change in the scene for a number of successive frames, etc.) using techniques such as motion detection, well known in the arts.

Processor 201 may operate to analyse the power and network bandwidth status and determine the actions in case of disruptions, to enable the surveillance camera to provide the surveillance content for a much longer period, as described below with examples.

Figure 3 is a flowchart illustrating the manner in which a surveillance camera and system may operate in environments with power and network disruptions. The flowchart is described with respect to Figures 1 and 2 merely for illustration. However, various features can be implemented in other environments and with other components. Furthermore, the steps are described in a specific sequence merely for illustration.

Alternative embodiments in other environments, using other components and a different sequence of steps, can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 301, in which control passes immediately to step 310.

In step 310, processor 201 analyses the power status. The availability of power from the normal source is checked. If the power from the normal source is not available, the power consumption is reduced to prolong the operation of the surveillance camera, as described in sections below.

In step 320, processor 201 analyses the storage status. Processor 201 checks whether there are changes in the scene from frame to frame. If there are no changes over a large number of frames, the storage is changed to a low frame rate, as described in later sections.

In step 330, processor 201 analyses the network status. Processor 201 checks whether the network resources are available. Processor 201 determines the available network bandwidth and changes the data transfer rates (by changing the video quality and frame rate in the content) so that the data transfers can take place within the available bandwidth, as described below.

If the network resources are not available, or if the network bandwidth is less than that required for transfer of content at the selected resolution (video quality, frame rate, etc.), processor 201 acts to store the content in the built-in storage of the camera (such as storage 225). The content is transferred to the surveillance system (through VMS 145) when the network resources are restored. The stored data may be transferred through a channel separate from the channel used for transferring real time content. In an embodiment, the stored data is transferred by a secure FTP server in the surveillance camera over a lossless channel (using the TCP protocol), implemented in a manner well known in the arts. The flowchart ends in step 399.
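To make the top-level flow concrete, the sketch below (purely illustrative, not part of the original disclosure) casts steps 310, 320 and 330 of Figure 3 as a periodic loop. The function names, the camera interface and the 60-second polling period are assumptions.

```python
# A minimal, hypothetical sketch of the Figure 3 loop; the real firmware
# interfaces and check period are not specified in this form in the disclosure.
import time

CHECK_INTERVAL_S = 60  # assumed polling period

def analyse_power_status():
    """Step 310: check the normal source and reduce power consumption if it is absent."""
    pass  # see the Figure 4 sketch later in this description

def analyse_storage_status():
    """Step 320: switch to a dynamic frame rate when the scene is static."""
    pass  # see the Figure 5 sketch later in this description

def analyse_network_status():
    """Step 330: adapt the stream data rate to the available bandwidth,
    buffering full-resolution content locally when the network is disrupted."""
    pass  # see the Figure 6 sketch later in this description

def main_loop():
    while True:  # flowchart steps 301 -> 310 -> 320 -> 330 -> 399, repeated
        analyse_power_status()
        analyse_storage_status()
        analyse_network_status()
        time.sleep(CHECK_INTERVAL_S)
```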

The description is continued with the manner in which processor 201 analyses the power status, storage status and the network status, with examples.

Figure 4 is a flowchart illustrating an example approach to analyzing the power status of a surveillance camera for environments with power and network disruptions. The flowchart is described with respect to Figures 1-3 merely for illustration. However, various features can be implemented in other environments and with other components. Furthermore, the steps are described in a specific sequence merely for illustration.

Alternative embodiments in other environments, using other components and a different sequence of steps, can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 401, in which control passes immediately to step 410.

In step 410, processor 201 checks whether power from the normal source is available. The normal supply may be from the power grid or from a DC supply rail, PoE, etc., which allows the rated power to be drawn continuously till there is some disruption such as a cable getting cut, a fuse blowing, etc. The disruption may be accidental, such as natural calamities, etc., or deliberate, such as acts of sabotage.

The backup source may be in the form of batteries, such as battery 270, which may provide power for a limited period of time. Since the backup power is limited, it may be desirable to reduce the power consumption as much as possible so that the backup source lasts as long as possible. The surveillance camera may reduce the power consumption by stopping one or more software and hardware components.

In an example embodiment, once the normal source is not available, the surveillance camera may be operated in a low power mode or an ultra low power mode. In the low power mode, software components such as the web server, streaming server and all encoders and encoding pipelines except one are stopped. Similarly, hardware components such as the networking subsystem, video out signal, VPBE (Video Processing Back End) and all encoders except one are stopped. This turns the camera into a single channel capture, encode and store device. In the example embodiment, the low power mode was observed to reduce the power consumption to 18% to 23% of that of the normal mode (when all the software and hardware components are running).

In the ultra low power mode, the video resolution and frame rate (frames per second, fps) are brought down to further reduce the power consumption. In the example embodiment, the video resolution is brought down from High Definition (which may be 1080p at 60 fps, 1080p at 30 fps or 720p at 30 fps, depending on the settings for a particular installation) to VGA resolution (640 by 480 pixels) at 24 fps. The power consumption may be reduced still further by reducing the system clock to a level just enough to support this reduced resolution and frame rate. The ultra low power mode was observed to reduce the power consumption to approximately 30% of that of the low power mode (or approximately 5.4% to 6.9% of the normal mode).
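As a check, the quoted figure relative to the normal mode follows directly from the two observed ratios above:

\[
P_{\text{ultra low}} \approx 0.30 \times P_{\text{low}} \approx 0.30 \times (0.18\ \text{to}\ 0.23) \times P_{\text{normal}} \approx (0.054\ \text{to}\ 0.069) \times P_{\text{normal}}
\]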

If the power from the normal source is available, control passes to step 440. If power from the normal source is not available, processing continues to step 420.

In step 420, processor 201 checks whether the remaining battery capacity (backup source capacity) is less than a first predetermined level L1. Once the normal source is not available, it is desirable to conserve the backup power. However, if the disruption in the normal source is for a short period of time, it may not be desirable to reconfigure the surveillance camera (as described above) to reduce power consumption. Therefore, processor 201 waits till the remaining capacity of the backup source falls below the first predetermined level L1 before changing the camera mode to the low power mode. In an example embodiment, the first predetermined level L1 is set at about 60% of the capacity of the backup source.

If the capacity is less than the first predetermined level L1, processing continues to step 430. If the capacity is not less than the first predetermined level L1, control passes to step 440.

In step 430, processor 201 checks whether the remaining battery capacity is less than a second predetermined level L2. When the capacity of the battery falls below the second predetermined level L2, it indicates that a significant portion of the capacity of the backup source has been used up, and hence there is a necessity to further reduce the power consumption of the surveillance camera (by changing the camera mode to the ultra low power mode). In an embodiment, the second predetermined level L2 is set at about 30% of the capacity of the backup source.

If the backup source capacity is less than the second predetermined level L2, control passes to step 460. Otherwise, processing continues to step 450.

In step 440, processor 201 changes the mode of the surveillance camera to normal mode, if not already in normal mode. Control then passes to step 499.

In step 450, processor 201 changes the mode of the surveillance camera to low power mode, if not already in low power mode. Control then passes to step 499.

In step 460, processor 201 changes the mode of the surveillance camera to ultra low power mode, if not already in ultra low power mode. Control then passes to step 499. The flowchart ends in step 499.
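As an illustration only, the threshold logic of steps 410-460 might be expressed as below. This is a minimal Python sketch; the mode names and the fractional capacity representation are assumptions, while the 60%/30% levels follow the example embodiment.

```python
# Hypothetical sketch of the Figure 4 decision logic; thresholds L1 and L2
# follow the example embodiment (60% and 30% of backup capacity).
L1 = 0.60  # first predetermined level
L2 = 0.30  # second predetermined level

def select_power_mode(normal_source_available: bool, battery_capacity: float) -> str:
    """Return the camera mode for the given power status.

    battery_capacity is the remaining backup capacity as a fraction of full (0.0-1.0).
    """
    if normal_source_available:        # step 410 -> step 440
        return "normal"
    if battery_capacity >= L1:         # step 420 (not below L1) -> step 440
        return "normal"
    if battery_capacity >= L2:         # step 430 (below L1 but not L2) -> step 450
        return "low_power"
    return "ultra_low_power"           # step 430 (below L2) -> step 460
```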

In the example embodiment, it has been observed that, using the predetermined levels L1 and L2 and the associated low power/ultra low power modes described above, the duration for which the backup power supply is available may be increased to about 2.5 times for a wired camera and to about 5 times for a wireless camera, when compared to not switching to the low power/ultra low power modes (that is, continuing in the normal mode even when the normal source is not available).

It may be appreciated that the software/hardware components that are turned off and the first and second predetermined levels L1 and L2 may be set to achieve specific requirements of the end user of the surveillance system. For example, if a longer duration of operation is the objective, the first and second predetermined levels may be set to higher values. If better quality content is the objective, instead of VGA resolution the video may be set at a higher resolution, and so on.

The description is now continued with the manner in which processor 201 analyses the storage status, with examples.

Figure 5 is a flowchart illustrating an example approach to analyzing the storage status of a surveillance camera for environments with power and network disruptions. The flowchart is described with respect to Figures 1-4 merely for illustration. However, various features can be implemented in other environments and with other components. Furthermore, the steps are described in a specific sequence merely for illustration.

Alternative embodiments in other environments, using other components and a different sequence of steps, can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 501, in which control passes immediately to step 510.

In step 510, processor 201 checks whether the scene has changed in the previous n successive frames, where n is a predetermined number. The changes in the scene may be determined using video analysis (motion detection) well known in the arts. If there is no change in the scene from frame to frame, there is no need to store the unchanged frames again and again. However, for various reasons such as error correction, etc., it may be advisable to store a frame in every occurrence of a predetermined time interval, say N seconds. This may be referred to as dynamic frame rate, where, as long as the scene doesn't change, the camera stores one frame every N seconds.

To ensure that there are no frequent jumps in the content, processor 201 waits for the predetermined number of successive frames (n) before adopting the dynamic frame rate.

In an example embodiment, the normal storage is at 30 fps (referred to as high quality mode). When the scene doesn't change for 300 successive frames (n = 300), the dynamic frame rate is applied, where 1 frame is stored every second (N = 1 second). It has been observed that, with these values for n and N, considerable savings in storage space may be achieved. For example, for an office parking lot a saving of 77%, for an office corridor a saving of 68%, and for a street in front of a house in the suburbs a saving of 55% was observed, whereas for a busy street a 12% saving in storage was observed.

It may be appreciated that the values of n and N may be changed appropriately to achieve more savings (with probably some jumps in the content), smoother content, etc.

If the scene has changed in the previous n successive frames, processing continues to step 520. Otherwise, control passes to step 540.

In step 520, processor 201 changes the storage to high quality mode, if not already in high quality mode. Control then passes to step 599.

In step 540, processor 201 changes the storage to dynamic frame rate mode, if not already in dynamic frame rate mode. Control then passes to step 599. The flowchart ends in step 599.
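A minimal sketch of the Figure 5 decision follows, assuming the example values n = 300 and N = 1 second; the function and mode names are illustrative, not from the disclosure.

```python
# Hypothetical sketch of the Figure 5 storage decision; n and N follow the
# example embodiment (n = 300 unchanged frames, N = 1 second).
n = 300   # predetermined number of successive unchanged frames
N = 1.0   # predetermined storage interval (seconds) in dynamic frame rate mode

def select_storage_mode(unchanged_frame_count: int) -> str:
    """Return 'high_quality' (e.g. 30 fps) or 'dynamic' (one frame every N seconds)."""
    # step 510: has the scene changed within the previous n successive frames?
    if unchanged_frame_count < n:
        return "high_quality"   # step 520
    return "dynamic"            # step 540

def frames_stored_per_second(mode: str, normal_fps: float = 30.0) -> float:
    """Storage rate implied by the selected mode."""
    return normal_fps if mode == "high_quality" else 1.0 / N
```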

The description is now continued with the manner in which processor 201 analyses the network status, with examples.

Figure 6 is a flowchart illustrating an example approach to analyzing the network status of a surveillance camera for environments with power and network disruptions. The flowchart is described with respect to Figures 1-5 merely for illustration. However, various features can be implemented in other environments and with other components. Furthermore, the steps are described in a specific sequence merely for illustration.

Alternative embodiments in other environments, using other components and a different sequence of steps, can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 601, in which control passes immediately to step 620.

In step 620, processor 201 sends the data snippet(s) to VMS 145. The data snippets are blocks of data that are stored in the surveillance camera. When the surveillance camera is switched on, the details of the stored data snippet(s), such as the uri (uniform resource identifier), size, checksum, etc., are sent to the VMS.

The data snippets are used to determine the bandwidth of the network channel from the surveillance camera to the VMS (and hence to the surveillance system). The data snippets are transmitted to the VMS periodically. The VMS checks the integrity of the received data (using the checksum) and, if the received data integrity is preserved, computes the data transfer speed (and hence the network bandwidth). If the available bandwidth is not sufficient to support the content stream(s) from the surveillance camera to VMS 145, the resolution and/or the frame rate of the content stream(s) from the surveillance camera may be changed, so that the available bandwidth can support the content streams.

In an example embodiment, the data snippet is 10 KB in size. The data snippet is sent from the surveillance camera once every minute. The frame rate may be brought down from 60 fps/30 fps to as low as 1 fps, and the resolution from 1080p/720p to 480p. The changes are done on the fly, as described above.

In step 630, processor 201 receives a bandwidth adaptation command from the VMS. In step 640, processor 201 examines the received command for new parameters. If new parameter(s) have been received, processing continues to step 650. Otherwise, control passes to step 699.

In step 650, processor 201 applies the new parameters received to change the content stream(s) data rate. In the example embodiment, the stream data rate is changed by changing the resolution and/or the frame rate (as explained above) on the fly, by applying the new parameters through DCR (Dynamic Codec Reconfiguration). The codecs used in the surveillance camera may be reconfigured without interrupting the encoding operation, thus enabling DCR. The camera may pass on the details of the change (the new parameters of the content stream, the time stamp from which the change is effective, etc.) to the VMS. The flowchart ends in step 699.
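The camera-side behaviour of Figure 6 might look roughly as follows. This is an illustrative Python sketch: the vms and encoder interfaces and the command field names are assumptions, while the 10 KB snippet and one-minute period follow the example embodiment.

```python
# Hypothetical camera-side sketch of Figure 6: periodically send a stored data
# snippet so the VMS can measure the channel, then apply any new stream
# parameters received in a bandwidth adaptation command on the fly (DCR).
import time
from zlib import crc32

SNIPPET_SIZE_BYTES = 10 * 1024   # 10 KB snippet (example embodiment)
SNIPPET_PERIOD_S = 60            # one snippet per minute (example embodiment)

def network_loop(vms, encoder, snippet: bytes) -> None:
    while True:
        # step 620: send the snippet along with its size and checksum
        command = vms.send_snippet(snippet, size=len(snippet),
                                   checksum=crc32(snippet))
        # steps 630/640: a bandwidth adaptation command may carry new parameters
        if command and "video" in command:
            # step 650: apply the new resolution/frame rate without stopping
            # the encoder (Dynamic Codec Reconfiguration)
            encoder.reconfigure(**command["video"])
        time.sleep(SNIPPET_PERIOD_S)
```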

It may be noted that the change in resolution and/or frame rate is effective only for the streamed data (which is in real time). The content stored in the storage continues to be at the resolution and/or frame rate which was set, as the storage is not affected by the network bandwidth. The stored data may be accessed by VMS 145, as described earlier.

The description is now continued with the manner in which VMS 145 analyses the network status, with examples.

Figure 7 is a flowchart illustrating an example approach to analyzing the network status at the VMS working with a surveillance camera for environments with power and network disruptions. The flowchart is described with respect to Figures 1-6 merely for illustration. However, various features can be implemented in other environments and with other components. Furthermore, the steps are described in a specific sequence merely for illustration.

Alternative embodiments in other environments, using other components and a different sequence of steps, can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 701, in which control passes immediately to step 720.

In step 720, VMS 145 receives the data snippet(s) from the camera and computes the data transfer rate. As described above, when the surveillance camera is switched on, processor 201 sends the details of the data snippet(s) to the VMS. The snippets sent by the surveillance camera in step 620 are received by the VMS in step 720. VMS 145 computes the network bandwidth as described above, from which the data transfer rate may be computed.

In step 730, VMS 145 checks whether the data transfer rate is sufficient to support the current content stream(s). The VMS may compare the available network bandwidth with the total bandwidth required for the current content stream(s). If the available data transfer rate is sufficient, processing continues to step 735. Otherwise, control passes to step 750.

In step 735, VMS 145 checks whether a higher rate content stream is available. If it is available, processing continues to step 740. Otherwise, control passes to step 799.

In step 740, VMS 145 checks whether the data transfer rate is sufficient to support a higher rate content stream. If it is sufficient, processing continues to step 745. Otherwise, control passes to step 799.

In step 745, VMS 145 switches to a higher rate data stream. VMS 145 may send a bandwidth adaptation command with the appropriate parameters to effect the switching. Processing then continues to step 799.

In step 750, VMS 145 checks whether a lower rate content stream is available. If it is available, processing continues to step 755. Otherwise, control passes to step 760.

In step 755, VMS 145 switches to a lower rate data stream. VMS 145 may send a bandwidth adaptation command with the appropriate parameters to effect the switching. Processing then continues to step 799.

In step 760, VMS 145 checks whether the surveillance camera has already been changed to the lowest bit rate through DCR. If it has been changed to the lowest bit rate, control passes to step 799. Otherwise, processing continues to step 770.

In step 770, VMS 145 sends a request to the camera to reduce the bit rate through DCR. VMS 145 may send a bandwidth adaptation command with the appropriate parameters to effect the switching. The flowchart ends in step 799.
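For illustration, the VMS-side decisions of Figure 7 can be summarised as below. This is a hypothetical Python sketch; the stream-rate bookkeeping and the string return values are assumptions, while the available rate is derived from the timed snippet transfer as described above.

```python
# Hypothetical sketch of the VMS side (Figure 7): estimate the transfer rate from
# the timed snippet transfer and switch streams, or request DCR, accordingly.
from typing import Optional

def on_snippet_received(snippet_bytes: int, transfer_seconds: float,
                        current_rate_bps: float,
                        higher_rate_bps: Optional[float],
                        lower_rate_bps: Optional[float],
                        at_lowest_dcr_rate: bool) -> str:
    # step 720: compute the available data transfer rate from the snippet transfer
    available_bps = 8 * snippet_bytes / transfer_seconds
    # step 730: is the rate sufficient for the current content stream(s)?
    if available_bps >= current_rate_bps:
        # steps 735/740/745: switch up if a higher rate stream exists and fits
        if higher_rate_bps is not None and available_bps >= higher_rate_bps:
            return "switch_to_higher_rate_stream"
        return "no_change"
    # steps 750/755: switch down to a lower rate stream if one is available
    if lower_rate_bps is not None:
        return "switch_to_lower_rate_stream"
    # steps 760/770: otherwise ask the camera to reduce its bit rate through DCR
    return "no_change" if at_lowest_dcr_rate else "request_dcr_reduction"
```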

Figure 8 depicts a portion of the configuration data sent from the VMS to a surveillance camera (such as surveillance cameras 101-103) to change the stream data rate through DCR, as described earlier, in an embodiment. Though the content is shown encoded in the well known extensible markup language (XML) according to one convention, other encodings/formats and conventions may be used for representing the data.

Line 805 (<StreamingChannel version="1.0" xmlns="urn:silvan-dhruv">) indicates that the configuration data version is "1.0" and that the xmlns defines the XML namespace specific to this device class. This definition provides the first part of the uniform resource name. Line 807 (<channelName>Input 1 MPEG-4 ASP</channelName>) identifies the input channel name as "Input 1 MPEG-4 ASP". Line 809 indicates whether this particular streaming channel is currently enabled. Many streaming channels may be available, of which only a few may be enabled at any given time, depending on the requirements at that time. The video specific information (all the details of the video part of this channel) of this particular streaming channel is given within the <video> and </video> tags. Line 815 (<videoInputChannelID>2</videoInputChannelID>) identifies the videoInputChannelID as "2". In the surveillance camera there may be many videoChannelIDs (each of which identifies a video definition set) shared among different streaming channels.

Lines 817 to 827 specify the video parameters such as the encoding scheme (H264, MPEG4, MJPEG, etc.), scan type (progressive or interlaced), width and height of a frame in the sequence, and the top left position of the frame of the rendered video after decoding. In the example XML string given, these are respectively MPEG4, progressive scan, 640, 480, 0 and 0.

Line 829 (<videoQualityControlType>CBR</videoQualityControlType>) relates to the bit rate control scheme (for example, VBR (variable bit rate), CBR (constant bit rate), CVBR (constant variable bit rate, a hybrid scheme), etc.) used during the encoding. If the videoQualityControlType chosen is CBR (as in line 829), line 831 gives the bit rate used, in Kbps. Line 835 specifies the maxFrameRate (frame rate) in hundredths, to make it possible to specify fractional frame rates (for example, 29.97 in the NTSC scheme). In line 835, maxFrameRate (2500) is 25. Similarly, the key frame interval is 10 in line 837 (keyFrameInterval 1000). Line 839 specifies rotationDegree as 0 (in an example embodiment the possible values are 0, 90, 180 and 270, though other values may also be used). Line 841 specifies mirrorEnabled as false (out of vertical, horizontal, true and false in an example embodiment). Line 843 specifies snapShotImageType as JPEG; other well known formats such as PNG may also be used. Lines 845 and 847 are XML tags denoting the end of the video and StreamingChannel schema.
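For readability, the fragment described above can be reconstructed approximately as follows. Tag names not quoted in the text (for example <enabled>, the codec/scan/resolution/position tags and the CBR bit rate element, whose value is not stated) are inferred placeholders rather than the exact markup of Figure 8.

```xml
<!-- Approximate reconstruction of the Figure 8 fragment from the description above.
     Unquoted tag names and the bit rate value are hypothetical placeholders. -->
<StreamingChannel version="1.0" xmlns="urn:silvan-dhruv">          <!-- line 805 -->
  <channelName>Input 1 MPEG-4 ASP</channelName>                    <!-- line 807 -->
  <enabled>true</enabled>                                          <!-- line 809 -->
  <video>
    <videoInputChannelID>2</videoInputChannelID>                   <!-- line 815 -->
    <videoCodecType>MPEG4</videoCodecType>                         <!-- lines 817-827 -->
    <videoScanType>progressive</videoScanType>
    <videoResolutionWidth>640</videoResolutionWidth>
    <videoResolutionHeight>480</videoResolutionHeight>
    <positionX>0</positionX>
    <positionY>0</positionY>
    <videoQualityControlType>CBR</videoQualityControlType>         <!-- line 829 -->
    <constantBitRate>512</constantBitRate>                         <!-- line 831; value hypothetical, in Kbps -->
    <maxFrameRate>2500</maxFrameRate>                              <!-- line 835: 25 fps -->
    <keyFrameInterval>1000</keyFrameInterval>                      <!-- line 837: 10 -->
    <rotationDegree>0</rotationDegree>                             <!-- line 839 -->
    <mirrorEnabled>false</mirrorEnabled>                           <!-- line 841 -->
    <snapShotImageType>JPEG</snapShotImageType>                    <!-- line 843 -->
  </video>                                                         <!-- line 845 -->
</StreamingChannel>                                                <!-- line 847 -->
```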

Thus, using the techniques described above, surveillance cameras and systems can be adapted for use in environments with power and network disruptions.

While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present invention should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

CLAIMS
We claim:

1. A surveillance system comprising:
a plurality of surveillance cameras connected to a VMS (Video Management System) through one or more networks;
one or more local terminals connected to said networks; and
one or more remote terminals connected to said networks through an Internet;
each of said plurality of surveillance cameras designed for operating in environments with power and network disruptions,
wherein said surveillance cameras, in the case of a power disruption, reduce the power consumption and, in the case of a network disruption, reduce the storage requirements and the network bandwidth requirements.

2. The surveillance system of claim 1, wherein said reduction in power consumption is achieved by stopping one or more of software and hardware components.

3. The surveillance system of claim 2, wherein the power consumption is reduced to 5.4% to 6.9% of the normal mode.

4. The surveillance system of claim 1, wherein said reduction in storage requirements results from adopting a dynamic frame rate.

5. The surveillance system of claim 4, wherein the storage requirements are reduced by 12% to 77%.

6. The surveillance system of claim 1, wherein said reduction in bandwidth requirements is achieved by reducing the data rate of the content streams through DCR (Dynamic Codec Reconfiguration), whereby the codecs used in the surveillance camera are reconfigured without interrupting the encoding operation.

7. The surveillance system of claim 6, wherein the storage of the content in the surveillance camera continues at the set resolution and is transferred to said VMS over a lossless channel when the network resources are restored.

8. A surveillance camera designed for operating in environments with power and network disruptions, comprising:
a lens enclosure with a lens assembly for focusing a scene;
a sensor array for converting said scene to digital values;
a processor for receiving and processing said digital values and analyzing power and network statuses;
a RAM for storing program and/or data;
a storage for storing content and instructions;
a network interface for providing connectivity to a network;
other interfaces for interfacing other sensors; and
a power supply coupled with power control and battery for providing power to said surveillance camera,
wherein said surveillance camera, in the case of a power disruption, reduces the power consumption and, in the case of a network disruption, reduces the storage requirements and the network bandwidth requirements.

9. The surveillance camera of claim 8, wherein said reduction in power consumption is achieved by stopping one or more of software and hardware components.

10. The surveillance camera of claim 9, wherein the power consumption is reduced to 5.4% to 6.9% of the normal mode.

11. The surveillance camera of claim 8, wherein said reduction in storage requirements results from adopting a dynamic frame rate.

12. The surveillance camera of claim 11, wherein the storage requirements are reduced by 12% to 77%.

13. The surveillance camera of claim 8, wherein said reduction in bandwidth requirements is achieved by reducing the data rate of the content streams through DCR (Dynamic Codec Reconfiguration), whereby the codecs used in the surveillance camera are reconfigured without interrupting the encoding operation.

14. The surveillance camera of claim 13, wherein the storage of the content in the surveillance camera continues at the set resolution and is transferred to a VMS over a lossless channel when the network resources are restored.

15. A surveillance camera substantially as herein described and illustrated in the figures of the accompanying drawings.

16. A method of prolonging the power availability from a backup source in a surveillance camera by reducing the power consumption through identifying and stopping one or more of software and hardware components, said method comprising the steps of:
checking whether the remaining battery capacity is less than a first predetermined level and a second predetermined level;
changing the surveillance camera to a low power mode when the remaining battery capacity is less than the first predetermined level; and
changing the surveillance camera to an ultra low power mode when the remaining battery capacity is less than the second predetermined level.

17. The method of claim 16, wherein one or more of web server, streaming server, networking subsystem, video out signal and VPBE (Video Processing Back End) are stopped and only one encoder and encoding pipeline continue to operate in the low power mode.

18. The method of claim 17  wherein the video resolution and the frame rate and the system clock are reduced in the ultra low power mode.

19. A method of increasing the duration of storage in a surveillance camera by adopting a dynamic frame rate, said method comprising the steps of:
checking whether the scene has changed in a predetermined number of successive frames; and
storing only one frame in a predetermined time interval when the scene has not changed in said predetermined number of successive frames.

20. A method of continuing to make available the content from a surveillance camera during network outages, said method comprising the steps of:
computing periodically an available network bandwidth;
determining whether said available network bandwidth is sufficient to support the current one or more content streams from said surveillance camera;
on determining that said available network bandwidth is not sufficient, reducing the rate of the content stream to match said available network bandwidth; and
continuing to store the content in the surveillance camera without any change in the rate of the content.

21. The method of claim 20, wherein one or more of resolution and frame rate of the content are reduced to cause said reducing the rate of the content stream.

22. The method of claim 21, wherein said one or more of resolution and frame rate are changed through DCR (Dynamic Codec Reconfiguration).

Documents

Application Documents

# Name Date
1 Priority Document.pdf 2011-12-20
2 Power of Authority.pdf 2011-12-20
3 Form-5.pdf 2011-12-20
4 Form-3.pdf 2011-12-20
6 Drawings.pdf 2011-12-20
7 SSI CERTIFICATE.pdf 2014-05-19
8 4402-CHE-2011-FER.pdf 2018-11-27
9 4402-CHE-2011-RELEVANT DOCUMENTS [27-05-2019(online)].pdf 2019-05-27
10 4402-CHE-2011-PETITION UNDER RULE 137 [27-05-2019(online)].pdf 2019-05-27
11 4402-CHE-2011-OTHERS [27-05-2019(online)].pdf 2019-05-27
12 4402-CHE-2011-FORM 13 [27-05-2019(online)].pdf 2019-05-27
13 4402-CHE-2011-FER_SER_REPLY [27-05-2019(online)].pdf 2019-05-27
14 4402-CHE-2011-DRAWING [27-05-2019(online)].pdf 2019-05-27
15 4402-CHE-2011-COMPLETE SPECIFICATION [27-05-2019(online)].pdf 2019-05-27
16 4402-CHE-2011-CLAIMS [27-05-2019(online)].pdf 2019-05-27
17 4402-CHE-2011-ABSTRACT [27-05-2019(online)].pdf 2019-05-27
18 4402-CHE-2011-Proof of Right (MANDATORY) [17-06-2019(online)].pdf 2019-06-17
19 Correspondence by Agent_Notarized Copy_21-06-2019.pdf 2019-06-21
20 4402-CHE-2011-Correspondence to notify the Controller [19-03-2021(online)].pdf 2021-03-19
21 4402-CHE-2011-FORM-26 [24-03-2021(online)].pdf 2021-03-24
22 4402-CHE-2011-Written submissions and relevant documents [08-04-2021(online)].pdf 2021-04-08
23 4402-CHE-2011-PatentCertificate30-06-2021.pdf 2021-06-30
24 4402-CHE-2011-IntimationOfGrant30-06-2021.pdf 2021-06-30
25 4402-CHE-2011-US(14)-HearingNotice-(HearingDate-25-03-2021).pdf 2021-10-03
26 4402-CHE-2011-RELEVANT DOCUMENTS [16-08-2023(online)].pdf 2023-08-16

Search Strategy

1 search_05-06-2018.pdf

ERegister / Renewals

3rd: 20 Sep 2021 (From 15/12/2013 to 15/12/2014)
4th: 20 Sep 2021 (From 15/12/2014 to 15/12/2015)
5th: 20 Sep 2021 (From 15/12/2015 to 15/12/2016)
6th: 20 Sep 2021 (From 15/12/2016 to 15/12/2017)
7th: 20 Sep 2021 (From 15/12/2017 to 15/12/2018)
8th: 20 Sep 2021 (From 15/12/2018 to 15/12/2019)
9th: 20 Sep 2021 (From 15/12/2019 to 15/12/2020)
10th: 20 Sep 2021 (From 15/12/2020 to 15/12/2021)
11th: 10 Dec 2021 (From 15/12/2021 to 15/12/2022)
12th: 10 Dec 2021 (From 15/12/2022 to 15/12/2023)
13th: 10 Dec 2021 (From 15/12/2023 to 15/12/2024)
14th: 10 Dec 2021 (From 15/12/2024 to 15/12/2025)
15th: 10 Dec 2021 (From 15/12/2025 to 15/12/2026)
16th: 10 Dec 2021 (From 15/12/2026 to 15/12/2027)
17th: 10 Dec 2021 (From 15/12/2027 to 15/12/2028)
18th: 10 Dec 2021 (From 15/12/2028 to 15/12/2029)
19th: 10 Dec 2021 (From 15/12/2029 to 15/12/2030)
20th: 10 Dec 2021 (From 15/12/2030 to 15/12/2031)