
A Maritime Monitoring System Using Interchangeable Video Processing Modules And A Method Thereof

Abstract: A system and a method for maritime monitoring using video processing methods, more specifically involving video processing modules which are interchangeable within the maritime monitoring system, are disclosed. The system performs maritime surveillance using video processing modules such as the video stabilization module (22), the video enhancement module (23), the single target tracking module (24) and the panorama generation module (25), with day vision as well as night vision sensors. A method is implemented to monitor the maritime environment using an image sensor mounted on a pedestal at remote stations, and the processed video information is transmitted to the control center (4). An architecture to realize the maritime monitoring system is disclosed, with the video processing methods initiated by the operator in an interchangeable manner.


Patent Information

Application #: 202241019718
Filing Date: 31 March 2022
Publication Number: 40/2023
Publication Type: INA
Invention Field: COMMUNICATION

Applicants

BHARAT ELECTRONICS LIMITED
Outer Ring Road, Nagavara, Bangalore – 560045, Karnataka, India

Inventors

1. Jyotheswar Jettipalli
Member Senior Research staff, AI Division, Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013, Karnataka, India
2. Kavita Shrivastava
Member Research staff, AI Division, Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013, Karnataka, India
3. Divya Nagaraja Reddy
Member Research staff, SSP Division, Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013, Karnataka, India
4. Chaveli Ramesh
Principal Scientist, SP Division, Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013, Karnataka, India

Specification

TECHNICAL FIELD
The present invention relates generally to a system and a method for maritime monitoring using video processing methods, more specifically involving video processing modules which are interchangeable within a maritime monitoring system.
BACKGROUND

A few conventional technologies disclose different methods and systems for coastal surveillance, including threat detection, warning of detected threats to maritime commercial assets, and coastline security.
US8581688B2, titled “Coastal Monitoring Techniques”, discloses a method and system for monitoring coastlines that includes a sensor arrangement along the coastline, data collection for monitoring via the sensors, analysing the collected data to detect the presence of any threat near the coastline, and transmission of the analysed data or signal to a command and monitoring centre with sensor identification and location. The sensors may be arranged in any order and spaced apart, but are programmed to power on upon detection of a specified condition in order to collect data. The location of all sensors is provided by the monitoring centre, or each sensor may be arranged to determine its own location so that it provides its location when transmitting data or the signal to the monitoring facility.
US10121078B2, titled “Methods and System for detection of foreign objects in maritime environments”, discloses techniques for detecting foreign objects in a region of interest in maritime environments. Image data is collected as a sequence of images of the region of interest. This image data is processed, and the user’s point of interest is detected by means of target detection in successive frames. The detection is achieved by spatio-temporal correlation between the points in the grouped data and determination of the corresponding track function of the data. This step enables the detection of a foreign object present in the maritime environment under observation.
US10885381B2, titled “Ship detection method and system based on multidimensional scene features”, discloses a method and system for ship detection based on multidimensional scene features. The method includes: constructing a ship image sample database, and extracting all the edges of each frame of the image to act as a fourth dimension of the image; extracting a coastline to make a sea surface area the ship area; constructing a Faster-RCNN-like convolutional network to act as a deep learning network, and inputting sample data into the deep learning network; constructing an RPN network, using a sliding window to generate region proposal boxes of different sizes in the ship area, combining the region proposal boxes with the deep learning network, and training a model according to the actual position of a ship; and performing ship detection on a part of the detected image between the coastline on the basis of the trained model.
WO 2009/066988A2, titled “Device and method for surveillance system”, discloses a device and method for a surveillance system in which a wide-view camera serves as a guiding system to determine the area of interest, i.e., the area where the presence or absence of activities within the monitored region is assessed, together with a high-resolution camera controlled in pan, tilt and zoom which provides a zoomed image of the area of interest.
US 2014/0314270 A1, titled “Detection of floating objects in maritime video using a mobile camera”, discloses a method and system for detecting floating objects in maritime video. The horizon is detected within the video, modelling of the sky and water is performed on the video, and objects within the video that are neither water nor sky are detected.
There is still a need for a technical solution that solves the above-defined problems and provides a system and a method for maritime monitoring using video processing methods, more specifically one involving video processing modules which are interchangeable within the maritime monitoring system.

SUMMARY
This summary is provided to introduce concepts related generally to maritime monitoring using video processing methods. The invention more specifically involves video processing modules which are interchangeable within the maritime monitoring system.
In an embodiment of the present invention, a maritime monitoring system using interchangeable video processing modules is disclosed. The system includes a plurality of image sensors configured to capture a continuous video sequence. The plurality of image sensors are mounted on a pedestal and located at a remote station. Further, the system includes a video stabilization module which stabilizes the captured video sequence; the video sequence includes a global motion of a plurality of consecutive images which is estimated in terms of motion vectors. Further, the video stabilization module corrects the trajectory of the motion vectors by applying predictive filtering. The system further includes a video enhancement module to remove noise and increase the visibility of the stabilized input video sequence by applying haze removal, contrast enhancement and gamma correction sequentially. Further, the system includes a single target tracking module to track a plurality of naval targets continuously by directing the pedestal, which in turn finds the target direction and track positions of the target. The system includes a panorama generation module to generate a panorama in a sector view mode or a 360 degree view mode by stitching the consecutive images of the enhanced input video sequence using image blending.
In an embodiment of the present invention, a maritime monitoring method using interchangeable video processing modules is disclosed. The method includes capturing a continuous video sequence by a plurality of image sensors. The plurality of image sensors are mounted on a pedestal and located at a remote station. Further, the method includes stabilizing the captured video sequence by a video stabilization module; the video sequence includes a global motion of a plurality of consecutive images which is estimated in terms of motion vectors. Further, the method includes correcting the trajectory of the motion vectors by the video stabilization module by applying predictive filtering. The method further includes removing noise and increasing the visibility of the stabilized input video sequence by a video enhancement module by applying haze removal, contrast enhancement and gamma correction sequentially. Further, the method includes tracking a plurality of naval targets continuously by a single target tracking module by directing the pedestal, which in turn finds the target direction and track positions. The method includes generating a panorama in a sector view mode or a 360 degree view mode by a panorama generation module by stitching the consecutive images of the enhanced input video sequence using image blending.

BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
The detailed description is described with reference to the accompanying figures.
Figure 1 illustrates a pictorial view of maritime monitoring system, in accordance with an exemplary embodiment of the present invention.
Figure 2 represents a pictorial view of the sub-system modules along with their communication links and its network architecture, in accordance with an exemplary embodiment of the present invention.
Figure 3 illustrates a block diagram of video-processing modules present in maritime monitoring system, in accordance with an exemplary embodiment of the present invention.
Figure 4 illustrates a flowchart of video stabilization process, in accordance with an exemplary embodiment of the present invention.
Figure 5 illustrates a flowchart of video enhancement process, in accordance with an exemplary embodiment of the present invention.
Figure 6 illustrates a flowchart of single target tracking process, in accordance with an exemplary embodiment of the present invention.
Figure 7 illustrates a flowchart of panorama generation module, in accordance with an exemplary embodiment of the present invention.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

DETAILED DESCRIPTION
The various embodiments of the present disclosure describe a system and a method for maritime monitoring using video processing methods.
In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
However, the apparatuses and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are meant to avoid obscuring of the present invention.
Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
The appearances of the phrase “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
In an embodiment of the present disclosure, a system and a method for maritime monitoring using video processing methods are disclosed. The maritime monitoring system includes a video sensor (CCD/TI) mounted on a pedestal, which captures the video. The captured video is processed in a computing device called the central processing unit (CPU). The video sensor with the pedestal and the computing device are located at a light house/remote station (RS). The commands for selection of the video processing elements are issued from the sensor management software application residing at the Control Center (CC). Based on the commanded mode, the video processing element performs the operation and transmits the processed video data to the CC. The processed/improved video is updated on the screen with enhancements and target information for the operator at the Control Center (CC).
In another embodiment of the present disclosure, a system and a method for maritime surveillance using video processing based techniques are disclosed. The method and system include video processing elements such as video-based stabilization, video enhancement by improving contrast and removing haze using atmospheric corrections in an image scene/sequence, and single target tracking for CCD as well as thermal imager (TI) videos. Each module works independently and can be interchanged by module selection by the operator at the CC. Apart from these methods, a panorama generation method is disclosed, wherein the panorama generation method aligns the images using an image blending approach in sector view mode/360 degree view mode. A video encoder compresses the processed video data and transmits it through the real-time streaming protocol to the Control Center (CC) over ethernet for monitoring.
In another embodiment of the present disclosure, the video stabilization process is based on optical flow and predictive filtering, developed for the naval scenario. The global motion of consecutive frames is estimated in terms of motion vectors, and a novel filter-based correction method is implemented to correct the trajectory of the motion vectors generated by the optical flow method. An image warping or image transformation process provides a stable view of the scene. It compensates for camera platform vibration due to wind and camera pedestal movement. This ensures the robust and real time performance of the video stabilization process.
In another embodiment of the present disclosure, the video enhancement consists of three modules: haze/fog removal, contrast improvement using adaptive histogram equalization, and gamma correction. These modules are applied sequentially on an input video sequence. A modified dark channel prior approach is used to remove the haze/fog in a single image, and a contrast improvement approach is then applied to improve the contrast of the image. Finally, a gamma correction approach is applied to enhance the colors in the image. It is therefore suitable for improving the local contrast and enhancing the definition of edges in each region of an image. These operations are controlled by the operator with the help of a threshold control scroll bar from the Control Center (CC); accordingly, the improved/enhanced video is updated on the screen.
In another embodiment of the present disclosure, single target tracking is implemented to track naval targets such as ships, vessels and boats. The tracking module provides the target position or the pixel error to the sensor management software to move the pedestal accordingly. An optical flow-based object tracking algorithm and a prediction filter are used to find the target direction and track positions. Strong corners of the target are determined and used as flow points to implement target tracking on a central processing unit (CPU). The directed pan/tilt motion and the pixel-to-pixel relationship are used to estimate whether the optical flow points fit background motion, dynamic motion or noise. To smooth variation, a two-dimensional position and velocity vector is used with a prediction filter to predict the next required position of the camera, so that the object stays in the centre of the video. The acquired images are processed in parallel, resulting in high frame rate processing of images. The results provide real-time tracking of targets in a coast-line monitoring system using a pan/tilt unit for generic moving targets, where no training is required and the camera motion is observed by a high accuracy approach as opposed to image correlation.
In another embodiment of the present disclosure, the panorama generation module forms a panoramic view of captured images in a sector view mode or 360 degree view mode. The camera mechanism is mounted on a fixed pedestal, which rotates in clockwise and anti-clockwise directions with a fixed rotating speed (rpm) and fixed zoom level. The rotating speed and zoom level can be changed by the operator controls. Once the speed and zoom are set, panorama generation is initiated as per the mode selection (sector mode/360 degree view mode). This module generates a pixel shift parameter for each scenario of pedestal speed and camera zoom. With the generated pixel shift, an image blending approach is used to stitch the captured images to form a panorama in sector view mode/360 degree view mode.
Figure 1 illustrates a pictorial view of the maritime monitoring system, according to an exemplary implementation of the present disclosure. The system comprises imaging sensors (2), such as a CCD camera for day vision and a Thermal Imager (TI) camera for night vision, mounted on a pedestal which is installed on a light house. The camera feeds and pedestal controls are connected to a central processing unit present at a remote station (1). The processed video information is transmitted over an Ethernet (3) to the sensor management software application residing at a control center (CC) (4). The system further comprises a coastal land (5) and naval targets (6) such as ships, boats and vessels in a water body (7). One of the key features of this system is that its process is feasible and its software is reliable and robust.
Figure 2 represents a pictorial view of the sub-system modules along with their communication links and network architecture, according to an exemplary implementation of the present disclosure. In this figure, the imaging sensors, such as the TI and CCD sensors, are mounted on a pan and tilt pedestal (11). The data captured by the imaging sensors is transmitted to a central processing unit (13) over a coaxial link (12). The image sensors with the pedestal (11) and the central processing unit (13) are installed at the remote station (1). The central processing unit (13) processes the captured video data and transmits it through the ethernet communication link (14) to the sensor management software application running on a work station (15) residing at the control center (CC) (4). The sensor management software application sends the commands for pedestal movement with respect to pan and tilt. The pedestal controls are also communicated over the ethernet communication link (14), as shown in Figure 2.
Figure 3 illustrates a block diagram of the video-processing modules present in the maritime monitoring system, according to an exemplary implementation of the present disclosure. The maritime monitoring system comprises a capturing element mounted on a pedestal (21) to acquire the video sequence. The captured video sequence is fed to the video processing modules, namely video stabilization (22), video enhancement (23), single target tracking (24) and panorama generation (25). These modules are initiated per the operator’s selection using the sensor management software application from the control center (CC) (4). The video processing modules can be performed in an interchangeable manner as per commands received from the sensor management software application over the ethernet communication link. The system has one exception: only one module among video stabilization and single target tracking may be performed on the input video sequence at the same time. This mode of operation provides more robustness and stability to the system while operating the video processing modules in an interchangeable manner.
In an aspect of the present embodiment, the image capturing element (21) can be configured to capture the frames from the Thermal Imager or CCD sensor based on the input from the sensor management software application. The selected sensor video signal data is transmitted to the sensor management software using the Ethernet network link. Based on the inputs received from the sensor management software, the selected video processing modules, such as the video stabilization module (22), the video enhancement module (23), the single target tracking module (24) and the panorama generation module (25), are performed on the input video sequence captured by the sensor at the remote station (RS). These video processing modules perform their operations sequentially as selected by the operator. While generating a panorama, the video enhancement module can be enabled by the operator to view a better quality video sequence on the sensor management software application terminal.
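The interchangeable selection described above can be pictured as a small dispatcher that applies the operator-selected modules to each frame in order. The following Python sketch is purely illustrative: the VideoPipeline class, the module names and the command handling are assumptions rather than part of the disclosure; only the mutual exclusion of video stabilization and single target tracking reflects the constraint stated above.

```python
# Illustrative sketch of interchangeable module selection; the class and
# module names are assumptions, not part of the disclosed system.

class VideoPipeline:
    """Applies operator-selected processing modules to each frame in order."""

    def __init__(self, modules):
        self.available = modules   # name -> callable(frame) -> frame
        self.active = []           # ordered names selected by the operator

    def select(self, names):
        # Per the disclosure, stabilization and single target tracking must
        # not run on the input video sequence at the same time.
        if "stabilization" in names and "tracking" in names:
            raise ValueError("stabilization and tracking are mutually exclusive")
        self.active = [n for n in names if n in self.available]

    def process(self, frame):
        for name in self.active:
            frame = self.available[name](frame)
        return frame
```

A command received from the sensor management software would then simply call select() with a new module list, changing the processing chain between frames without restarting the system.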
Figure 4 illustrates a flowchart of the video stabilization process, according to an exemplary implementation of the present disclosure. The video stabilization module consists of corner detection (32), optical flow corner tracking (33), hybrid motion estimation (34), parameter trajectory (35) and image warping (36) functions to perform the video stabilization process. The proposed system is capable of processing the modules interchangeably; hence, the input to the video stabilization module is termed the captured/processed input video sequence (31). The corner detection function (32) finds the features/corners in the present image, and these feature/corner points are used to track the features in the consecutive/next frame. The optical flow corner tracker (33) computes the apparent motion or optical flow between the consecutive frames using the detected feature/corner points in the previous frame and the present frame. The hybrid motion estimation function (34) computes the motion flow transformation parameters (dx, dy and da (angle)) between the consecutive frames based on the good-features-to-track method. The parameter trajectory function (35) accumulates the transformation parameters dx, dy and da to get the trajectory for x, y and angle at each frame. These trajectories are smoothed using a predictive filtering approach to find the global transformation parameters. The image warping function (36) is applied on the current frame using the global transformation parameters. This process gives a robust and stable video for further video processing modules such as the video enhancement module (23), the single target tracking module (24) and the panorama generation module (25), or for transmitting the stabilized video to the control center (CC) (4).
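A minimal Python/OpenCV sketch of this stabilization pipeline is given below. A moving-average smoother stands in for the predictive filter of block (35), whose exact form the disclosure does not specify, and all numeric parameters are illustrative assumptions.

```python
# Sketch of the Figure 4 pipeline: corners (32), optical flow (33), motion
# estimation (34), trajectory smoothing (35) and image warping (36).
import cv2
import numpy as np

def stabilize(frames, radius=15):
    transforms = []                     # per-frame (dx, dy, da)
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Corner detection (32) and optical flow corner tracking (33)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev, good_next = pts[status == 1], nxt[status == 1]
        # Hybrid motion estimation (34): rigid transform -> dx, dy, da
        m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev_gray = gray

    # Parameter trajectory (35): accumulate, then smooth (a moving average
    # here; the disclosure uses a predictive filter)
    trajectory = np.cumsum(transforms, axis=0)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, 'same')
                                for i in range(3)])
    corrections = smoothed - trajectory

    out = [frames[0]]
    h, w = frames[0].shape[:2]
    for (dx, dy, da), (cx, cy, ca), frame in zip(transforms, corrections,
                                                 frames[1:]):
        dx, dy, da = dx + cx, dy + cy, da + ca
        # Image warping (36) with the corrected global transform
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]])
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```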
Figure 5 illustrates a flowchart of the video enhancement process, according to an exemplary implementation of the present disclosure. The video enhancement module comprises three functionalities: haze removal, contrast enhancement and gamma correction. The proposed system is capable of processing the modules interchangeably; hence, the input to the video enhancement module is termed the captured/processed input video sequence (41). The haze removal process computes the dark channel (42) of an input image by applying a minimum filter with a block size of 15 x 15. This provides the dark channel of the input image. The atmospheric light is estimated (43) by averaging the pixels in the original image that correspond to the top lightest 0.1% in the dark channel. The transmission maps are estimated (44) by assuming that the transmission in a local patch of the image is constant. This process uses a parameter controlling how much haze to keep in the image, so as not to remove the human ability to perceive depth. From the initial transmission calculated in the previous block (44), a refined transmission map is calculated using image matting (45) by looping over all possible 3x3 windows in the image. The scene radiance (J) (46) is recovered by substituting the estimated transmission map (t) (44) and the estimated atmospheric light (A) (43) into the equation given below. In order to avoid a noisy recovered scene radiance, a lower bound t0 on t is set. The final scene radiance (46) is recovered using the following equation:
J(x) = (I(x) - A) / max(t(x), t0) + A
The above-mentioned process performs the haze removal operation on an input video sequence. Further, to enhance the quality of the input video sequence, two more functionalities, contrast enhancement (47) and gamma correction (48), are applied to the haze-removed video sequence.
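A condensed Python sketch of the haze removal chain (42)-(46) follows, under stated assumptions: the soft-matting refinement (45) is omitted for brevity, OpenCV's erode supplies the minimum filter, and the haze-retention weight omega and the lower bound t0 are illustrative values.

```python
# Dark-channel-prior dehazing sketch covering blocks (42), (43), (44), (46).
import cv2
import numpy as np

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    img = img.astype(np.float64) / 255.0
    # Dark channel (42): per-pixel channel minimum, then a 15 x 15 min filter
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light (43): average of the pixels at the top 0.1 percent
    # of the dark channel
    n = max(1, int(dark.size * 0.001))
    rows, cols = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[rows, cols].mean(axis=0)
    # Transmission map (44); omega keeps a little haze to preserve depth cues
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    # Scene radiance (46): J(x) = (I(x) - A) / max(t(x), t0) + A
    t = np.maximum(t, t0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```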
In an aspect of the present embodiment, the processing of the input video sequence for haze removal is shown in Figure 5. The processed/haze-removed video sequence is fed to the contrast improvement (47) functionality, which applies histogram equalization to a contextual region. In this process, the original histogram is clipped and the clipped pixels are redistributed to each gray level. The new histogram differs from the ordinary histogram because the intensity of each pixel is limited to a user-selectable maximum, so this approach can limit the enhancement of noise. Further, the contrast-enhanced video sequence is processed for color enhancement using the gamma correction method (48). This function works on the principle that the output is proportional to the input raised to the power of gamma (γ). The gamma correction is applied to an input image using the equation given below:
I' = 255 × (I/255)^γ
where I is the input image, I' is the gamma-corrected image, and gamma, represented by the Greek letter γ, describes the relationship between an input and the resulting output. The input is the RGB intensity values of an image. Gamma can be any value between 0 and infinity. If gamma is 1 (the default), the mapping is linear. If gamma is less than 1, the mapping is weighted toward higher (brighter) output values. If gamma is greater than 1, the mapping is weighted toward lower (darker) output values. Finally, the video enhancement module shown in Figure 5 ensures that the output video sequence (49) is enhanced in all aspects: haze, contrast and color information.
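The contrast enhancement (47) and gamma correction (48) stages can be sketched as follows. OpenCV's CLAHE is used here as a stand-in for the clipped, contextual-region histogram equalization described above, with the clip limit playing the role of the operator-controlled threshold; the default gamma below is an illustrative value.

```python
# Sketch of contrast enhancement (47) and gamma correction (48).
import cv2
import numpy as np

def enhance(img_bgr, clip_limit=2.0, gamma=0.8):
    # Contrast enhancement (47): equalize luminance per contextual region,
    # clipping the histogram so noise is not over-amplified
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    l = clahe.apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Gamma correction (48): I' = 255 * (I / 255) ** gamma, via a lookup table
    table = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return cv2.LUT(img, table)
```

With gamma below 1, as here, the mapping brightens the image, consistent with the weighting toward higher output values described above.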
Figure 6 illustrates a flowchart of the single target tracking process, according to an exemplary implementation of the present disclosure. The output of this module controls the pedestal (11) on which the imaging sensors are installed and keeps the selected/moving target in the center of the frame. The single target tracking module (24) is enabled by the operator through the sensor management application from the control center (CC) (4); this initiates (51) the tracking module in the central processing unit located at the remote station (RS) (13), which captures the consecutive frames (53) from the input video sequence (52). Further, it detects good feature points such as corners, edges and texture (54) in the initially captured/previous image, and these feature points are used to find the optical flow (55) in the current frame. Subtracting the obtained feature point positions from the previous image feature locations gives the background pixel movement (56). The moving target is found in the current image (57) by eliminating outliers through thresholding of the feature points' displacement. The centroid of the moving target is then obtained (58), which gives the moving target positions. The centroid location of the moving target is predicted using a predictive filtering approach (59). This process ensures the accuracy of the tracking approach and directs the pedestal (11). The dead zone check function (60) keeps the centroid of the moving target within a rectangular dead zone; otherwise, the reference position of the moving target is adjusted based upon the dead zone. The accurate moving target position is obtained by passing the input video sequence through the functional blocks mentioned above. This process further directs the pedestal (61) to move the camera in the moving target's direction. The single target tracking module works iteratively to track the moving target automatically without manual intervention. Finally, the output video sequence (62) with the bounding box is transmitted to the control center (4) to monitor the moving target continuously.
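A condensed Python sketch of one iteration of this tracking loop is given below. A constant-velocity Kalman filter stands in for the predictive filter (59), and the motion threshold, dead-zone size and move_pedestal command are illustrative assumptions rather than disclosed details.

```python
# Sketch of one tracking iteration covering blocks (54)-(61) of Figure 6.
import cv2
import numpy as np

def make_kalman():
    # Constant-velocity filter over (x, y, vx, vy); stand-in for block (59)
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    return kf

def move_pedestal(pan_err, tilt_err):
    # Hypothetical pan/tilt command; the real system passes pixel errors to
    # the sensor management software, which moves the pedestal (61)
    print(f"pedestal command: pan {pan_err:+.1f}px, tilt {tilt_err:+.1f}px")

def track_step(prev_gray, gray, kf, motion_thresh=2.0, dead_zone=50):
    h, w = gray.shape
    # Good features such as corners in the previous image (54)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    # Optical flow of those features into the current image (55)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    prev_pts, next_pts = pts[status == 1], nxt[status == 1]
    disp = next_pts - prev_pts
    # Background pixel movement (56): the dominant (median) displacement
    background = np.median(disp, axis=0)
    # Moving target (57): features whose residual motion exceeds a threshold
    residual = np.linalg.norm(disp - background, axis=1)
    target = next_pts[residual > motion_thresh]
    if len(target) == 0:
        return None
    # Centroid of the moving target (58) and its predicted location (59)
    cx, cy = target.mean(axis=0)
    kf.correct(np.array([[cx], [cy]], np.float32))
    px, py = kf.predict()[:2].ravel()
    # Dead zone check (60): re-aim only when the target leaves the center box
    if abs(px - w / 2) > dead_zone or abs(py - h / 2) > dead_zone:
        move_pedestal(px - w / 2, py - h / 2)
    return px, py
```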
In an aspect of the present embodiment, the disclosed single target tracking module processes the input video sequence and directs the imaging sensor mounted on the pedestal (11) to track the selected/moving target efficiently without manual intervention. The operator at the control center (15) initiates the tracking module by selecting the preferred target; this enables the tracking functionality at the remote station (13), which transmits the processed video sequence to the work station located at the control center (15) through the ethernet communication link (14). The single target tracking functionality works in a closed loop manner to track the selected target continuously by directing the pedestal. The panorama generation module (25), as referred to in another embodiment, provides the sector/360 degree view around the camera.
Figure 7 illustrates a flowchart of the panorama generation module, according to an exemplary implementation of the present disclosure. It maps camera movement, in the form of sector coverage, to pixel coverage using pixel shifts in a look-up table (LUT). This look-up table is used during the stitching process to avoid overlapped areas. Gradient domain image stitching is employed for panorama generation. The panorama is formed with respect to the first frame in the clockwise or anti-clockwise direction. These panorama formations are available to the operator either in sector mode or 360 degree mode. A seamless panorama is generated using the image blending approach with the look-up-table parameters based on each mode of operation.
In an aspect of the present embodiment, the disclosed panorama generation module is enabled by the operator located at the control center. This initiates the panorama generation function (71) residing at the remote station, and this function requires a viewing mode as input from the operator to create a panorama in sector view or 360 degree view around the camera. Based on the pedestal control parameters (72), the pixel shift is updated from the look-up table (LUT) (74) and the input video sequence (73) is acquired for the stitching process. The consecutive images are stitched together by applying the image blending method (75). The stitching process is repeated iteratively to obtain the panorama for the sector or 360 degree view. At every end position of the pedestal, the generated panorama is updated on the sensor management software application installed at the control center.
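A simplified Python sketch of the LUT-driven stitching step follows. The LUT entry is a hypothetical placeholder for the per-speed/zoom pixel shifts described above, and a linear seam blend stands in for the gradient-domain blending mentioned with Figure 7.

```python
# Sketch of LUT-based panorama stitching (73)-(75) with a linear seam blend.
import numpy as np

PIXEL_SHIFT_LUT = {("rpm_low", "zoom_1x"): 40}   # hypothetical entries (74)

def stitch(panorama, frame, shift, blend=20):
    """Append frame to the growing panorama, overlapping by
    frame_width - shift columns and blending the first `blend` columns."""
    h, w = frame.shape[:2]
    overlap = w - shift
    canvas = np.zeros((h, panorama.shape[1] + shift, 3), frame.dtype)
    canvas[:, :panorama.shape[1]] = panorama
    start = panorama.shape[1] - overlap
    # Linear alpha ramp across the seam (75)
    alpha = np.linspace(0.0, 1.0, blend)[None, :, None]
    old = canvas[:, start:start + blend].astype(np.float64)
    new = frame[:, :blend].astype(np.float64)
    canvas[:, start:start + blend] = ((1 - alpha) * old
                                      + alpha * new).astype(frame.dtype)
    canvas[:, start + blend:] = frame[:, blend:]
    return canvas
```

Iterating stitch() over the captured frames, with shift taken from the LUT for the current pedestal speed and zoom, yields the sector or 360 degree panorama (76).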
In one exemplary implementation, a system for monitoring the maritime environment by applying video processing approaches interchangeably is disclosed. The system comprises an image sensor (CCD & TI) module configured to capture the continuous video sequence, mounted on a pedestal and located at the remote station. A video stabilization module is configured to stabilize the unstable input video sequence, i.e., the global motion of consecutive frames is estimated in terms of motion vectors and a novel filter-based correction method is applied to correct the trajectory of the motion vectors. A video enhancement module is configured to remove noise and improve the visibility of the input video sequence by applying haze removal, contrast enhancement and gamma correction sequentially. A single target tracking module is configured to track naval targets such as ships, boats and vessels continuously by directing the pedestal, which finds the target direction and track positions. A panorama generation module is configured to generate a panorama in a sector view mode or 360 degree view mode by stitching the consecutive frames using an image blending approach.
In another exemplary implementation, the present disclosure provides a system for maritime monitoring having an image sensor mounted on a pedestal, a central processing unit positioned at the remote station, a work station located at the control center and a communication link between the sub-systems. This system comprises an image sensor with a pedestal installed on a light house to capture the video sequence of the maritime environment. The captured video sequence is transmitted over a coaxial link to the central processing unit located at the remote station. Video processing modules such as video stabilization, video enhancement, single target tracking and panorama generation are applied to the captured input video sequence, and the result is transmitted to the control center over the Ethernet link. A work station located at the control center displays the processed video sequence as an output to the operator on the sensor management software, and the video processing modules of the maritime monitoring system are initiated and controlled by the operator from the control center.
In another exemplary implementation, the video processing modules such as video stabilization, video enhancement, single target tracking and panorama generation are configured to perform in an interchangeable manner as per the operator’s preference; this ensures flexibility in the selection of the video processing module to apply on the input video sequence.
In another exemplary implementation, the video processing modules such as the video stabilization module, video enhancement, single target tracking and panorama generation are configured to perform their operations on an input video sequence independently. The modules do not depend on each other to perform their functions on the input video sequence.
In another exemplary implementation, a method for a maritime monitoring system using interchangeable video processing modules comprises the steps of capturing an input video sequence of the maritime environment using an imaging sensor mounted on a pedestal located at the remote station. Video processing methods such as video stabilization, video enhancement, single target tracking and panorama generation are applied to the captured input video sequence based on the operator’s selection from the control center. The selected video processing methods are performed on the input video sequence sequentially and interchangeably on the central processing unit located at the remote station based on the operator’s choice, and the processed output video sequence is transmitted through the Ethernet communication link to the work station located at the control center.
In another exemplary implementation, an improved method for maritime surveillance comprises steps to stabilize the input video sequence, which further comprise the steps of detecting the corner points in an input video sequence; tracking the detected corner points using optical flow between the consecutive frames; estimating the motion parameters in the x direction, the y direction and with respect to the angle; finding the trajectory of the parameters and smoothing out the trajectory using a predictive filtering approach; applying the transformation using the image warping method on the current frame; and repeating the process for the entire input video sequence and displaying the output video sequence.
In another exemplary implementation, an improved method for maritime surveillance comprises steps to enhance the input video sequence, which further comprise the steps of computing the dark channel prior with a block size of 15 x 15 on an input image; estimating the atmospheric light in the input image corresponding to the top lightest 0.1% in the dark channel; estimating the transmission maps by assuming the transmission in a local patch is constant; refining the transmission maps using an image soft matting approach with a 3 x 3 block size; recovering the scene radiance while eliminating the noise; applying the adaptive histogram equalization method by limiting the contrast on each pixel of the input image with respect to the center of the contextual region; and enhancing the color information by applying the gamma correction method on the input video sequence.
In another exemplary implementation, an improved method for maritime surveillance comprises steps to track the naval targets in an input video sequence, which further comprise the steps of initializing the parameters of the tracking module and capturing the input video sequence; capturing the current image and storing the previous frame; detecting the good features such as corners, edges and texture in the previous image; calculating the optical flow between the previous image and the current image with respect to the good features in the previous image; estimating the background pixel movement using the optical flow, which gives the displacements; finding the moving object in the scene by thresholding the background pixel movement and the good features’ displacements; determining the centroid of the moving object; predicting the centroid of the moving object in the next image using a predictive filtering method; implementing a dead zone check based on an acceptable bound of the object in an image; updating and directing the pan and tilt coordinates of the pedestal to keep the moving object in the center of the frame; and iterating the above steps and displaying the output video sequence.
In another exemplary implementation, an improved method for maritime surveillance comprises steps to generate a panorama in sector view mode or 360 degree view mode using the pedestal, which further comprise the steps of initializing the panorama generation module and the pedestal parameters; capturing the input video sequence from the image sensor mounted on the pedestal; acquiring the current frame and updating the pixel shift from the look-up table based on the pedestal parameters; applying the image blending approach to stitch the consecutive images based on the pixel shift; and repeating the above steps to form the panorama as initiated by the operator.
In another exemplary implementation, the video processing methods such as video stabilization, video enhancement, single target tracking and panorama generation are configured to perform in an interchangeable manner as per the operator’s preference by cascading one another, and these methods carry out their functionalities independently without depending on each other.
The present invention has advantages such as robustness to intensity variations, accuracy, low complexity and speed; its modules are used interchangeably as per operator selection; and it finds application in maritime security monitoring.
The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within its scope.
CLAIMS:
1. A maritime monitoring system using interchangeable video processing modules, the system comprising:
a plurality of image sensors (2) configured to capture a continuous video sequence, wherein the plurality of image sensors (2) are mounted on a pedestal and located at a remote station (1);
a video stabilization module (22) configured to:
stabilize the captured video sequence, wherein the video sequence includes a global motion of a plurality of the consecutive images which is estimated in terms of motion vectors; and
correct the trajectory of the motion vectors by applying predictive filtering;
a video enhancement module (23) configured to remove noise and increase the visibility of the stabilized input video sequence by applying haze removal, contrast enhancement and gamma correction sequentially;
a single target tracking module (24) configured to track a plurality of naval targets continuously by directing the pedestal which in turn finds the target direction and track positions; and
a panorama generation module (25) configured to generate a panorama in a sector view mode or a 360 degree view mode by stitching the consecutive images of the enhanced input video sequence using image blending.

2. The system as claimed in claim 1, wherein the plurality of image sensors (2) include a charge-coupled device (CCD) camera for day vision and a Thermal Imager (TI) camera for night vision.

3. The system as claimed in claim 1, wherein the plurality of naval targets includes ship, boat, vessel and the like.

4. The system as claimed in claim 1, wherein the plurality of image sensors (2) mounted on a pedestal and located at a remote station (1) further includes a central processing unit (13) configured to:
receive the captured video sequence by a communication link (3, 12, 14);
process the captured video sequence; and
transmit the processed video sequence through the communication link (14) to a sensor management display running on a workstation (15) located at a control center (4).

5. The system as claimed in claim 1, wherein the video processing modules such as the video stabilization module (22), the video enhancement module (23), the single target tracking module (24) and the panorama generation module (25) are configured to perform their operation sequentially and interchangeably based on an operator’s selection located at the control center (4).

6. The system as claimed in claim 1, wherein the video processing modules such as the video stabilization module (22), the video enhancement module (23), the single target tracking module (24) and the panorama generation module (25) are configured to perform their operation on the captured video sequence independently.

7. The system as claimed in claim 1, wherein the video stabilization module (22) configured to stabilize the captured video sequence further comprises:
a corner detection module (32) configured to detect a plurality of corner points in the input video sequence (31);
an optical flow corner tracker (33) configured to track an optical flow between the consecutive images by using the detected corner points;
a hybrid motion estimator (34) configured to estimate a plurality of motion flow transformation parameters in x direction, y direction and an angle between the consecutive images;
a parameter trajectory module (35) configured to:
find the trajectory of the estimated motion flow transformation parameters; and
smooth out the trajectory by predictive filtering to provide global transformation parameters;
an image warping module (36) configured to warp the present image of the consecutive images based on the global transformation parameters; and
display the output video sequence (37).

8. The system as claimed in claim 1, wherein the video enhancement module (23) configured to remove the noise and increase the visibility of the input video sequence is further configured to:
compute a dark channel on the input image of the input video sequence (31) by applying a filter with a block size of 15 x 15 (42);
estimate an atmospheric light in the input image corresponding to the top lightest 0.1% in the dark channel (43);
estimate a plurality of transmission maps when the transmission in a local patch of the input image is constant (44);
refine the estimated transmission maps using image soft matting with 3 x 3 block size (45);
recover a scene radiance by eliminating the estimated transmission maps and the estimated atmospheric light to provide the processed input video sequence for haze removal (46);
apply an adaptive histogram equalization by limiting the contrast on each pixel of the input image with respect to a center of a contextual region of the input image of the processed input video sequence (47); and
enhance the color information by applying the gamma correction on the contrast enhanced video sequence (48).

9. The system as claimed in claim 1, wherein the single target tracking module (24) configured to track the plurality of naval targets continuously is further configured to:
initialize the parameters of the single target tracking module (24) (51);
capture the consecutive images from the input video sequence (52);
capture the present image and store the previous image (53);
detect a plurality of good features such as the corners, edges and texture in the stored previous image (54);
calculate the motion flow between the previous image and the current image with respect to the plurality of good features in the previous image (55);
estimate a background pixel movement using the calculated motion flow to provide the displacement of the good features (56);
locate a moving object in a scene of the input video sequence by thresholding the estimated background pixel movement and the displacement of good features (57);
determine a centroid of the moving object (58);
determine a location of the centroid of the moving object in the next image using predictive filtering (59);
implement a deadzone check to maintain the centroid location within the acceptable boundary of the moving object in the next image (60);
update and direct the pedestal parameters to keep the moving object in the center of the image (61); and
display the output video sequence (62).

10. The system as claimed in claim 9, wherein the pedestal parameters include the pan and tilt coordinates.

11. The system as claimed in claim 1, wherein the panorama generation module (25) configured to generate the panorama is further configured to:
initialize the panorama generation module (25) and the pedestal parameters (71, 72);
capture the input video sequence from the plurality of image sensors mounted on the pedestal (73);
update a pixel shift from a look-up table based on the pedestal parameters (74);
acquire the present image from the input video sequence;
apply the image blending to stitch the plurality of the consecutive images based on the pixel shift (75); and
develop the panorama for the sector view mode or the 360 degree view mode by repeating the stitching process iteratively (76).

12. A maritime monitoring method using interchangeable video processing modules, the method comprising:
capturing, by a plurality of image sensors (2), a continuous video sequence, wherein the plurality of image sensors (2) are mounted on a pedestal and located at a remote station (1);
stabilizing, by a video stabilization module (22), the captured video sequence, wherein the video sequence includes a global motion of a plurality of the consecutive images which is estimated in terms of motion vectors;
correcting, by a video stabilization module (22), the trajectory of the motion vectors by applying predictive filtering;
removing noise and increasing the visibility of the stabilized input video sequence by a video enhancement module (23) by applying haze removal, contrast enhancement and gamma correction sequentially;
tracking, by a single target tracking module (24), a plurality of naval targets continuously by directing the pedestal which in turn finds the target direction and track positions; and
generating, by a panorama generation module (25), a panorama in a sector view mode or a 360 degree view mode by stitching the consecutive images of the enhanced input video sequence using image blending.

13. The method as claimed in claim 12, wherein the plurality of image sensors (2) include a charge-coupled device (CCD) camera for day vision and a Thermal Imager (TI) camera for night vision.

14. The method as claimed in claim 12, wherein the plurality of naval targets includes ship, boat, vessel and the like.

15. The method as claimed in claim 12, wherein the plurality of image sensors (2) mounted on a pedestal and located at a remote station (1) further includes a central processing unit (13), said central processing unit (13) comprises:
receiving the captured video sequence by a communication link (3, 12, 14);
processing the captured video sequence; and
transmitting the processed video sequence through the communication link (14) to a sensor management display running on a workstation (15) located at a control center (4).

16. The method as claimed in claim 12, wherein stabilizing the captured video sequence by the video stabilization module (22)further comprises:
detecting, by a corner detection module (32), a plurality of corner points in the input video sequence (31);
tracking, by an optical flow corner tracker (33), an optical flow between the consecutive images by using the detected corner points;
estimating, by a hybrid motion estimator (34), a plurality of motion flow transformation parameters in x direction, y direction and an angle between the consecutive images;
finding, by a parameter trajectory module (35), the trajectory of the estimated motion flow transformation parameters; and
smoothing out, by the parameter trajectory module (35), the trajectory by predictive filtering to provide global transformation parameters;
warping, by an image warping module (36), the present image of the consecutive images based on the global transformation parameters; and
displaying the output video sequence (37).

17. The method as claimed in claim 12, wherein removing noise and increasing the visibility of the stabilized input video sequence by the video enhancement module (23), said module (23) further comprises:
computing a dark channel on the input image of the input video sequence (31) by applying a filter with a block size of 15 x 15 (42);
estimating an atmospheric light in the input image corresponding to the top lightest 0.1% in the dark channel (43);
estimating a plurality of transmission maps when the transmission in a local patch of the input image is constant (44);
refining the estimated transmission maps using image soft matting with 3 x 3 block size (45);
recovering a scene radiance by eliminating the estimated transmission maps and the estimated atmospheric light for providing the processed input video sequence for haze removal (46);
applying an adaptive histogram equalization by limiting the contrast on each pixel of the input image with respect to a center of a contextual region of the input image of the processed input video sequence (47); and
enhancing the color information by applying the gamma correction on the contrast enhanced video sequence (48).

18. The method as claimed in claim 12, wherein tracking the plurality of naval targets continuously by the single target tracking module (24), said module (24) further comprises:
initializing the parameters of the single target tracking module (24) (51);
capturing the consecutive images from the input video sequence (52);
capturing the present image and store the previous image (53);
detecting a plurality of good features such as the corners, edges and texture in the stored previous image (54);
calculating the motion flow between the previous image and the current image with respect to the plurality of good features in the previous image (55);
estimating a background pixel movement using the calculated motion flow to provide the displacement of the good features (56);
locating a moving object in a scene of the input video sequence by thresholding the estimated background pixel movement and the displacement of good features (57);
determining a centroid of the moving object (58);
determining a location of the centroid of the moving object in the next image using predictive filtering (59);
implementing a deadzone check to maintain the centroid location within the acceptable boundary of the moving object in the next image (60);
updating and directing the pedestal parameters to keep the moving object in the center of the image (61); and
displaying the output video sequence (62).

19. The method as claimed in claim 18, wherein the pedestal parameters include the pan and tilt coordinates.

20. The method as claimed in claim 12, wherein generating the panorama by the panorama generation module (25), said module (25) further comprises:
initializing the panorama generation module (25) and the pedestal parameters (71, 72);
capturing the input video sequence from the plurality of image sensors mounted on the pedestal (73);
updating a pixel shift from a look-up table based on the pedestal parameters (74);
acquiring the present image from the input video sequence;
applying the image blending to stitch the plurality of the consecutive images based on the pixel shift (75); and
developing the panorama for the sector view mode or the 360 degree view mode by repeating the stitching process iteratively (76).

Documents

Application Documents

# Name Date
1 202241019718-PROVISIONAL SPECIFICATION [31-03-2022(online)].pdf 2022-03-31
2 202241019718-FORM 1 [31-03-2022(online)].pdf 2022-03-31
3 202241019718-DRAWINGS [31-03-2022(online)].pdf 2022-03-31
4 202241019718-Proof of Right [14-06-2022(online)].pdf 2022-06-14
5 202241019718-FORM-26 [14-06-2022(online)].pdf 2022-06-14
6 202241019718-Correspondence_Form-1_20-06-2022.pdf 2022-06-20
7 202241019718-FORM 3 [17-03-2023(online)].pdf 2023-03-17
8 202241019718-ENDORSEMENT BY INVENTORS [17-03-2023(online)].pdf 2023-03-17
9 202241019718-DRAWING [17-03-2023(online)].pdf 2023-03-17
10 202241019718-CORRESPONDENCE-OTHERS [17-03-2023(online)].pdf 2023-03-17
11 202241019718-COMPLETE SPECIFICATION [17-03-2023(online)].pdf 2023-03-17
12 202241019718-POA [04-10-2024(online)].pdf 2024-10-04
13 202241019718-FORM 13 [04-10-2024(online)].pdf 2024-10-04
14 202241019718-AMENDED DOCUMENTS [04-10-2024(online)].pdf 2024-10-04
15 202241019718-Response to office action [01-11-2024(online)].pdf 2024-11-01