Abstract: An enhancement method for vision camera systems. The invention relates to image/signal processing for a device sensitive to wavelengths such as visible light and infrared radiation. In an embodiment, the processing includes converting incident radiation into electrical voltages by an image sensor which is a planar array of detectors, providing a straight radiation path and focusing the incident radiation on the image sensor by an optical system, receiving digital data coming from the image sensor, produced by each of the individual detectors at a particular rate, by a processing system, and applying an image enhancement mechanism after pre-processing and NUC filtering on the digital data. The image enhancement mechanism is configured to improve the quality of the digital data coming from the sensor and enable it to be displayed on low dynamic range monitors. Figure 2 (to be published)
Claims: We Claim:
1. An enhancement method for vision camera systems, the method comprising:
converting incident radiation into electrical voltages by an image sensor which is a planar array of detectors, wherein the electrical voltages are multiplexed and rehabilitated as digital data by an integrated circuit;
providing a straight radiation path and focusing the incident radiation on the image sensor by an optical system;
receiving digital data coming from the image sensor, produced by each of the individual detectors at a particular rate, by a processing system; and
applying an image enhancement mechanism after pre-processing and NUC filtering on the digital data, wherein the image enhancement mechanism is configured to improve the quality of the digital data coming from the sensor and enable it to be displayed on low dynamic range monitors.
2. The method as claimed in claim 1, wherein the image enhancement mechanism includes the following steps:
receiving an input image of a scene by a fractional differentiation module and using determined multiplying factors to filter the image to obtain a detail image;
testing whether the detail image is within the processing limit of the hardware by a mechanism;
non-linearly mapping the dynamic range of the detail image by a module;
subtracting the detail image from the input image to generate a base image by a base image generation module;
reducing the dynamic range of the detail and base image by a dynamic range compression module; and
computing the summation of the detail image and base image based on the base scale factor parameter by an absolute summation module.
3. The method as claimed in claim 2, wherein the fractional differentiation module is designed using an application of a normalized or non-normalized fractional order differential filter which extracts the detail from the input image to obtain the detail image, where the detail image varies with a parameter 'v', which is the non-integer order of the fractional differential filter, being always less than 1 and varied to obtain the desired detail image.
4. The method as claimed in claim 2, wherein the detail image is checked to be within the hardware processing limit by comparing the minimum and maximum values of the output of the fractional differentiation module with set specified limits depending on the processing capability of the hardware.
5. The method as claimed in claim 2, wherein the detail image is mapped to the hardware processing limits, if it is above the limits, by linearly adjusting the values of the output of the fractional differentiation module.
6. The method as claimed in claim 2, wherein the fractional order differential filter is for detail enhancement of the digital data coming from the sensor and enables it to be displayed on a dynamic range display module.
7. The method as claimed in claim 2, wherein the base image is separated from the detail image by subtracting the detail image from the input image.
8. The method as claimed in claim 2, further comprising a dynamic range compression mechanism configured to reduce the dynamic range of the incoming image data, wherein a histogram of the input digital image is maintained, equalized and stored to enhance the input digital image, and updated for every frame.
9. The method as claimed in claim 8, wherein the detail image and base image are passed to the dynamic range compression mechanism.
10. The method as claimed in claim 2, wherein the base scale factor mechanism is to adjust the ratio of base and detail, which are added in proportion as per the base scale factor to get the final enhanced image.
11. A method of improving a digital image comprising the steps of:
generating a temporary image, which contains all the detail in the image, by using a fractional differentiation module on the input image and varying the parameter v to get the desired level of detail;
checking whether the temporary image, which contains the extracted detail, is within the hardware processing limits;
bringing the temporary image within the hardware processing limits, if it is not, using a map dynamic range module;
generating the base image by subtracting the mapped temporary image, used as the detail image, from the input image;
applying the dynamic range compression module on the detail and base images so as to enable the image to be viewed on the monitors; and
finally, adjusting the amount of base and detail in the final image using the base scale factor parameter to obtain the final enhanced image which can be viewed on monitors.
Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10, rule 13)
“An enhancement method for vision camera systems”
By
BHARAT ELECTRONICS LIMITED
Nationality: Indian
M/s. Bharat Electronics Limited, Corporate Office, Outer Ring Road, Nagavara, Bangalore-560045, Karnataka, India
The following specification particularly describes the invention and the manner in which it is to be performed.
Field of the invention
The present invention mainly relates to image processing and more particularly to the image/signal processing for a device (vision camera systems and particularly thermal camera systems) sensitive to wavelengths such as visible light and infrared radiation.
Background of the invention
Vision camera systems such as Charge-Coupled Device cameras and thermal cameras may have a Charge-Coupled Device (CCD) sensor for visible light and/or a thermal, infrared or IR sensor for infrared wavelengths, depending on the desired application and requirement. Vision camera systems are important in industrial, commercial, scientific and military applications such as surveillance, monitoring, inspection and vehicle sight, particularly at night.
Thermal Cameras or Night Vision Systems or Night Vision Devices have a thermal or infrared sensor, consisting of a planar array of detectors, called a thermal sensor, which is sensitive to the infrared region of electromagnetic radiation. This sensor converts the incident infrared radiation to electrical signals which are processed so as to enable them to be displayed for the viewer. These systems enable the viewer to view a scene irrespective of the time of the day and are not dependent on ambient lighting.
An image sensor converts the incident electromagnetic radiation into electrical voltages. Thermal sensors are semiconductor devices sensitive to the infrared radiation or thermal signature of a scene and convert incident infrared radiation to electrical voltages. The thermal signature depends on a number of factors including the shape and size of the object, the temperature of the object and the background, atmospheric conditions, etc. Thermal sensors are bonded to a CMOS based ROIC (Read-Out Integrated Circuit) to form an FPA (Focal Plane Array) and give digital data via an analog-to-digital converter, included as part of the infrared sensor, to which the outputs of the detectors are multiplexed through the ROIC. The FPA is on a ceramic carrier, as in un-cooled detectors, or may be encapsulated in a Dewar and cooled by a closed cycle Stirling cooler, as in cooled detectors.
Visible radiation is electromagnetic radiation which may be sensed by the human eye and ranges from 400 nanometres to 900 nanometres. Infrared (IR) radiation is electromagnetic radiation having wavelengths from 700 nanometres to about 1 millimetre. IR radiations are invisible radiations that may be caused or produced by heat and are emitted in proportion to the temperature and emissivity of the object. The regions of wavelength between 3-5 micrometres and 8-14 micrometres correspond to minima in atmospheric absorption, enabling their easy detection from a distance. Most thermal sensors work in these two infrared windows and give digital data having a high dynamic range, in the order of 8192 to 65536 different discrete levels. The wavelengths specified may be overlapping and are not limiting.
A digital image, including a thermal or infrared image, a CCD image, etc., is a two dimensional array of elements called picture elements or pixels, the value of each of which represents the apparent brightness level measured by the image sensor (including a thermal sensor) and varies from image source to image source, having a fixed range depending on the sensor. A set of digital images is generated by a vision camera system at a rate called frames per second, where each frame is one digital image. Digital images such as thermal images may be of low contrast and noisy and must undergo image processing to produce an enhanced image.
A problem caused by dynamic range is the mismatch between the signal dynamic range of sensed digital images, such as thermal images, and the dynamic range available at a display. Modern sensors, particularly infrared sensors, can produce images with 13-16 bit levels, which usually exceed the usual 8-bit limit of display devices. Thus a method is required to reduce the dynamic range of the image and at the same time preserve the details in the image. By enhancing the details in the image from the image sensor (camera, sensor array) and/or by improving the gain in the contrast of the image, the quality of the images recorded in the visible or infrared domain may be considerably improved.
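By way of a non-limiting illustration (not part of the specification), the mismatch can be seen in a naive linear rescaling of a 14-bit frame to the 8-bit display range; the function name is hypothetical:

```python
import numpy as np

# A naive linear mapping of a 14-bit sensor frame (0..16383) to the 8-bit
# display range (0..255). Integer arithmetic avoids rounding error at the
# range endpoints. Such a mapping preserves global brightness ordering but
# collapses fine detail, which motivates a detail-preserving method.
def linear_drc(frame, in_bits=14, out_bits=8):
    out_max = 2 ** out_bits - 1
    in_max = 2 ** in_bits - 1
    return (frame.astype(np.uint32) * out_max // in_max).astype(np.uint8)
```

With this mapping the 14-bit levels 0, 8191 and 16383 land on 0, 127 and 255, so roughly 64 distinct sensor levels collapse onto each display level and small temperature differences become invisible.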
A paper, "A Fractional Differential-based Approach for Multi-scale Texture Enhancement," presents six fractional differential masks and performs theoretical and experimental analysis on them. It also discusses multi-scale fractional differential masks for texture enhancement. The proposed masks are applied on the image to obtain a texture enhanced image. For high values of the order, the image no longer appears natural.
Another paper proposes a new image-enhancing algorithm based on the 2-D digital fractional order Savitzky–Golay differentiator (DFOSGD), and an unsupervised optimization algorithm for choosing the fractional-order parameter. At high values of the fractional order, the image no longer looks natural, with artifacts coming up.
A further paper explains a method of display and detail enhancement for high dynamic range infrared images. It involves the application of a bilateral filter to separate the input image into a base component and a detail component. The base and detail layers are refined using an adaptive Gaussian filter; the base layer is projected to the display range and the detail layer is enhanced using an adaptive gain control approach, and they are finally combined. In this method, the detail image is obtained from the base image.
For example, document CN1917576A describes an apparatus for use in enhancing the complex texture details of a digital image in real-time. It comprises a memory module, a phase-locking/shift circuit, a fractional order differential mask convolution circuit and a maximum value comparator. In the fractional order differential mask convolution circuit, eight dedicated algorithm circuits use a fractional order differential mask convolution scheme to implement spatial filtering of the digital image by fractional order differentiation.
Another document, CN101848319A, describes a fractional calculus filter for digital images, which is a circuit device for fractionally enhancing or smoothing the complex texture detail characteristics of digital images at high precision. The filter comprises an RGB-to-HSI converter, a line memory group, a phase locking/shift circuit group, a fractional calculus mask convolution circuit, a maximum comparator and an HSI-to-RGB converter in cascade. Fractional calculus mask convolution algorithms are used by the first to eighth algorithm unit circuits in the fractional calculus mask convolution circuit.
Therefore, there is a need in the art for a method which enhances the details in images by image/signal processing for vision camera systems, and particularly thermal camera systems, and which solves the above mentioned limitations.
Objective of the invention
The main objective of the present invention is to provide a method for improving the infrared image and displaying it on monitors for vision camera systems.
The present invention enables the viewer to view a scene irrespective of the time of the day, independent of ambient lighting, without mismatch between the signal dynamic range of the sensed digital images and that of the display.
Summary of the Invention
An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
Accordingly, an aspect of the present invention is to provide a method for vision camera systems. The steps of the method include converting incident radiation into electrical voltages by an image sensor which is a planar array of detectors, wherein the electrical voltages are multiplexed and rehabilitated as digital data by an integrated circuit; providing a straight radiation path and focusing the incident radiation on the image sensor by an optical system; receiving digital data coming from the image sensor, produced by each of the individual detectors at a particular rate, by a processing system; and applying an image enhancement mechanism after pre-processing and NUC filtering on the digital data, wherein the image enhancement mechanism is configured to improve the quality of the digital data coming from the sensor and enable it to be displayed on low dynamic range monitors.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
Brief description of the drawings
The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings in which:
Figure 1 is a flow diagram illustrating the vision camera system according to one embodiment of the present invention.
Figure 2 is a flowchart depicting the method of displaying the data from image sensor according to one embodiment of the present invention.
Figure 3 shows a block diagram of the Vision Camera System according to one embodiment of the present invention.
Figure 4 shows a flowchart of improving the image and particularly thermal image according to one embodiment of the present invention.
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may have not been drawn to scale. For example, the dimensions of some of the elements in the figure may be exaggerated relative to other elements to help to improve understanding of various exemplary embodiments of the present disclosure. Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
Detailed description of the invention
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention are provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
Figs. 1 through 4, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way that would limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged communications system. The terms used to describe various embodiments are exemplary. It should be understood that these are provided to merely aid the understanding of the description, and that their use and definitions, in no way limit the scope of the invention. Terms first, second, and the like are used to differentiate between objects having the same terminology and are in no way intended to represent a chronological order, unless where explicitly stated otherwise. A set is defined as a non-empty set including at least one element.
The present invention relates to a method of signal processing for sensors sensitive to wavelengths such as visible light and infrared radiation including those capable of generating high dynamic range images and producing images with reduced dynamic range for vision camera systems and particularly thermal camera system using a non- integer order of differentiation to enhance the details in the image.
Recently, fractional calculus has been gaining interest in basic sciences and engineering applications. The fractional differentiation of a constant value or low frequency signal is usually non-zero, as against traditional integer order differentiation (the Sobel, Prewitt and Roberts operators, the second order Laplace operator, etc.) where the low frequency details of the image are lost. At the same time, it non-linearly enhances the middle and high frequency components of the signal. To more precisely define the fractional derivative filter, the following terminology is used:
f is a one dimensional function of an independent variable t or x.
D^v f(x) is the derivative of the function f with respect to x of order v, which may be an integer or non-integer order. A number of definitions of the fractional derivative exist, of which the Riemann-Liouville and Grünwald-Letnikov definitions are the most popular. The Cauchy expression for calculating the mth order integration is given as:

D^{-m} f(x) = \frac{1}{(m-1)!} \int_a^x (x-\tau)^{m-1} f(\tau) \, d\tau

By the Riemann-Liouville left hand definition, the vth order derivative of a function f(x), such that m-1 < v < m, is:

D^v f(x) = \frac{1}{\Gamma(m-v)} \frac{d^m}{dx^m} \int_a^x \frac{f(\tau)}{(x-\tau)^{v-m+1}} \, d\tau
The expression can be simplified and expressed in terms of a discrete summation instead of integration by numerical methods. For a 2D image, the minimum distance between any two pixels is 1. Also, the value of v is so chosen as to obtain a proper 2D mask which is to be applied on the image.
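As a non-limiting illustrative sketch (not part of the specification), a truncated Grünwald-Letnikov approximation with unit pixel spacing may be coded as follows; the function names are hypothetical:

```python
import numpy as np

# Grunwald-Letnikov coefficients c_k = (-1)^k * binom(v, k), generated by
# the recurrence c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k.
# For 0 < v < 1 these are 1, -v, v(v-1)/2, ...
def gl_coefficients(v, n):
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - v) / k
    return c

# A truncated fractional derivative along one row of pixels (unit spacing).
# Unlike an integer-order derivative, a constant (low frequency) signal
# yields a non-zero response, so low frequency content is retained.
def fractional_detail_1d(signal, v, n_terms=3):
    c = gl_coefficients(v, n_terms)
    out = np.zeros(len(signal))
    for k in range(n_terms):
        out[k:] += c[k] * signal[:len(signal) - k]
    return out
```

For v = 0.5 the first three coefficients are 1, -0.5 and -0.125; applied to a constant signal of value 5, the fully overlapped samples give 5 × (1 − 0.5 − 0.125) = 1.875, a non-zero response, in contrast to integer-order operators.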
The present invention relates to image processing for a device for vision, and particularly night vision, in which a sensor that is sensitive to the visible or infrared region of the electromagnetic spectrum delivers images which are displayed on a display arrangement having a dynamic range that may be lower than that of the sensor. The image processing method improves the raw images from the sensor using image enhancement techniques, which may include detail-enhancement methods and contrast-enhancing methods, so that display on a display arrangement is made possible for viewing by a viewer. The method, involving the application of fractional differentiation, gives detail information and is less susceptible to noise. The method described here is simple and yet gives good results compared to other techniques for image enhancement of vision systems.
The expression “raw data” in the instant disclosure refers to digital data, or the digital representation of analog electrical signals, coming via an analog-to-digital converter from each of the detectors of a sensor, particularly an infrared sensor, wherein the outputs of the individual detectors are multiplexed to the Read-Out Integrated Circuit, and represents the brightness level measured by the detector, varying from image source to image source with a fixed range depending on the sensor.
The expression “pre-processed” in the instant disclosure refers to the digital data which is filtered from noise and arranged in two dimensional arrays to obtain an image frame.
The expression “noise” in the instant disclosure refers to the image noise that is all the unwanted additions to the signal coming from image sensor starting from the generation of signal from individual detector itself. It includes photon noise, thermal noise, impulse noise, structured noise such as fixed pattern noise and any other kinds of noise.
The expression “Image enhancement” in the instant disclosure refers to methods that modify each of the elements of an input digital image to a new value to produce an output or enhanced image, such that the output or enhanced image is more visually pleasing to the human eye than the input image.
The present invention comprises a main processing element for the extraction of a detail image and a base image from the noise filtered and non-uniformity corrected image. The acquired image is passed through a detail extraction filter to get the detail component, and all the details are subtracted from the acquired image to get the base image. Both the detail and base images undergo dynamic range compression to enable them to be displayed on lower dynamic range monitors. The two components are added as per a user defined parameter to obtain the final image. The method described in the present invention is able to improve the contrast of the image and at the same time enhance the details, with low susceptibility to noise.
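The processing chain described above may be sketched as follows (a non-limiting illustration: `detail_filter` stands in for the fractional differentiation module, and simple min-max scaling stands in for the histogram-based dynamic range compression; all names are hypothetical):

```python
import numpy as np

# Min-max scaling to the display range; a stand-in for the dynamic range
# compression mechanism described in the text.
def compress(img, out_max=255):
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.zeros_like(img, dtype=float)
    return (img - lo) / (hi - lo) * out_max

def enhance(image, detail_filter, base_scale_factor=0.5):
    detail = detail_filter(image)          # extract the detail component
    base = image - detail                  # base = acquired image - detail
    base_c, detail_c = compress(base), compress(detail)
    # Recombine in a user defined proportion (base scale factor).
    out = base_scale_factor * base_c + (1.0 - base_scale_factor) * detail_c
    return np.clip(out, 0, 255).astype(np.uint8)
```

The base scale factor plays the role of the user defined parameter: values nearer 1 favour the large-scale base component, while values nearer 0 emphasise the extracted detail.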
Figure 1 is a flow diagram illustrating the vision camera system according to one embodiment of the present invention.
The figure shows the flow diagram of the vision camera system 106. An Image Sensor 104 may be a planar array of detectors which is sensitive to wavelengths such as visible light and/or infrared radiation, including those capable of generating high dynamic range images. Processing Unit 102 may comprise a microprocessor, single or multi-core processor, microcontroller, Field Programmable Gate Array which may be configured to perform signal processing, or any other type of processor. The processing component is designed to communicate and interface with the Image Sensor, Memory, Display Device and Control Unit. The processing unit is adapted to receive the generated control signal from the control unit, interpret the input control signal, perform various types of signal and image processing operations on the signals generated from 104 as per the user settings from the control signal to obtain enhanced digital image signals, and store and/or retrieve the digital image data in/from 100. Memory Component 100 temporarily stores the actual image amplitude values of every frame. This data is processed frame by frame and then displayed. The memory has as many addressable words as there are possible image amplitude values. All the default configurations may also be stored on the memory component. The set of instructions to be performed by the processing component, such as software code, is also stored on 100. Microcontroller 101 is a device configured to provide an interface to different components including the image sensor and the display device and at the same time perform signal and image processing and generate control signals. The Processing Unit is designed to execute instructions which are stored as software programs in non-volatile memory components and to process the actual image amplitude signals from the volatile memory devices.
The memory component may comprise various types of memory devices which may include Random Access Memory (RAM), Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), SDRAM, flash memory or other types of media/machine readable medium suitable for storing electronic instructions. Control Unit 103 is configured to obtain inputs from the user through a user interface. It also generates various internal control signals. The enhanced thermal image is displayed on a low dynamic range display device 105.
Figure 2 is a flowchart depicting the method of displaying the data from image sensor according to one embodiment of the present invention.
The figure is a flowchart depicting the method of displaying the data from the image sensor 210. The image sensor 200 gives high dynamic range digital sensor data 202, or raw data. This raw data is pre-processed in Pre-Process Data block 201 to obtain a frame and filter noise. A frame is a two dimensional array of digital data obtained by combining 202, coming from the ROIC for the analog electrical voltages generated by the elements of the sensor for a particular thermal scene, at a particular rate called frames per second depending on the sensor configuration.
Noise is one of the important factors affecting the quality of the sensor data, the effectiveness of image enhancement methods and ultimately the IR image. Noise may be filtered out using filters such as an adaptive median filter, averaging filter, etc., which may cause unwanted smoothening of the edges. The pre-processed output 203 is fed to NUC Correction component 204.
Individual elements of the thermal sensor, i.e., each of the individual detectors, differ due to the limitations of the manufacturing process. When exposed to a uniform temperature, these detectors may generate different response voltages. This difference is the cause of non-uniformity errors, which are described in terms of gain and offset. All detectors should generate equal voltages when exposed to the same intensity of radiation. Gain and offset correction maps are prepared based on this deviation from expected behaviour to correct the non-uniformity. Non-uniformity may be corrected using the two-point method, where the gain and offset response of each element of the detector is approximated by measuring its response at two different calibration temperatures; this is called the two-point calibration method.
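A non-limiting sketch of the two-point calibration described above, assuming two frames captured while the sensor views uniform scenes at a low and a high calibration temperature (function names are hypothetical):

```python
import numpy as np

# Per-pixel gain and offset maps derived from two uniform-temperature
# calibration frames (the two-point method). Each corrected detector is
# mapped so that it reports the array-mean response at both temperatures.
def two_point_nuc(cal_low, cal_high):
    m_low, m_high = cal_low.mean(), cal_high.mean()
    gain = (m_high - m_low) / (cal_high - cal_low)   # per-detector gain map
    offset = m_low - gain * cal_low                  # per-detector offset map
    return gain, offset

def apply_nuc(raw, gain, offset):
    # After correction, all detectors give equal output for equal irradiance.
    return gain * raw + offset
```

Applying the maps back to the calibration frames themselves should yield flat images at the two mean levels, which is a convenient sanity check for the correction tables.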
The Non-Uniformity corrected output 205 is then passed through detail enhancement block 206 for image enhancement. The dynamic range of the image is reduced and the video is encoded in Display Processing mechanism 207. The output image has to be displayed on a display device using a video connector which may include an S-video output, VGA (Video Graphics Array) output, DVI (Digital Visual Interface) output, USB (Universal Serial Bus) port or any other method of transferring the video data from the camera system to the display monitor. Finally, the improved thermal image 209 is displayed on low dynamic range display device 208.
Figure 3 shows a block diagram of the Vision Camera System according to one embodiment of the present invention.
The figure illustrates the block diagram of Vision Camera System 301 for producing, processing and displaying digital images from a given scene 300. Vision Camera System may include a Power Supply Component 322, Memory Component 316, Processing Component 306, Control Component 305, Display Component 321, Optics or Lens System 302, Shutter Component 317 and an image sensor 303.
The lens system 302 is a refractive optical element used to focus the incident radiation, such as a thermal signature or visible light, from a scene 300 on the image sensor 303 by providing a straight path for the focused incident radiation from 300 to 303. Shutter Component 317 periodically and automatically blocks the radiation path, in the case of a thermal scene generating infrared radiation to 303, for a very small interval using a shutter, to correct the offset table for each detector element of 303, calibrating each element of 303 as spatial non-uniformity drifts with time. Control Component 305 is configured to obtain inputs from the user through a user interface, which may take one or more forms such as a touch input along with a display, a keypad, buttons, etc. The user can vary parameters using various controls such as: Optics Control 307 to change the settings of 302, which may include field of view, focus, zoom, etc.; Sensor Control 308, which may be used to control the output of 303, such as frame rate, integration time, gain, bad pixel replacement, data rate, etc.; IIR Parameter 309, which may be used to control the Infinite Impulse Response filter parameters for temporal noise removal in 306; BSF (Base Scale Factor) Parameter 310, which may vary the amount of detail in the enhanced image; Detail Parameter 311, which may be used to adjust the value of v, that is, the non-integer or fractional order of fractional differentiation; and DRC Parameter 312 to control the dynamic range compression mechanism in the method.
The processing component 306 may comprise a microprocessor, single or multi-core processor, microcontroller, Field Programmable Gate Array which may be configured to perform signal processing, or any other type of processor. The processing component is designed to communicate and interface with components 303, 305, 316 and 321. The processing component 306 is adapted to receive the generated control signal from the control component, interpret the input control signal, perform various types of signal and image processing operations on the signals generated from 303 as per the user settings from the control signal to obtain enhanced digital image signals, and store and/or retrieve the digital image data in/from 316. The processing component consists of Pre-Processing Module 313, IIR Filter module 314, NUC Correction Module 315, Image Enhancement Module 318 and the Display Module 320. The output of 320 is fed to the display component 321, where the viewer can see the final enhanced thermal image 319 on 321.
The memory component 316 temporarily stores the actual image amplitude values of every frame. This data is processed frame by frame and then displayed. The memory has as many addressable words as there are possible image amplitude values. All the default configurations may also be stored on the memory component. The set of instructions to be performed by the processing component is also stored on 316. The Processing Component is designed to execute instructions which are stored as software programs in non-volatile memory components and to process the actual image amplitude signals from the volatile memory devices. The memory component may comprise various types of memory devices which may include RAM, ROM, PROM, EPROM, EEPROM, SDRAM, flash memory or other types of media/machine readable medium suitable for storing electronic instructions. Power supply component 322 is configured to generate all the voltage requirements of the different components of 301, such as 302, 303, 306, 316, 317, 321, etc.
In one embodiment, the present invention relates to an enhancement method for vision camera systems, the method comprising: converting incident radiation into electrical voltages by an image sensor which is a planar array of detectors, wherein the electrical voltages are multiplexed and converted into digital data by an integrated circuit; providing a straight radiation path and focusing incident radiation on the image sensor by an optical system; receiving, by a processing system, the digital data coming from the image sensor produced by each of the individual detectors at a particular rate; and applying an image enhancement mechanism after pre-processing and NUC filtering on the digital data, wherein the image enhancement mechanism is configured to improve the quality of the digital data coming from the sensor and enable it to be displayed on low dynamic range monitors.
Figure 4 shows a flowchart of a method for improving an image, and particularly a thermal image, according to one embodiment of the present invention.
The figure illustrates the process flow diagram of the image enhancement method. The output of the NUC correction module 401 is the NUC corrected image 402, which is passed to the Image Enhancement Module. The image enhancement mechanism includes the following steps: receiving an input image of a scene by a fractional differentiation module 403 and using determined multiplying factors to filter the image to obtain a detail image; testing whether the detail image is within the processing limit of the hardware; non-linearly mapping the dynamic range of the detail image; subtracting the detail image from the input image to generate a base image by a base image generation module; reducing the dynamic range of the detail and base images by a dynamic range compression module; and computing the summation of the detail image and base image based on the base scale factor parameter by an absolute summation module. The fractional differentiation module is designed using a normalized or non-normalized fractional order differential filter which extracts the detail from the input image to obtain the detail image. The detail image varies with a parameter 'v', the non-integer order of the fractional differential filter, which is always less than 1 and is varied to obtain the desired detail image. The fractional order differential filter enhances the detail of the digital data coming from the sensor and enables it to be displayed on a dynamic range display module. The detail image is checked against the hardware processing limit by comparing the minimum and maximum values of the output of the fractional differentiation module with specified limits set according to the processing capability of the hardware. The obtained detail image 404 may or may not exceed the processing limits of the hardware. If it is above the limits, the detail image is mapped to the hardware processing limits by linearly adjusting the values of the output of the fractional differentiation module.
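The fractional differentiation step described above can be sketched as follows. This is only an illustrative approximation, assuming a short Grünwald–Letnikov expansion of a fractional derivative of order v applied along rows and columns; the function names and the three-tap mask length are assumptions, not taken from the specification:

```python
import numpy as np

def gl_coeffs(v, n=3):
    # First n Grunwald-Letnikov coefficients (-1)^k * C(v, k):
    # c0 = 1, c1 = -v, c2 = v(v-1)/2, ...
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return np.array(c)

def fractional_detail(img, v=0.5, n=3):
    # Convolve the n-tap fractional mask backward along x and y and sum
    # the two responses; the result serves as the "detail image".
    c = gl_coeffs(v, n)
    f = img.astype(np.float64)
    pad = np.pad(f, n - 1, mode="edge")
    h, w = f.shape
    dx = sum(c[k] * pad[n-1:n-1+h, n-1-k:n-1-k+w] for k in range(n))
    dy = sum(c[k] * pad[n-1-k:n-1-k+h, n-1:n-1+w] for k in range(n))
    return dx + dy
```

Because v < 1, the mask coefficients do not sum to zero, so, unlike an integer-order derivative, the fractional derivative retains some low-frequency content; this is one reason fractional-order filters are used for texture-preserving detail extraction.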
In case it does exceed the limits, the output is brought within them so that the hardware can perform further processing; the Map Dynamic Range Component 407 non-linearly maps the detail image within the threshold limits. The limits may be set based on the processing limits of the hardware. Once the outputs are within the permissible limits, this detail component 414 is passed through the dynamic range compression mechanism 408. The dynamic range compression mechanism is configured to reduce the dynamic range of the incoming image data. The histogram of the input digital image is maintained, equalized and stored to enhance the input digital image, and is updated for every frame. This block may be based on a number of techniques, including adaptive histogram equalization and histogram projection based methods. The base image 415 is obtained by subtracting the detail image which is within the processing limits of the hardware, that is, either 412 or 413, from the NUC corrected image 410 from 401. This base image 415 and the detail image are also passed through the dynamic range compression block 409. The base scale factor 416 may be varied by the user, and based on this parameter the two images may be merged to obtain the final processed image 418, which is fed to the video encoder block 417.
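The base/detail merge described in this paragraph can be sketched as below. This is a minimal illustration, assuming plain histogram equalization as the dynamic range compression technique and a simple weighted sum controlled by the base scale factor; the function names and the exact blending rule are assumptions, not the specification's definitive implementation:

```python
import numpy as np

def compress_dynamic_range(img, bins=256, lo=0.0, hi=255.0):
    # Histogram-equalize one image into [lo, hi]; the histogram is
    # recomputed for every frame, matching the per-frame update above.
    hist, edges = np.histogram(img, bins=bins)
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img, edges[:-1], lo + cdf * (hi - lo))

def merge_base_detail(nuc_img, detail, base_scale=0.7):
    # Base image = NUC-corrected input minus the (limit-mapped) detail.
    base = nuc_img.astype(np.float64) - detail
    base_c = compress_dynamic_range(base)
    detail_c = compress_dynamic_range(detail)
    # Merge the two compressed images as per the base scale factor.
    out = base_scale * base_c + (1.0 - base_scale) * detail_c
    return np.clip(out, 0, 255).astype(np.uint8)
```

Clipping the blend into an 8-bit range reflects the stated goal of displaying high dynamic range sensor data on low dynamic range monitors.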
In another embodiment, the present invention may relate to a method of improving a digital image comprising the steps of: generating a temporary image, which contains all the detail in the image, by applying a fractional differentiation module to the input image and varying the parameter v to obtain the desired level of detail; checking whether the temporary image containing the extracted detail is within the hardware processing limits; bringing the temporary image within the hardware processing limits, if it is not, using a map dynamic range module; generating the base image by subtracting the mapped temporary image, used as the detail image, from the input image; applying the dynamic range compression module to the detail and base images so as to enable the image to be viewed on monitors; and adjusting the amount of base and detail in the final image using the base scale factor parameter to obtain the final enhanced image, which can be viewed on monitors.
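The limit check and remapping step of this embodiment could look like the following sketch. It uses a linear rescale (the specification also mentions non-linear mapping for the corresponding component); the limit values and function name are illustrative assumptions:

```python
import numpy as np

def map_to_hardware_limits(detail, lo=-128.0, hi=127.0):
    # Compare the detail image's min/max against the set limits; leave it
    # untouched when it is already within the hardware processing range.
    dmin, dmax = float(detail.min()), float(detail.max())
    if lo <= dmin and dmax <= hi:
        return detail
    # Otherwise linearly remap the full range into [lo, hi].
    return lo + (detail - dmin) * (hi - lo) / (dmax - dmin)
```

Returning the input unchanged when it already fits avoids needlessly rescaling (and thus attenuating) detail that the downstream hardware can process as-is.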
Those skilled in this technology can make various alterations and modifications without departing from the scope and spirit of the invention. Therefore, the scope of the invention shall be defined and protected by the following claims and their equivalents.
FIGS. 1-4 are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated, while others may be minimized. FIGS. 1-4 illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively.
We Claim:
1. An enhancement method for vision camera systems, the method comprising:
converting incident radiation into electrical voltages by an image sensor which is a planar array of detectors, wherein the electrical voltages are multiplexed and converted into digital data by an integrated circuit;
providing a straight radiation path and focusing incident radiation on the image sensor by an optical system;
receiving, by a processing system, the digital data coming from the image sensor produced by each of the individual detectors at a particular rate; and
applying an image enhancement mechanism after pre-processing and NUC filtering on the digital data, wherein the image enhancement mechanism is configured to improve the quality of the digital data coming from the sensor and enable it to be displayed on low dynamic range monitors.
2. The method as claimed in claim 1, wherein the image enhancement mechanism includes the following steps:
receiving an input image of a scene by a fractional differentiation module and using determined multiplying factors to filter the image to obtain a detail image;
testing whether the detail image is within the processing limit of the hardware by a mechanism;
non-linearly mapping the dynamic range of the detail image by a module;
subtracting the detail image from the input image to generate a base image by a base image generation module;
reducing the dynamic range of the detail and base image by a dynamic range compression module; and
computing the summation of the detail image and base image based on the base scale factor parameter by an absolute summation module.
3. The method as claimed in claim 2, wherein the fractional differentiation module is designed using a normalized or non-normalized fractional order differential filter which extracts the detail from the input image to obtain the detail image, where the detail image varies with a parameter 'v', a non-integer order of the fractional differential filter, which is always less than 1 and is varied to obtain the desired detail image.
4. The method as claimed in claim 2, wherein the detail image is checked against the hardware processing limit by comparing the minimum and maximum values of the output of the fractional differentiation module with specified limits set depending on the processing capability of the hardware.
5. The method as claimed in claim 2, wherein the detail image is mapped to the hardware processing limits, if it is above the limits, by linearly adjusting the values of the output of the fractional differentiation module.
6. The method as claimed in claim 2, wherein the fractional order differential filter is for detail enhancement of the digital data coming from the sensor and enables it to be displayed on a dynamic range display module.
7. The method as claimed in claim 2, wherein the base image is separated from detail image by subtracting the detail image from the input image.
8. The method as claimed in claim 2, further comprising a dynamic range compression mechanism configured to reduce the dynamic range of the incoming image data, wherein a histogram of the input digital image is maintained, equalized and stored to enhance the input digital image, and is updated for every frame.
9. The method as claimed in claim 8, wherein the detail image and base image are passed to the dynamic range compression mechanism.
10. The method as claimed in claim 2, wherein the base scale factor mechanism adjusts the ratio of base and detail, which are added in proportion as per the base scale factor to obtain the final enhanced image.
11. A method of improving a digital image comprising the steps of:
generating a temporary image, which contains all the detail in the image, by applying a fractional differentiation module to the input image and varying the parameter v to obtain the desired level of detail;
checking whether the temporary image containing the extracted detail is within the hardware processing limits;
bringing the temporary image within the hardware processing limits, if it is not, using a map dynamic range module;
generating the base image by subtracting the mapped temporary image, used as the detail image, from the input image;
applying the dynamic range compression module to the detail and base images so as to enable the image to be viewed on the monitors; and
adjusting the amount of base and detail in the final image using the base scale factor parameter to obtain the final enhanced image, which can be viewed on monitors.
Abstract
An enhancement method for vision camera systems
The invention relates to image/signal processing for a device sensitive to wavelengths such as visible light and infrared radiation. In an embodiment, the processing includes converting incident radiation into electrical voltages by an image sensor which is a planar array of detectors, providing a straight radiation path and focusing incident radiation on the image sensor by an optical system, receiving, by a processing system, the digital data coming from the image sensor produced by each of the individual detectors at a particular rate, and applying an image enhancement mechanism after pre-processing and NUC filtering on the digital data. The image enhancement mechanism is configured to improve the quality of the digital data coming from the sensor and enable it to be displayed on low dynamic range monitors.
Figure 2 (to be published)
| # | Name | Date |
|---|---|---|
| 1 | 201741011803-Response to office action [01-11-2024(online)].pdf | 2024-11-01 |
| 2 | PROOF OF RIGHT [31-03-2017(online)].pdf | 2017-03-31 |
| 3 | 201741011803-PROOF OF ALTERATION [04-10-2024(online)].pdf | 2024-10-04 |
| 4 | Form 5 [31-03-2017(online)].pdf | 2017-03-31 |
| 5 | Form 3 [31-03-2017(online)].pdf | 2017-03-31 |
| 6 | 201741011803-IntimationOfGrant14-12-2023.pdf | 2023-12-14 |
| 7 | Drawing [31-03-2017(online)].pdf | 2017-03-31 |
| 8 | 201741011803-PatentCertificate14-12-2023.pdf | 2023-12-14 |
| 9 | Description(Complete) [31-03-2017(online)].pdf_316.pdf | 2017-03-31 |
| 10 | 201741011803-Written submissions and relevant documents [11-12-2023(online)].pdf | 2023-12-11 |
| 11 | Description(Complete) [31-03-2017(online)].pdf | 2017-03-31 |
| 12 | 201741011803-Correspondence to notify the Controller [24-11-2023(online)].pdf | 2023-11-24 |
| 13 | Form 26 [05-07-2017(online)].pdf | 2017-07-05 |
| 14 | 201741011803-FORM-26 [24-11-2023(online)].pdf | 2023-11-24 |
| 15 | Correspondence by Agent_Power of Attorney_14-07-2017.pdf | 2017-07-14 |
| 16 | 201741011803-US(14)-HearingNotice-(HearingDate-29-11-2023).pdf | 2023-11-06 |
| 17 | 201741011803-Response to office action [14-09-2022(online)].pdf | 2022-09-14 |
| 18 | Correspondence by Agent_As Filed_14-07-2017.pdf | 2017-07-14 |
| 19 | 201741011803-ABSTRACT [21-10-2021(online)].pdf | 2021-10-21 |
| 20 | 201741011803-FORM 18 [13-08-2018(online)].pdf | 2018-08-13 |
| 21 | 201741011803-CLAIMS [21-10-2021(online)].pdf | 2021-10-21 |
| 22 | 201741011803-FER.pdf | 2021-10-17 |
| 23 | 201741011803-COMPLETE SPECIFICATION [21-10-2021(online)].pdf | 2021-10-21 |
| 24 | 201741011803-Proof of Right [19-10-2021(online)].pdf | 2021-10-19 |
| 25 | 201741011803-DRAWING [21-10-2021(online)].pdf | 2021-10-21 |
| 26 | 201741011803-OTHERS [21-10-2021(online)].pdf | 2021-10-21 |
| 27 | 201741011803-FER_SER_REPLY [21-10-2021(online)].pdf | 2021-10-21 |
| 28 | 201741011803searchstratgyE_15-04-2021.pdf | |