Abstract: A method for reading text on distant objects in a foggy environment 800, the method comprising: capturing a plurality of running images by a capturing device 810, sampling and converting the captured running images using an analog-to-digital converter to a digital format 820, computing an average air light of the captured images in real time in order to estimate the dark channel prior 830, converting the domain and color space of the processed image from RGB to HSV 840, enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance the clarity of the text on the captured image 850, and converting the domain and color space of the enhanced image from HSV to RGB in order to view clear text on the captured image 860. Figure 8 (for publication)
DESC:Field of the invention
The present invention mainly relates to a method for reading text on distant objects in a foggy environment.
Background of the invention
Currently, video surveillance systems are being actively promoted and used in various industries and places, such as road surveillance, air surveillance, sea surveillance, residential security, and port logistics, where intelligent analysis of surveillance image content serves to track and recognize particular targets; for security systems especially, locating suspicious targets is significant.
Consider, for example, a coastal-region scenario where surveillance, with identification of the ships and boats moving around, becomes a top priority. Text on the ships becomes almost invisible to a camera, as fog and haze are a common phenomenon on land and ocean. Images and video captured in sea-based scenarios [Figure 1] are usually degraded by a turbid medium of water droplets, water vapor, and atmospheric particles. Light is scattered and absorbed by this medium before reaching the sensor, and the captured scene therefore becomes hazy. Object recognition, identification, and readability of the text carried by those objects in such hazy video become difficult, particularly in sea-based surveillance, where distant objects are represented by a small number of pixels.
Moreover, in surveillance systems the input video is compressed and transported to a remote location, typically a central monitoring station. In this case, readability of the text is affected still more because of the compression and decompression loss of the image/video data. Video affected by fog and haze needs to be enhanced to remove blur, and the contrast then has to be increased to obtain video clear enough for identification of objects.
One conventional system and method discloses a method for removing fog from images/videos independent of the density or amount of fog and free of user intervention, which may reveal the text from the fog-affected imagery that is of concern in the present work. The referenced method suggests air-light map refinement, which is difficult to implement in real-time video-based applications. The method also requires motion-vector computation, which needs multiple frames of data and thus more memory space, and which reduces throughput as well because of the increase in latency.
Another conventional system discusses a technique to remove haze from digital images based on an estimate of the light contributed by the haze, and how clearer digital images can be generated from it. An optical flow between the clearer digital images is then computed and refined, and the same optical flow is used to further clear the haze from the images in an iterative process that improves the visibility of the objects in the digital images. It also proposes estimating motion parameters across digital images, which is difficult to implement in real-time video, particularly in surveillance systems.
A further conventional system and method discloses a method and a system for adaptive image enhancement. Embodiments of this method include measuring the image quality of a pixel region in a frame of source video by applying operations based on the image classification of the frame. The method discusses how an analysis of the image's spectral histogram determines the inherent resolution of the image. Spectral measurement is performed using either band-pass filters or a Discrete Cosine Transform (DCT) in one or two dimensions.
Another conventional method discusses determining whether an input image is a foggy image. The method includes determining an average luminance grey level of the input image; performing Sobel image processing on the input image to generate a Sobel image when the average luminance grey level lies between a first and a second image average luminance; determining a first normalization value and a second normalization value of the input image; determining a mean value and a standard deviation of the Sobel image when the first and second normalization values are less than a first threshold value; and determining the input image to be a foggy image when the sum of the mean value and the standard deviation is less than a second threshold value. It also includes a cleaning method for foggy images that comprises using a stimulus value of the light chromaticity (LC) of an input image to generate an airlight luminance; using the red-green-blue (RGB) tri-stimulus values of the input image to generate an attenuation coefficient; using the airlight luminance and the attenuation coefficient to recover the input image and generate a primary reduction image; transforming the primary reduction image into the YCbCr color model and boosting its Cb and Cr values to enhance its chromaticity; redistributing the weighting of the probability density function of an RGB gray-level histogram after enhancing the chromaticity; and enhancing the luminance of the primary reduction image through histogram equalization after enhancing the chromaticity, to generate a final reduction image.
There is still a need for an invention that solves the above-defined problems and provides a method for reading text on distant objects in a foggy environment.
Summary of the Invention
An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
Accordingly, an aspect of the present invention relates to a method for reading text on distant objects in a foggy environment 800, the method comprising: capturing a plurality of running images by a capturing device 810, sampling and converting the captured running images using an analog-to-digital converter to a digital format 820, computing an average air light of the captured images in real time in order to estimate the dark channel prior 830, converting the domain and color space of the processed image from RGB to HSV 840, enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance the clarity of the text on the captured image 850, and converting the domain and color space of the enhanced image from HSV to RGB in order to view clear text on the captured image 860.
Another aspect of the present invention relates to an apparatus for reading text on distant objects in a foggy environment, the apparatus comprising: a video ADC coupled and configured to receive and convert the captured images into a digital format; an FPGA coupled and configured to receive and process the converted image in order to view clear text on the captured image, wherein the FPGA comprises a processor configured to perform the steps of: sampling and converting the captured running images using an analog-to-digital converter to a digital format, computing an average air light of the captured images in real time in order to estimate the dark channel prior, converting the domain and color space of the processed image from RGB to HSV, enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance the clarity of the text on the captured image, and converting the domain and color space of the enhanced image from HSV to RGB in order to view clear text on the captured image; and a video DAC coupled and configured to receive and convert the processed image into an analog format.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
Brief description of the drawings
The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings in which:
Figure 1 shows images and video captured in sea-based scenarios.
Figure 2 illustrates a flow diagram for identifying and reading text on smaller objects in real-time video according to an exemplary implementation of the present disclosure.
Figure 3 illustrates a flow diagram summarizing how the dark channel prior is estimated on each individual frame in the RGB domain, the domain of the image frame under consideration is converted to HSV, the H component is retained as is, gamma-curve correction is applied to the S component and CLAHE (contrast-limited adaptive histogram equalization) to the V component, followed by image reconstruction, according to an exemplary implementation of the present disclosure.
Figure 4 illustrates a flow diagram for the different steps of de-hazing using dark channel prior methods, involving air-light estimation, transmission-map estimation, and image reconstruction, according to an exemplary implementation of the present disclosure.
Figure 5 shows an example of capturing an image: typically, air molecule diameters are in the range of 10⁻⁴ µm, haze particles in the 0.01–1 µm range, and fog particles in the 1–10 µm range, according to an exemplary implementation of the present disclosure.
Figure 6 shows an example of processing in which the contrast-limitation parameter is varied to obtain contrast good enough to see the clear text on distant objects, according to an exemplary implementation of the present disclosure.
Figure 7 shows an example apparatus used for realization of the video processing method for reading the text on distant objects in a foggy environment according to an exemplary implementation of the present disclosure.
Figure 8 shows a method for reading text on distant objects in a foggy environment according to an exemplary implementation of the present disclosure.
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure.
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
Detailed description of the invention
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic is intended to provide.
Figs. 1 through 8, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way that would limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system. The terms used to describe various embodiments are exemplary. It should be understood that these are provided merely to aid the understanding of the description, and that their use and definitions in no way limit the scope of the invention. Terms such as first, second, and the like are used to differentiate between objects having the same terminology and are in no way intended to represent a chronological order, unless explicitly stated otherwise. A set is defined as a non-empty set including at least one element.
The various embodiments of the present disclosure describe a method and an apparatus for processing incoming video to read text on distant objects by removing fog and haze effects in real time, without resorting to conventional methods such as matrix inversion or motion-vector estimation, in which a minimum of two frames is required to improve the perceptual quality of the video, and a method for use in an industrial environment.
The present invention relates to a method and an apparatus for processing incoming video to read text on distant objects by removing fog and haze effects in real time, without resorting to conventional methods such as matrix inversion or motion-vector estimation, in which a minimum of two frames is required to improve the perceptual quality of the video. It is a non-automated method: user intervention is required until legible, clear text is visible to the satisfaction of the user.
Embodiments of this invention specify a pragmatic methodology and an apparatus to improve the readability of text in real-time video, treating the video as an independent sequence of images without any dependency across the sequential images except for the atmospheric conditions.
In surveillance systems, especially when reading text on distant objects is of interest, automated enhancement of the whole image may not be very useful, as the text on the object of interest occupies a relatively small area (few pixels). Moreover, automated methods for enhancing the visibility of an obscured image/video may not give enough clarity to identify smaller objects and the text overlaid on them. User intervention is required to decide when the text on an object is clear and its readability is satisfactorily acceptable.
Figure 2 illustrates a flow diagram for identifying and reading text on smaller objects in real-time video according to an exemplary implementation of the present disclosure.
The figure illustrates a flow diagram for identifying and reading text on smaller objects in real-time video, wherein the said method is applied to all the individual images at their respective frame rates, typically 40 ms per frame for PAL-standard video. The method for identifying and reading text on smaller objects comprises the following steps: capturing a plurality of running images (video) by a capturing device, removing fog and haze from the captured running images, converting the color space of the processed image from RGB to HSV, enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance the clarity of the text on the captured image, and converting the domain of the enhanced image from HSV to RGB in order to view the text clearly on the captured image. The capturing device may be an image capturing device or a video capturing device, i.e. a camera, camcorder, photographic telescope, telescopic camera, etc.
Figure 3 illustrates a flow diagram summarizing the following: estimate the dark channel prior on each individual frame in the RGB domain, convert the domain of the image frame under consideration to HSV, retain the H component as is, apply gamma-curve correction to the S component and CLAHE (contrast-limited adaptive histogram equalization) to the V component, followed by image reconstruction [Figure 3]. Unlike conventional methods of video enhancement, manual intervention is required, with varying parameters, to make the text clearer and more readable.
Figure 4 illustrates a flow diagram for different steps of de-hazing using dark channel prior methods, involving Air light estimation, Transmission map Estimation and Image reconstruction.
Figure 5 shows an example of capturing an image: typically, air molecule diameters are in the range of 10⁻⁴ µm, haze particles in the 0.01–1 µm range, and fog particles in the 1–10 µm range, according to an exemplary implementation of the present disclosure.
In the initial step, the dark channel prior is estimated by computing the atmospheric air light. Typically, air molecule diameters are in the range of 10⁻⁴ µm, haze particles in the 0.01–1 µm range, and fog particles in the 1–10 µm range [Figure 5]. Light is attenuated by these atmospheric particles, a property that can be modeled mathematically as a factor of the scattering coefficient of the atmosphere. As for air-light scattering, light reflected from the atmospheric particles is also increasingly scattered with distance; constructive interference makes the image scene brighter, and vice versa. Taking these into account, the physical model of the image seen by a sensor affected by fog or haze is written as follows.
I(x,y) = J(x,y)t(x,y) + A(1 - t(x,y)) -- (1)
t(x,y) = e^(-β·d(x,y)) -- (2)
where I is the observed image intensity, J is the fog- or haze-free image intensity, A is the global atmospheric constant, t(x,y) is the medium transmission component, β is the scattering coefficient of the atmosphere, and d(x,y) is the distance of the object from the observer.
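As an illustration of this model, the following minimal numpy sketch synthesizes a hazy observation I from a clear image J per equations (1) and (2); the values of A, β, and the depth map are hypothetical stand-ins for illustration, not values prescribed by the method.

```python
import numpy as np

def synthesize_haze(J: np.ndarray, depth: np.ndarray,
                    A: float = 0.9, beta: float = 1.2) -> np.ndarray:
    """I = J*t + A*(1 - t), with t = exp(-beta * d), equations (1)-(2)."""
    t = np.exp(-beta * depth)[..., None]   # equation (2), broadcast over RGB
    return J * t + A * (1.0 - t)           # equation (1)

# Example: a random stand-in for a fog-free frame whose (synthetic)
# depth grows from left to right, so haze thickens across the image.
J = np.random.rand(240, 320, 3)
depth = np.tile(np.linspace(0.0, 3.0, 320), (240, 1))
I = synthesize_haze(J, depth)
```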
In foggy images, the intensity of the dark pixels is contributed mainly by the air light. Therefore, the dark channel can closely provide an estimation of the atmospheric light. For an image G, the dark channel prior is calculated by applying the operator Ð.
Ð(G) = min_((k,l)∈p(x,y)) [ min_(c∈{r,g,b}) G^c(k,l) ] -- (3)
where G^c is the RGB color vector and p(x,y) is a local N×N patch centered at (x,y). Thus, the dark channel is the local minimum across the RGB color channels, which is empirically close to zero. Hence the dark channel of a fog-free image J tends to 0:
Ð(J) → 0 -- (4)
Applying the dark channel operator to equation (1):
Ð(I(x,y)) = t(x,y) Ð(J(x,y)) + A( 1-t(x,y) ) -- (5)
J is a fog-free image; hence, putting Ð(J) = 0 in equation (5) and normalizing by A gives
Ð(I_n(x,y)) = 1 - t(x,y) -- (6)
where I_n = I/A. The transmission is then estimated as
t(x,y) = 1 - Ð(I_n(x,y)) -- (7)
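As a concrete sketch of equations (3), (6), and (7), the following Python code computes the dark channel and the transmission estimate; the 15-pixel patch size is an assumption (the text only specifies a local N×N patch).

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Equation (3): per-pixel minimum over the RGB channels, followed
    by a local NxN minimum filter over the patch p(x,y)."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def estimate_transmission(img: np.ndarray, A: np.ndarray,
                          patch: int = 15) -> np.ndarray:
    """Equations (6)-(7): normalize by the air light A (I_n = I/A) and
    take one minus the dark channel of the normalized image."""
    return 1.0 - dark_channel(img / A, patch)
```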
Estimation of the fog-free image intensity J from the observed image I is limited by the absence of explicit depth information, as well as by the difficulty of automatically and accurately calculating the atmospheric constant. The atmospheric air-light component A is present in the dark channel in the form of the highest intensity values, which are usually located in the sky region of the image, and is computed as
A = I(argmax_(x,y) I_dark(x,y)) -- (8)
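A minimal sketch of equation (8), assuming the dark channel computed above; the single-brightest-pixel rule follows the equation literally (practical implementations often average a top fraction of pixels, which the text does not specify).

```python
import numpy as np

def estimate_airlight(img: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Equation (8): A is the image intensity at the pixel where the
    dark channel is brightest (usually a sky pixel)."""
    y, x = np.unravel_index(int(np.argmax(dark)), dark.shape)
    return img[y, x].astype(np.float64)   # one RGB triple
```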
Also, in transmission map estimation, t(x,y) is defined as
t(x,y) = 1 - min_((k,l)∈p(x,y)) [ min_(c∈{r,g,b}) G^c(k,l) ] -- (9)
Finally, the fog-free image J is estimated as
J(x,y) = (I(x,y) - A) / max(t(x,y), t_0) + A -- (10)
where (x,y) are the locations at which the dark channel maximum intensities are located; at t_0 = 0.1 the haze influence is at its maximum, and at t_0 = 1.0 the haze influence is at its minimum. In surveillance systems, as the air light does not change considerably with time, the present method proposes calculating a moving-average air-light value over the 16 frames preceding the single image under consideration, the atmospheric air light being calculated as the sensor samples are received by the sensor signal-processing engine in real time. The method also proposes quantizing the variable t_0 into discrete (integer-indexed) values, which reduces the fog and haze effect in the image; user intervention is required to modify the value in steps from a graphical user interface until a clearer image/video is obtained.
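The following sketch combines the 16-frame running average of the air light with the quantized-t_0 recovery of equation (10); the helper names and the linear spacing of the 16 steps are assumptions for illustration.

```python
import numpy as np
from collections import deque

class AirlightAverage:
    """Running average of the air light over the previous 16 frames,
    as proposed for surveillance video where air light changes slowly."""
    def __init__(self, window: int = 16):
        self._history = deque(maxlen=window)

    def update(self, A: np.ndarray) -> np.ndarray:
        self._history.append(np.asarray(A, dtype=np.float64))
        return np.mean(self._history, axis=0)

# t_0 quantized into 16 user-selectable steps between 0.1 and 1.0.
T0_STEPS = np.linspace(0.1, 1.0, 16)

def recover(img: np.ndarray, A: np.ndarray, t: np.ndarray,
            step: int) -> np.ndarray:
    """Equation (10): J = (I - A) / max(t, t_0) + A, with t_0 selected
    by its integer step index (the GUI control described in the text)."""
    t_clamped = np.maximum(t, T0_STEPS[step])[..., None]  # broadcast over RGB
    return (img - A) / t_clamped + A
```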
In applying the above de-hazing/de-fogging methods to the incoming image/video, the contrast gets perceivably degraded, due to which the text on the distant object becomes obscured.
To improve the visibility of text, contrast improvement using gamma correction and contrast-limited adaptive histogram equalization (CLAHE) is used. To retain the color component in the image, the processing domain of the image is changed from RGB to HSV. 'γ' gamma correction [Figure 4] is applied to the 'S' (saturation) component and CLAHE is applied to the 'V' (value) component to improve the contrast while retaining the hue component. Gamma 'γ' improves the contrast of the image's 'S' saturation component and balances it against over-saturation. Adaptive histogram equalization (AHE) improves the local contrast of the image's 'V' value component, bringing out more detail of the color values in the image based on the histogram of a local window centered at a given pixel. In applying CLAHE, the whole image is divided into 64 small contextual, non-overlapping regions, and the mapping for a given pixel is calculated as a bilinear interpolation of the mappings derived from nearby contextual regions, using a weighted sum of the mappings of the neighboring regions. CLAHE is applied to these small regions, treating them as individual images, which are then stitched back together into the whole image. The contrast of neighboring regions can be varied by the user as a GUI parameter; by varying it, the user can observe the effect of the limiting factor and see the enhancement of the text region, which occupies a small number of pixels, in order to identify the text. Image reconstruction is then completed by converting the domain back from HSV to RGB. The methods discussed in the present invention are ideal for application to all individual images in real time by a computing engine such as an FPGA/processor (Figure 7).
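A minimal OpenCV sketch of this HSV-domain contrast step, assuming an 8-bit RGB input; the 8×8 tile grid yields the 64 contextual regions, and gamma and clip_limit stand in for the user-tuned GUI parameters.

```python
import cv2
import numpy as np

def enhance_hsv(img_rgb: np.ndarray, gamma: float = 0.8,
                clip_limit: float = 2.0) -> np.ndarray:
    """Keep H as is, gamma-correct S, apply CLAHE to V over an 8x8 tile
    grid (64 regions), then convert back to RGB."""
    hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    h, s, v = cv2.split(hsv)
    # Gamma correction on the saturation component.
    s = np.clip(255.0 * (s / 255.0) ** gamma, 0, 255).astype(np.uint8)
    # CLAHE on the value component only; hue is left untouched.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    v = clahe.apply(v)
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2RGB)
```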
In Figure 4, the method proposed is for reading text on distant objects, particularly in a fog-affected environment. The incoming video stream is sampled using an appropriate analog-to-digital converter, and the buffered samples are fed to a real-time engine such as an FPGA.
In one embodiment, the proposed method for reading text on distant objects, particularly in a fog-affected environment, comprises the following steps: computing the air light for the samples in real time as they are acquired in the hardware; averaging the air light computed over every 16 frames of the acquired video; estimating the dark channel prior for the acquired frame; varying the parameter t_0 from 0.1 to 1.0 in steps of 1/16 and checking the removal of the fog effect from the image reconstructed by the above methods; converting the domain of the image from RGB to HSV; retaining the hue 'H' component of the whole image as is (no modification); and applying 'γ' gamma correction to the 'S' component of the whole image, with the curve seed varied from the GUI (user intervention). Further, the whole image is divided into 64 smaller regions (sub-blocks) without overlap, the CLAHE method step is applied to each individual sub-block (with a 3×3 pixel region as a sub-block) on the value components of the sub-blocks only, and the contrast-limitation parameter is varied so as to obtain contrast good enough to see the clear text on the distant objects [Figure 6].
The small sub-blocks processed in the above steps are stitched into the larger image of interest, in which the text on the smaller objects becomes visible under user intervention as the required parameters are varied.
The above-mentioned cascaded procedures/methods for reading text on distant objects in fog-affected scenarios can be applied to all videos in real time for various formats, such as PAL and high definition, since complex mathematical computations such as matrix inversion and motion vectors are not required; a per-frame sketch follows below.
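Putting the earlier sketches together, a hypothetical per-frame driver for the cascaded pipeline; it assumes the helpers dark_channel, estimate_airlight, estimate_transmission, recover, AirlightAverage, and enhance_hsv defined above are in scope, and the parameter values are user-tuned assumptions.

```python
import numpy as np

def process_frame(frame_rgb: np.ndarray, airlight_avg: AirlightAverage,
                  t0_step: int = 2, gamma: float = 0.8,
                  clip_limit: float = 2.0) -> np.ndarray:
    """One pass over a single frame; nothing is carried across frames
    except the 16-frame air-light running average."""
    img = frame_rgb.astype(np.float64) / 255.0
    dark = dark_channel(img)                               # equation (3)
    A = airlight_avg.update(estimate_airlight(img, dark))  # eq. (8), averaged
    t = estimate_transmission(img, A)                      # equations (6)-(7)
    dehazed = np.clip(recover(img, A, t, t0_step), 0.0, 1.0)  # equation (10)
    return enhance_hsv((dehazed * 255.0).astype(np.uint8),    # HSV contrast
                       gamma, clip_limit)                     # step, Figure 3
```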
In another embodiment, the video processing method for reading text on distant objects in a foggy environment comprises the following: the methods and sequence of method steps as per Figure 3 are applied to all individual images of the video, without any dependency between them.
The video acquired from the sampler is in the RGB domain, and the air light of the image under consideration is computed in real time, as the samples are obtained, to estimate the dark channel prior. Estimating the dark channel prior on the raw image in the RGB domain involves transmission-map estimation.
An average of the air-light values computed over the past 16 frames is stored for use with the present frame under consideration. The value of the variable parameter t_0 is varied from 0.1 to 1.0 in 16 steps by the user to obtain the optimal fog-removal experience, as per the dark channel prior equation.
In another embodiment, the video processing method for reading text on distant objects in a foggy environment further comprises: the fog-free image produced by the above process, represented in the RGB domain, is converted into the HSV domain.
Contrast enhancement then proceeds as per Figure 3, in which the 'H' component of the image pixels is retained as is, with no further processing.
'γ' gamma correction is applied to the 'S' component of all pixels of the fog-free image, and the seed value of 'γ' is varied by the user from a chosen interface mode (graphical user interface).
The 'V' component of the whole image is divided into 64 sub-blocks, and the CLAHE method is applied to each of the sub-blocks individually.
The parameter of contrast enhancement is varied by the user from a chosen interface mode (Graphical User Interface).
The final image is stitched together from all the sub-blocks.
In another embodiment, the video processing method for reading text on distant objects in a foggy environment further comprises: the domain of the image so enhanced is converted into RGB again, and the images so formed are sent sequentially to form video in which the text is visible for the user's acceptance.
Figure 7 shows an example apparatus used for realization of the video processing method for reading the text on distant objects in a foggy environment according to an exemplary implementation of the present disclosure.
The figure shows an example apparatus used for realization of the video processing method for reading text on distant objects in a foggy environment. The apparatus comprises: a video ADC coupled and configured to receive and convert the captured images into a digital format; an FPGA coupled and configured to receive and process the converted image in order to view clear text on the captured image, wherein the FPGA comprises a processor configured to perform the steps of: sampling and converting the captured running images using an analog-to-digital converter to a digital format, computing an average air light of the captured images in real time in order to estimate the dark channel prior, changing the color space of the processed image from RGB to HSV, enhancing the contrast of the converted (HSV) image by the contrast-enhancement module in order to enhance the clarity of the text on the captured image, and changing the color space of the enhanced image back from HSV to RGB in order to view clear text on the captured image; and a video DAC coupled and configured to receive and convert the processed image into an analog format.
In the apparatus of the present invention, the ADC is a video analog-to-digital converter with a maximum sampling frequency of about 27 MHz and a width of about 16 bits, its interface with the FPGA being developed in the VHDL language. An SRAM is a static memory with a capacity of about 1 MB, such that a total image frame can be stored and retrieved within a time frame of about 40 ms. A DPRAM is a dual-port RAM with a depth of about 2 MB and a width of about 24 bits, to store and retrieve RGB color-space image data simultaneously. The video DAC is a video digital-to-analog converter with a maximum sampling frequency of about 27 MHz and a depth of 30 bits. All the drivers, including the serial port, are developed in VHDL code inside the FPGA to interface with the above chipset on the PCB. The serial port uses the RS232 protocol for its interface.
Figure 8 shows a method for reading text on distant objects in a foggy environment according to an exemplary implementation of the present disclosure.
The method for reading text on distant objects in a foggy environment 800 comprises: capturing a plurality of running images by a capturing device 810, sampling and converting the captured running images using an analog-to-digital converter to a digital format 820, computing an average air light of the captured images in real time in order to estimate the dark channel prior 830, converting the domain and color space of the processed image from RGB to HSV 840, enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance the clarity of the text on the captured image 850, and converting the domain and color space of the enhanced image from HSV to RGB in order to view clear text on the captured image 860.
In another embodiment, estimating the dark channel prior comprises the following steps: computing an air-light value for each frame during the capture phase of the image, storing a running average of the air-light value over 16 frames in sequence, estimating a transmission map of the image frame, and removing the effect of fog and haze in a quantized number of 16 steps to make it a fog-free image and estimate the dark channel prior.
In another embodiment, enhancing the contrast of the converted (HSV) image by the contrast-enhancing module comprises the following steps: retaining the hue 'H' component for the whole image; applying 'γ' gamma correction to the 'S' (saturation) component, wherein applying the gamma correction 'γ' improves the contrast of the image's 'S' saturation component and balances it against over-saturation; dividing the 'V' component into sub-blocks and applying contrast-limited adaptive histogram equalization (CLAHE) to the 'V' value component to improve the contrast of the image's 'V' value component while retaining the hue component; and stitching the sub-blocks in order to form the final (HSV) image.
The method steps of the present invention are applied to all the individual images at their respective frame rates, typically about 40 ms per frame for PAL-standard video. In another embodiment, the method steps can be applied to all video formats, such as PAL and high-definition video. The method steps are applied to all individual images of the video (running images) without any dependency.
The parameters of contrast enhancement are varied by the user from an interface mode (graphical user interface). The captured images are processed and sent sequentially to form video (running images), in which the reconstructed text is visible for the user's acceptance. The method for reading text on distant objects in a foggy environment is applicable to all video formats, such as PAL and high-definition video.
Those skilled in this technology can make various alterations and modifications without departing from the scope and spirit of the invention. Therefore, the scope of the invention shall be defined and protected by the following claims and their equivalents.
FIGS. 1-8 are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated, while others may be minimized. FIGS. 1-8 illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively.
CLAIMS:
1. A method for reading text on distant objects in a foggy environment, the method comprising:
capturing a plurality of running images by a capturing device 810;
sampling and converting the captured running images using an analog-to-digital converter to a digital format 820;
computing an average air light of the captured images in real time in order to estimate the dark channel prior 830;
converting a domain and color space of the processed image from RGB to HSV 840;
enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance clarity of the text on the captured image 850; and
converting the domain and color space of the enhanced image from HSV to RGB in order to view clear text on the captured image 860.
2. The method as claimed in claim 1, wherein estimating the dark channel prior comprises the steps of:
computing an air-light value for each frame during the capture phase of the image;
storing a running average of the air-light value over 16 frames in sequence;
estimating a transmission map of the image frame; and
removing the effect of fog and haze in a quantized number of 16 steps to make it a fog-free image and estimate the dark channel prior.
3. The method as claimed in claim 1, wherein enhancing the contrast of the converted (HSV) image by the contrast-enhancing module comprises the steps of:
retaining the hue 'H' component for the whole image;
applying 'γ' gamma correction to the 'S' (saturation) component, wherein applying the gamma correction 'γ' improves the contrast of the image's 'S' saturation component and balances it against over-saturation;
dividing the 'V' component into sub-blocks and applying contrast-limited adaptive histogram equalization (CLAHE) to the 'V' value component to improve the contrast of the image's 'V' value component while retaining the hue component; and
stitching the sub-blocks in order to form a final (HSV) image.
4. The method as claimed in claim 1, wherein the method steps are applied to all the individual images at their respective frame rates, typically about 40 ms per frame for PAL-standard video.
6. The method as claimed in claim 1, wherein the method steps are applied to all individual images of the video (running images) without any dependency.
7. The method as claimed in claim 1, wherein the parameters of contrast enhancement are varied by user from an interface mode (Graphical User Interface).
8. The method as claimed in claim 1, wherein the captured images are processed and sent sequentially to form video (running images), in which the reconstructed text is visible for the user's acceptance.
9. The method as claimed in claim 1, wherein the method for reading text on distant objects in a foggy environment is applied to all video formats, such as PAL and high-definition video.
10. An apparatus for reading text on distant objects in a foggy environment, the apparatus comprising:
a video ADC coupled and configured to receive and convert the captured images into a digital format;
an FPGA coupled and configured to receive and process the converted image in order to view clear text on the captured image, wherein the FPGA comprises a processor configured to perform the steps of:
sampling and converting the captured running images using an analog-to-digital converter to a digital format;
computing an average air light of the captured images in real time in order to estimate the dark channel prior;
converting a domain and color space of the processed image from RGB to HSV;
enhancing the contrast of the converted (HSV) image by a contrast-enhancing module in order to enhance clarity of the text on the captured image; and
converting the domain and color space of the enhanced image from HSV to RGB in order to view clear text on the captured image;
and a video DAC coupled and configured to receive and convert the processed image into an analog format.