
System And Method For Video De Fogging

Abstract: The present disclosure relates to a system (100) for de-fogging video, the system includes a processor (104) operatively coupled to an image capture device (102), the processor configured to receive one or more frames of a physical scene, analyze at least one frame of the one or more frames to extract de-fogging parameters of the at least one frame, acquire a de-fogged frame by removing fog from at least one frame using a de-fogging filter, compare the extracted de-fogging parameters of the at least one frame with consecutive frames to determine correlation, wherein, based on the determination of correlation of the at least one frame with the consecutive frames, the processor is configured to remove fog from the consecutive frames to facilitate in increasing speed of de-fogging the one or more frames.


Patent Information

Application #: 202111029841
Filing Date: 02 July 2021
Publication Number: 08/2023
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: info@khuranaandkhurana.com
Parent Application:

Applicants

Chitkara Innovation Incubator Foundation
SCO: 160-161, Sector - 9c, Madhya Marg, Chandigarh- 160009, India.

Inventors

1. KANSAL, Isha
Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh-Patiala National Highway, Village Jansla, Rajpura, Punjab - 140401, India.
2. POPLI, Renu
Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh-Patiala National Highway, Village Jansla, Rajpura, Punjab - 140401, India.
3. GOYAL, Nitin
Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh-Patiala National Highway, Village Jansla, Rajpura, Punjab - 140401, India.
4. GEETANJALI
Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh-Patiala National Highway, Village Jansla, Rajpura, Punjab - 140401, India.
5. PRASAD, Devender
Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh-Patiala National Highway, Village Jansla, Rajpura, Punjab - 140401, India.
6. SINGH, Dinesh Kumar
Computer Science and Engineering Department, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Sonipat, Haryana - 131039, India.
7. GARG, Kanwal
Department of Computer Science & Applications, Kurukshetra University, Kurukshetra - 136118, Haryana, India.

Specification

The present disclosure relates, in general, to image processing, and more
specifically, relates to a system and method for de-fogging video.
BACKGROUND
[0002] The visibility and colour of an image are usually degraded by fog in the atmosphere. Videos captured in foggy weather suffer from poor visibility and low quality, and extracting information from such videos is very difficult. It is therefore usually necessary to improve the quality of images and videos captured in such weather by de-fogging. The process of removing fog from an image or video is called de-fogging.
[0003] Currently, there are many image de-fogging methods known in the art. However, these existing methods suffer from several limitations: flickering may occur in videos when there is a scene change, estimating the de-fogging parameters independently for each frame is computationally expensive, and fine texture details are not preserved. The defect of the existing methods lies in the existing filters, which cannot produce a de-fogged image or video of good effect.
[0004] Therefore, there is a need in the art to provide a means that can improve
the quality and speed of the de-fogged image/video.
OBJECTS OF THE PRESENT DISCLOSURE
[0005] An object of the present disclosure relates, in general, to image
processing, and more specifically, relates to a system and method for de-fogging video.
[0006] Another object of the present disclosure is to provide a system that enables a faster way of recovering foggy videos, wherein the information from previous frames is directly used to de-fog the consecutive frames.
[0007] Another object of the present disclosure is to provide a system that
enhances the de-fogging performance in terms of spatial and temporal coherence, and reduces the execution time.

[0008] Another object of the present disclosure is to provide a system that
reduces computational complexity.
[0009] Another object of the present disclosure is to provide a system to improve
frame rate.
[0010] Yet another object of the present disclosure is to provide a system that
provides edge preservation property of gradient domain guided filter (GDGF), which
produces better quality de-fogged images.
SUMMARY
[0011] The present disclosure relates, in general, to image processing, and more
specifically, relates to a system and method for de-fogging video.
[0012] In an aspect, the present disclosure provides a system for de-fogging one
or more frames, the system includes an image capture device configured to capture the one or more frames of a physical scene, the one or more frames pertaining to foggy frames, a processor operatively coupled to the image capture device, the processor coupled with a memory, said memory storing instructions executable by the processor to receive, from the image capture device, one or more frames of the physical scene, analyze at least one frame of the one or more frames to extract de-fogging parameters of the at least one frame, acquire a de-fogged frame by removing fog from at least one frame of the one or more frames using a de-fogging filter, compare the extracted de-fogging parameters of the at least one frame with consecutive frames to determine correlation of the at least one frame of the one or more frames with the consecutive frames, wherein, based on the determination of correlation of the at least one frame with the consecutive frames, the processor is configured to remove fog from the consecutive frames to facilitate in increasing speed of de-fogging the one or more frames.
[0013] According to an embodiment, the de-fogging parameters pertain to any or a combination of intensity values, atmosphere light values and transmission values.

[0014] According to an embodiment, the processor configured to extract the de-
fogging parameters from each of the consecutive frames, upon the determination of
uncorrelated set of parameters of the at least one frame with the consecutive frames.
[0015] According to an embodiment, the processor configured to acquire de-
fogged frames by removing fog from the consecutive frames.
[0016] According to an embodiment, the processor configured to convert the de-
fogged frames into a video.
[0017] According to an embodiment, the de-fogging filter is a gradient domain
guided filter (GDGF).
[0018] According to an embodiment, the GDGF filter applies edge attentive
parameters on the one or more frames.
[0019] According to an embodiment, the processor operatively coupled to a
learning engine, the learning engine is trained to detect the de-fogging parameters in the one or more frames.
[0020] According to an embodiment, the image capture device comprises camera
and any combination thereof.
[0021] In an aspect, the present disclosure provides a method for de-fogging, the
method includes obtaining, at an image capture device, one or more frames of a physical scene, the one or more frames pertaining to foggy frames, receiving, at a computing device, from the image capture device, one or more frames of the physical scene, analyzing, at the computing device, at least one frame of the one or more frames to extract de-fogging parameters of the at least one frame, acquiring, at the computing device, a de-fogged frame by removing fog from at least one frame of the one or more frames using a de-fogging filter; and comparing, at the computing device, the extracted de-fogging parameters of the at least one frame with consecutive frames to determine correlation of the at least one frame of the one or more frames with the consecutive frames, wherein, based on the determination of correlation of the at least one frame with the consecutive frames, the computing device is configured to remove fog from

the consecutive frames to facilitate in increasing speed of de-fogging the one or more
frames.
[0022] Various objects, features, aspects, and advantages of the inventive subject
matter will become more apparent from the following detailed description of preferred
embodiments, along with the accompanying drawing figures in which like numerals
represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The following drawings form part of the present specification and are
included to further illustrate aspects of the present disclosure. The disclosure may be better understood by reference to the drawings in combination with the detailed description of the specific embodiments presented herein.
[0024] FIG. 1 illustrates an exemplary representation of a system for video de-fogging, in accordance with an embodiment of the present disclosure.
[0025] FIG. 2 illustrates an exemplary representation of a method for video de-fogging, in accordance with an embodiment of the present disclosure.
[0026] FIG. 3 illustrates an exemplary computer system in which or with which
embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION
[0027] The following is a detailed description of embodiments of the disclosure
depicted in the accompanying drawings. The embodiments are in such detail as to
clearly communicate the disclosure. If the specification states a component or feature
"may", "can", "could", or "might" be included or have a characteristic, that particular
component or feature is not required to be included or have the characteristic.
[0028] As used in the description herein and throughout the claims that follow,
the meaning of "a," "an," and "the" includes plural reference unless the context clearly

dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0029] The present disclosure relates, in general, to image processing, and more
specifically, relates to a system and method for de-fogging video. The present disclosure relates to a smart device that can capture video as an input, convert the input video to frames, efficiently apply a de-fogging algorithm and provide the de-fogged video frames. The present disclosure can be described in enabling detail in the following examples, which may represent more than one embodiment of the present disclosure.
[0030] FIG. 1 illustrates an exemplary representation of a system for video de-fogging, in accordance with an embodiment of the present disclosure.
[0031] Referring to FIG. 1, the system 100 is configured to de-fog video frames, and the system 100 can include an image capture device 102, a processor 104, and a memory 106. The image capture device 102 can capture video as an input, and the processor 104 can convert the input video to frames, apply the de-fogging algorithm efficiently and provide the de-fogged video frames. The present disclosure provides a fast Gradient Domain Guided Filter (GDGF) to make the de-fogging process faster and the results visually appealing.
[0032] In an exemplary embodiment, the image capture device 102 can be a
camera. The image capture device 102 is configured to capture the one or more frames (also referred to as frames, herein) of a physical scene, the frames pertaining to foggy frames. For example, the foggy video can be captured from a camera during foggy weather or obtained from an already existing foggy video.
[0033] In an embodiment, the processor 104 is operatively coupled to the image capture device 102, and the processor 104 is coupled with the memory 106, the memory 106 storing instructions executable by the processor 104 to receive, from the image capture device 102, the one or more frames of the physical scene. The processor 104 can convert the video to frames and store them in the storage device/memory 106. The processor 104 can pick the foggy video frames from the storage device 106 and apply a first algorithm which can internally call a second algorithm to convert the foggy frames to de-fogged frames. Finally, the processor 104 can convert the de-fogged frames to a de-fogged video.
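Purely by way of illustration, the video-to-frames and frames-to-video steps of this workflow could be handled with a general-purpose library such as OpenCV; the disclosure does not prescribe any particular library, so the following Python sketch, including its function names, is an assumption rather than part of the specification:

```python
import cv2

def video_to_frames(path):
    """Read a (foggy) input video and return its frames as a list of BGR images."""
    capture = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames

def frames_to_video(frames, path, fps=25.0):
    """Write the de-fogged frames back out as a video file."""
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
```

In such a sketch, the de-fogging algorithms described below would be applied to each element of the returned frame list before the frames are reassembled into a video.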
[0034] The processor 104 can analyze at least one frame of one or more frames
to extract de-fogging parameters of the at least one frame. The de-fogging parameters pertain to any or a combination of intensity values, atmosphere light values and transmission values. The processor 104 is configured to acquire a de-fogged frame by removing fog from at least one frame of the one or more frames using a de-fogging filter. The processor 104 can compare the extracted de-fogging parameters of at least one frame with consecutive frames to determine the correlation of the at least one frame of the one or more frames with the consecutive frames, where, based on the determination of correlation of the extracted de-fogging parameters of the at least one frame with the consecutive frames, the processor 104 is configured to remove the fog from the consecutive frames to increase the speed of de-fogging the one or more frames.
[0035] In another embodiment, the processor 104 configured to extract the de-
fogging parameters from each of the consecutive frames, upon determination of an uncorrelated set of parameters of at least one frame with the consecutive frames. The processor 104 configured to acquire de-fogged frames by removing fog from the consecutive frames. The processor 104 configured to convert the de-fogged frames into a de-fogged video.
[0036] In another embodiment, the de-fogging filter can be a gradient domain
guided filter (GDGF), where the GDGF filter applies edge attentive parameters on the
one or more frames. The processor 104 operatively coupled to a learning engine, the
learning engine is trained to detect the de-fogging parameters in one or more frames.
[0037] For example, the first video frame is extracted and considered as a reference frame. Parameters are calculated using the reference frame. The histogram-based correlation is checked between upcoming frames and the reference frame. If they are correlated, the previous parameters are used to de-fog the upcoming frames; if they are uncorrelated, new parameters are estimated for the upcoming frames. This improves the spatial/temporal coherence of the video and reduces the computational complexity.
[0038] In an implementation, the basic model of image de-fogging is:

Ic(x) = Lc0(x) · t(x) + LcA · (1 - t(x))     (1)

where x denotes the image spatial coordinate and c ∈ {R, G, B} denotes the (Red, Green, Blue) colour channels of the image;
Lc0 is the fog-free image which is to be recovered from the foggy image;
Ic is the fog-degraded image;
β represents the scattering coefficient corresponding to a particular medium (β = 1.0 is considered in this work);
d denotes the depth or distance of an image point at x with respect to the camera;
LcA is the atmospheric light, which is considered constant throughout the image; and
t = e^(-β·d(x)) represents the transmission map, which is inversely related to d.
By using the foggy image Ic, we can recover the fog-free image Lc0 with the help of the above equation. Therefore, our main task is to find the parameters d (or t) and LcA from the foggy image Ic.
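For illustration only, equation (1) and the depth-to-transmission relation can be expressed as a short NumPy sketch; the array shapes and function names are assumptions, not part of the disclosure:

```python
import numpy as np

def transmission_from_depth(d, beta=1.0):
    """Transmission map t = exp(-beta * d) for a depth map d (beta = 1.0 in this work)."""
    return np.exp(-beta * d)

def apply_fog(L0, t, A):
    """Equation (1): Ic(x) = Lc0(x) * t(x) + LcA * (1 - t(x)).
    L0 is the fog-free image (H x W x 3), t the transmission map (H x W),
    and A the per-channel atmospheric light (length-3 vector)."""
    return L0 * t[..., None] + A * (1.0 - t[..., None])
```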
[0039] Algorithm 1: Algorithm for de-fogging video frames is as follows:
Pick the first video frame Fr1 and obtain its histogram H1 by using Hn(rj) = numj, where numj is the number of pixels at the j-th intensity level rj and n denotes the n-th frame. Acknowledge Fr1 as the background image Bgk initially and set k = 1. With the help of Bgk, find the initial and refined depth values dg,k and d'k respectively. Also obtain the atmospheric light LcA,k by using this video frame with the help of Algorithm 2. For n = 2 to N, repeat the steps below:
a. Pick the n-th frame Frn of the foggy video and find its histogram Hn.
b. Find corrn,k between the histograms of the current frame, Hn, and the background frame, Hk, by using equation (2):

corrn,k = Σm (Hn(m) - H̄n)(Hk(m) - H̄k) / √[ Σm (Hn(m) - H̄n)² · Σm (Hk(m) - H̄k)² ]     (2)

where the sums run over m = 1, ..., ng, ng is the number of grey levels, and H̄n and H̄k denote the mean values of the histograms Hn and Hk.
c. corrn,k is then rounded off to three decimal places. If the value of corrn,k = 1, set its value to corrn,k = 0.999.
d. Obtain the current frame n parameters dg,n, d'n and LcA,n as follows:
i. Compare the corrn,k and corrn-1,k values.
ii. When the coefficient values are the same at the first decimal place (for example 0.996 and 0.997), directly move to the next step (iii). If not (for example 0.896 and 0.997), frames n and k are uncorrelated. In this case, the initial depth, refined depth and global atmospheric light are obtained by using the current frame n. Revise the background image frame by Bgk = Frn, and set k = n, dg,k = dg,n, d'k = d'n and LcA,k = LcA,n. Obtain the de-fogged frame Fdn from equation (5) and go to step d.i.
iii. Compare the correlation at the second decimal place. In case the values are also the same (for example 0.996 and 0.997), move to the next step (iv). If not (for example 0.996 and 0.957), both frames are acknowledged as weakly correlated frames. In this case, the previous window-based depth map is retained as such, i.e. dg,n = dg,k, and d'n is obtained by applying the guided filter on dg,n using the current frame Frn as the reference image; for this case, LcA,n = LcA,k. Obtain the de-fogged frame Fdn from equation (5) and go back to step d.i.
iv. Whether or not the third decimal place of the correlation coefficients is the same (for example 0.996 and 0.997), these are the least significant digits, so both frames are treated as correlated. In this case, parameters are not required to be estimated for the n-th frame. Directly set dg,n = dg,k, d'n = d'k and LcA,n = LcA,k, and obtain the de-fogged video frame Fdn from equation (5).
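A condensed Python sketch of Algorithm 1 is given below. It assumes hypothetical helper functions estimate_parameters (Algorithm 2), refine_depth (the guided-filter refinement of step d.iii) and defog (equation (5)); the histogram correlation of equation (2) is computed with NumPy, and the decimal-place tests are one straightforward reading of steps d.ii to d.iv:

```python
import numpy as np

def histogram(frame, bins=256):
    """Grey-level histogram Hn of a frame, Hn(rj) = numj."""
    grey = frame.mean(axis=2) if frame.ndim == 3 else frame
    hist, _ = np.histogram(grey, bins=bins, range=(0, 255))
    return hist.astype(np.float64)

def correlation(h1, h2):
    """Normalized histogram correlation of equation (2), rounded to 3 decimals (step c)."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    corr = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)
    return min(round(corr, 3), 0.999)

def defog_video(frames, estimate_parameters, refine_depth, defog):
    """Algorithm 1: reuse the de-fogging parameters of the background frame for
    correlated frames, and re-estimate them only when the correlation drops."""
    bg_hist = histogram(frames[0])
    d_g, d_refined, A = estimate_parameters(frames[0])   # Algorithm 2 on the first frame
    out, prev_corr = [defog(frames[0], d_refined, A)], 0.999
    for frame in frames[1:]:
        corr = correlation(histogram(frame), bg_hist)
        if int(corr * 10) != int(prev_corr * 10):        # differ at 1st decimal: uncorrelated
            d_g, d_refined, A = estimate_parameters(frame)
            bg_hist = histogram(frame)                   # revise the background frame
        elif int(corr * 100) != int(prev_corr * 100):    # differ at 2nd decimal: weakly correlated
            d_refined = refine_depth(d_g, frame)         # refine the retained depth map only
        # otherwise the frames are treated as correlated and all parameters are reused
        out.append(defog(frame, d_refined, A))
        prev_corr = corr
    return out
```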
[0040] Algorithm 2: Algorithm for de-fogging single image or video frame.
i. Obtain the brightness component V and the saturation component S from the RGB foggy image Ic, where c ∈ {R, G, B}.
To calculate the V component, follow the procedure:
• First identify the normalized Red, Green and Blue values (0 to 1, typically IR/255, IG/255 and IB/255).
• Identify the maximum (max) and minimum (min) of these.
• Calculate Luminosity (V) = (max - min). If max = 0 then the colour is black and the Luminosity is zero.
To calculate the S component, follow the procedure:
• First identify the normalized Red, Green and Blue values (0 to 1, typically IR/255, IG/255 and IB/255).
• Identify the maximum (max) and minimum (min) of these.
• If max = 0 then the colour is black and the Saturation is zero.
• Calculate the Luminosity (V) as (max - min). The Luminosity of a pixel is the range between the minimum and maximum values of Red, Green and Blue.
• If the Luminosity is less than 0.5 then Saturation = (max - min) / (max + min).
• If the Luminosity is greater than 0.5 then Saturation = (max - min) / (2 - max - min).
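A vectorized NumPy sketch of the brightness and saturation computation of step i is given below; it follows the procedure exactly as written above (in particular, Luminosity V is taken as max - min), and the function name is illustrative only:

```python
import numpy as np

def brightness_saturation(img):
    """Step i: brightness V and saturation S from an 8-bit RGB foggy image Ic."""
    norm = img.astype(np.float64) / 255.0          # normalized R, G, B values in [0, 1]
    mx, mn = norm.max(axis=2), norm.min(axis=2)
    V = mx - mn                                    # Luminosity as defined in step i
    S = np.zeros_like(V)                           # stays zero where max = 0 (black pixels)
    lo = (V < 0.5) & (mx > 0)                      # "Luminosity less than 0.5" case
    hi = (V >= 0.5) & (mx > 0)                     # "Luminosity greater than 0.5" case
    S[lo] = (mx[lo] - mn[lo]) / (mx[lo] + mn[lo])
    S[hi] = (mx[hi] - mn[hi]) / (2.0 - mx[hi] - mn[hi])
    return V, S
```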
ii. Find the initial depth map d with the CAP technique [3] from V and S, by putting
θ0 = 0.121779, θ1 = 0.959710, θ2 = 0.780245 and σ = 0.041337
in d(x) = θ0 + θ1·V(x) + θ2·S(x) + ε(x)     (3)
where θ0, θ1 and θ2 are the linear coefficients, and ε(x) denotes the random normal error of the model (3), having standard deviation σ.
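For illustration, the colour attenuation prior (CAP) estimate of equation (3) reduces to one line of NumPy; the coefficient values are taken exactly as quoted above, and the random error term ε(x) is omitted as it would be at estimation time:

```python
import numpy as np

# Linear coefficients of equation (3), as quoted in this specification.
THETA_0, THETA_1, THETA_2 = 0.121779, 0.959710, 0.780245

def initial_depth(V, S):
    """Equation (3): initial depth map d(x) = theta0 + theta1*V(x) + theta2*S(x)."""
    return THETA_0 + THETA_1 * np.asarray(V) + THETA_2 * np.asarray(S)
```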
iii. Obtain the window-based depth map dg from d by using the procedure described as follows:
• The initial depth map d obtained above, having size M × N, is divided into non-overlapping sb blocks of fixed size (B1, B2, ..., Bi, ..., Bsb). Here the size of the blocks is taken as 5 × 5.
• Then, a down-sampled depth image dds is created in such a way that each pixel in dds is obtained from each block Bi by taking the minimum in the respective block, as
dds(x) = min(Bi); i = 1, 2, 3, ..., sb
• The number of pixels in dds is equal to sb. Here min specifies the mathematical operation of finding the minimum depth value in a block Bi.
• After obtaining dds, its window-based dark image ddsdark is estimated as
ddsdark(x) = min y∈Ω(x) (dds(y))     (4)
where Ω(x) is a window of size 3 × 3.
• After this, nearest-neighbour up-sampling of ddsdark is performed to obtain the equivalent window-based depth map dg.
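Step iii can be sketched as follows, assuming OpenCV is available; erosion with a 3 × 3 kernel realises the windowed minimum of equation (4), and nearest-neighbour resizing performs the final up-sampling (block handling at image borders is simplified in this sketch):

```python
import numpy as np
import cv2

def window_based_depth(d, block=5):
    """Step iii: obtain the window-based depth map dg from the initial depth map d."""
    h, w = d.shape
    hb, wb = h // block, w // block
    # Down-sample: each pixel of d_ds is the minimum of one non-overlapping 5x5 block.
    d_ds = d[:hb * block, :wb * block].reshape(hb, block, wb, block).min(axis=(1, 3))
    # Windowed "dark" depth of equation (4): minimum over a 3x3 neighbourhood.
    d_dark = cv2.erode(d_ds.astype(np.float32), np.ones((3, 3), np.uint8))
    # Nearest-neighbour up-sampling back to the original size gives dg.
    return cv2.resize(d_dark, (w, h), interpolation=cv2.INTER_NEAREST)
```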
iv. Now, this depth map is further refined by using the Gradient Domain Guided Filter (GDGF) [4] to obtain d'.
v. After obtaining the refined depth map, the transmission is obtained using t = e^(-β·d'(x)), where β = 1.0.
vi. The top 25% rows of depth map dg are extracted. The values of R, G, B color channels
in foggy image Ic corresponding to top 0.1% brightest pixels in dg are selected as the
value of global atmospheric light LCA.
vii. Obtain the de-fogged image Lc0 from Ic, LcA and t by using
Lc0(x) = (Ic(x) - LcA) / min{max{t(x), 0.1}, 1} + LcA     (5)
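A sketch of steps v to vii follows. The guided-filter refinement of step iv is assumed to be performed elsewhere (for example, the guidedFilter of the opencv-contrib ximgproc module could serve as a stand-in for the GDGF of [4]); the atmospheric-light selection averages the chosen pixels, and the recovery follows equation (5) as reconstructed above, clamping the transmission to [0.1, 1]:

```python
import numpy as np

def atmospheric_light(img, d_g, top_rows=0.25, top_pixels=0.001):
    """Step vi: estimate LcA from the foggy image pixels whose depth values are
    among the top 0.1% within the top 25% rows of the depth map dg."""
    rows = int(img.shape[0] * top_rows)
    region = img[:rows].reshape(-1, 3).astype(np.float64)
    depth = d_g[:rows].ravel()
    k = max(1, int(depth.size * top_pixels))
    idx = np.argsort(depth)[-k:]           # indices of the largest depth values
    return region[idx].mean(axis=0)        # averaged here; the selection rule is step vi

def defog(img, d_refined, A, beta=1.0, t0=0.1):
    """Steps v and vii: transmission t = exp(-beta * d'), then recovery per equation (5)."""
    t = np.clip(np.exp(-beta * d_refined), t0, 1.0)
    L0 = (img.astype(np.float64) - A) / t[..., None] + A
    return np.clip(L0, 0, 255).astype(np.uint8)
```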
[0041] For example, the foggy video is either captured from a camera during foggy weather or an already existing foggy video is taken. Then frames are extracted from the video. Initially, the first frame is considered, the de-fogging parameters in Equation (1) are calculated, and this frame is de-fogged using the second algorithm. This frame is treated as the reference frame. Then the next frames are picked and their correlation with the reference frame is found. If they are correlated, the parameters in Equation (1) are directly picked from the reference frame and just the de-fogging is performed by using Equation (5); otherwise the parameters are calculated again and the reference frame is updated. All the frames are then de-fogged and finally the de-fogged frames are converted to a video.

[0042] The embodiments of the present disclosure described above provide
several advantages. One or more of the embodiments provide the system 100 that enables a faster way of recovering foggy videos, wherein the information from previous frames is directly used to de-fog the consecutive frames. The system 100 enhances the de-fogging performance in terms of spatial and temporal coherence, reduces the execution time and reduces computational complexity. The present disclosure can improve the frame rate and can provide the edge preservation property of the GDGF, which produces better quality de-fogged images.
[0043] FIG. 2 illustrates an exemplary representation of a method for video de-fogging, in accordance with an embodiment of the present disclosure.
[0044] The method 200 can be implemented using a computing device, which
can include one or more processors 104. The method 200 includes: at block 202, the image capture device can obtain one or more frames of a physical scene, the one or more frames pertaining to foggy frames.
[0045] At block 204, the computing device can receive from the image capture
device, one or more frames of the physical scene. At block 206, the computing device can analyze at least one frame of the one or more frames to extract de-fogging parameters of the at least one frame.
[0046] At block 208, the computing device can acquire a de-fogged frame by
removing fog from at least one frame of the one or more frames using a de-fogging filter. At block 210, the computing device can compare the extracted de-fogging parameters of the at least one frame with consecutive frames to determine correlation of the at least one frame of the one or more frames with the consecutive frames, wherein, based on the determination of correlation of the at least one frame with the consecutive frames, the computing device is configured to use the extracted de-fogging parameters to remove fog from the consecutive frames to increase the speed of de-fogging the one or more frames.

[0047] FIG. 3 illustrates an exemplary computer system in which or with which
embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
[0048] As shown in FIG. 3, computer system 300 includes an external storage
device 310, a bus 320, a main memory 330, a read only memory 340, a mass storage device 350, communication port 360, and a processor 370. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 370 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor 370 may include various units associated with embodiments of the present invention. Communication port 360 can be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fibre, a serial port, a parallel port, or other existing or future ports. Communication port 360 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.
[0049] Memory 330 can be Random Access Memory (RAM), or any other
dynamic storage device commonly known in the art. Read only memory 340 can be any static storage device(s) e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information e.g., start-up or BIOS instructions for processor 370. Mass storage 350 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g. those available from Seagate (e.g., the Seagate Barracuda 7200 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g. an array of disks (e.g.,

SATA arrays), available from various vendors including Dot Hill Systems Corp.,
LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[0050] Bus 320 communicatively couples processor(s) 370 with the other
memory, storage, and communication blocks. Bus 320 can be, e.g. a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects processor 370 to the software system.
[0051] Optionally, operator and administrative interfaces, e.g. a display,
keyboard, and a cursor control device, may also be coupled to bus 320 to support direct operator interaction with computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 360. External storage device 310 can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Video Disk - Read Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[0052] It will be apparent to those skilled in the art that the system 100 of the
disclosure may be provided using some or all of the mentioned features and components without departing from the scope of the present disclosure. While various embodiments of the present disclosure have been illustrated and described herein, it will be clear that the disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the scope of the disclosure, as described in the claims.

ADVANTAGES OF THE PRESENT DISCLOSURE
[0053] The present disclosure provides a system that enables a faster way of recovering foggy videos, wherein the information from previous frames is directly used to de-fog the consecutive frames.
[0054] The present disclosure provides a system that enhances the de-fogging
performance in terms of spatial and temporal coherence, and reduces the execution
time.
[0055] The present disclosure provides a system that reduces computational
complexity.
[0056] The present disclosure provides a system to improve frame rate.
[0057] The present disclosure provides a system that provides edge preservation
property of GDGF, which produces better quality de-fogged images.

We Claim:

1. A system (100) for de-fogging video, the system comprising:
an image capture device (102) configured to capture the one or more frames of a physical scene, the one or more frames pertaining to foggy frames; and
a processor (104) operatively coupled to the image capture device (102), the processor (104) coupled with a memory, said memory storing instructions executable by the processor to:
receive, from the image capture device (102), one or more frames of the physical scene;
analyze at least one frame of the one or more frames to extract de-fogging parameters of the at least one frame;
acquire a de-fogged frame by removing fog from at least one frame of the one or more frames using a de-fogging filter; and
compare the extracted de-fogging parameters of the at least one frame with consecutive frames to determine correlation of the at least one frame of the one or more frames with the consecutive frames,
wherein, based on the determination of correlation of the at least one frame with the consecutive frames, the processor (104) is configured to remove fog from the consecutive frames to increase the speed of de-fogging the one or more frames.
2. The system as claimed in claim 1, wherein the de-fogging parameters pertaining to any or a combination of intensity values, atmosphere light values and transmission values.
3. The system as claimed in claim 1, wherein the processor configured to extract the de-fogging parameters from each of the consecutive frames, upon the determination of uncorrelated set of parameters of the at least one frame with the consecutive frames.
4. The system as claimed in claim 3, wherein the processor configured to acquire de-fogged frames by removing fog from the consecutive frames.
5. The system as claimed in claim 1, wherein the processor configured to convert the de-fogged frames into a video.
6. The system as claimed in claim 1, wherein the de-fogging filter is a gradient domain guided filter (GDGF).
7. The system as claimed in claim 6, wherein the GDGF filter applies edge attentive parameters on the one or more frames.
8. The system as claimed in claim 1, wherein the processor operatively coupled to a learning engine, the learning engine is trained to detect the de-fogging parameters in the one or more frames.
9. The system as claimed in claim 1, wherein the image capture device comprises camera and any combination thereof.
10. A method (200) for de-fogging video, the method comprising:
obtaining (202), at an image capture device, one or more frames of a physical scene, the one or more frames pertaining to foggy frames;
receiving (204), at a computing device, from the image capture device, one or more frames of the physical scene;
analyzing (206), at the computing device, at least one frame of the one or more frames to extract de-fogging parameters of the at least one frame;

acquiring (208), at the computing device, a de-fogged frame by removing fog from at least one frame of the one or more frames using a de-fogging filter; and
comparing (210), at the computing device, the extracted de-fogging parameters of the at least one frame with consecutive frames to determine correlation of the at least one frame of the one or more frames with the consecutive frames, wherein, based on the determination of correlation of the at least one frame with the consecutive frames, the computing device is configured to remove (212) fog from the consecutive frames to increase speed of de-fogging the one or more frames.

Documents

Application Documents

# Name Date
1 202111029841-STATEMENT OF UNDERTAKING (FORM 3) [02-07-2021(online)].pdf 2021-07-02
2 202111029841-POWER OF AUTHORITY [02-07-2021(online)].pdf 2021-07-02
3 202111029841-FORM FOR STARTUP [02-07-2021(online)].pdf 2021-07-02
4 202111029841-FORM FOR SMALL ENTITY(FORM-28) [02-07-2021(online)].pdf 2021-07-02
5 202111029841-FORM 1 [02-07-2021(online)].pdf 2021-07-02
6 202111029841-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-07-2021(online)].pdf 2021-07-02
7 202111029841-EVIDENCE FOR REGISTRATION UNDER SSI [02-07-2021(online)].pdf 2021-07-02
8 202111029841-DRAWINGS [02-07-2021(online)].pdf 2021-07-02
9 202111029841-DRAWINGS [02-07-2021(online)]-2.pdf 2021-07-02
10 202111029841-DRAWINGS [02-07-2021(online)]-1.pdf 2021-07-02
11 202111029841-DECLARATION OF INVENTORSHIP (FORM 5) [02-07-2021(online)].pdf 2021-07-02
12 202111029841-COMPLETE SPECIFICATION [02-07-2021(online)].pdf 2021-07-02
13 202111029841-FORM 18 [08-05-2023(online)].pdf 2023-05-08
14 202111029841-FER.pdf 2024-07-02
15 202111029841-FORM-5 [02-01-2025(online)].pdf 2025-01-02
16 202111029841-FORM-26 [02-01-2025(online)].pdf 2025-01-02
17 202111029841-FER_SER_REPLY [02-01-2025(online)].pdf 2025-01-02
18 202111029841-DRAWING [02-01-2025(online)].pdf 2025-01-02
19 202111029841-CORRESPONDENCE [02-01-2025(online)].pdf 2025-01-02

Search Strategy

1 9841E_20-12-2023.pdf
2 202111029841_SearchStrategyAmended_E_SearchHistoryAE_11-11-2025.pdf