Abstract: METHOD AND APPARATUS FOR PROCESSING A FOGGY IMAGE. The present invention describes a method and apparatus for processing a foggy image. According to one embodiment, the foggy image having Red, Green and Blue (RGB) values is converted into Hue, Saturation, Intensity (HSV) color space values. From the HSV color space, a saturation map is generated. Further, a min max map in the HSV color space is generated, and airlight information is calculated from the min max map. After obtaining the saturation map and the airlight information, a transmission map is determined. In one embodiment, three transmission maps are generated using three different block sizes, viz. 3x3, 5x5 and 7x7. Accordingly, radiance maps are generated for these block sizes and merged using a temporal bilateral filter to provide a fog-free image. Figure 1
FORM 2
THE PATENTS ACT, 1970
[39 of 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(Section 10; Rule 13)
METHOD AND APPARATUS FOR PROCESSING A FOGGY IMAGE
SAMSUNG R&D INSTITUTE INDIA – BANGALORE Pvt. Ltd.
# 2870, ORION Building, Bagmane Constellation Business Park,
Outer Ring Road, Doddanekundi Circle,
Marathahalli Post,
Bangalore -560037, Karnataka, India
Indian Company
The following specification particularly describes the invention and the manner in which it is to be performed
FIELD OF THE INVENTION
The present invention generally relates to image processing, and more particularly relates to a method and apparatus for removing fog and haze from images to enhance image quality.
BACKGROUND OF THE INVENTION
The camera is one of the most popular applications, finding use in mobile phones, laptops, computer systems, wrist watches, automobiles, aircraft, surveillance and tracking systems, and the like. In the case of surveillance systems, cameras are installed in an open environment, and hence weather conditions such as rain, snow and fog affect the quality of the images captured by these cameras. Fog and haze are especially prevalent in the open environment during the winter season. Hence, cameras equipped in automobiles or in surveillance systems are unable to provide accurate picture information of an outdoor scene to the user. Dense fog reduces the visibility of the image and degrades the quality of the image preview, thereby reducing the user experience. Consequently, many road accidents happen due to dense fog and haze, as the camera systems are incapable of providing accurate information to the user. In the worst scenarios, even flight take-offs and landings get delayed due to the presence of dense fog.
A number of methods are reported in the literature for reducing haze/fog in an image, including, but not limited to, hue preserving, local contrast enhancement, adaptive histogram equalization, CLAHE-based and depth-based methods, transmission maps based on ICA, methods based on multiple images, the Dark Channel Prior (DCP) computed over a neighboring-pixels window, and the like. The DCP in the neighboring-pixels window helps to estimate the airlight and the transmission map, followed by soft matting. Some of the existing methods adopt the following steps to remove fog from an image: finding the DCP based on the RGB color space of the given image, finding a fixed-block-size transmission map, and refining the transmission map by applying soft matting.
However, these methods can remove the fog in an image only to some extent and can only slightly enhance the image quality. In view of the foregoing, there is a need for an improved method to remove fog and haze from an image, video or preview and to improve the user experience.
The above-mentioned shortcomings, disadvantages and problems are addressed herein, as will be understood by reading and studying the following specification.
SUMMARY OF THE INVENTION
Various embodiments herein describe a method for processing a foggy image. The method comprises of generating, by an image capturing device, a pixel depth image of a foggy image which includes fog by estimating pixel depth of a plurality of pixels of the foggy image, generating a min max map in a Hue, Saturation, Intensity (HSV) color space, calculating airlight information from the min max map, extracting saturation information from the HSV color space, acquiring one or more transmission maps of the foggy image using the calculated airlight and the saturation information, generating scene radiance images using one or more multi-sized windows, and merging the generated radiance images to obtain a defogged image.
According to one embodiment, the method further comprises of increasing contrast of the defogged image to enhance visibility of the captured image.
According to one embodiment, generating the pixel depth image of the foggy image comprises of calculating an improved Dark Channel Prior (DCP) information associated with the plurality of pixels of the foggy image being captured, obtaining Red-Green-Blue (RGB) color values of the foggy image being captured based on a channel difference between at least two of red (R), green (G), and blue (B) channels of the foggy image, and converting the RGB values into the HSV color space.
According to one embodiment, calculating the air light comprises of generating air light for RGB values with different block sizes to obtain air light values for the corresponding block sizes.
According to one embodiment, the saturation information provides the color variation in the image and intensity of the plurality of pixels in the image.
According to one embodiment, the one or more transmission maps are of the same size as the pixel depth image.
According to one embodiment, the one or more multi-sized windows are used to generate transmission maps having different block sizes.
According to one embodiment, the radiance images are merged using a temporal bilateral filtering to obtain the defogged image.
According to one embodiment, the method further comprises of selecting a region of interest from the foggy image based on a touch input from a user, and processing the selected region of interest to remove fog from the selected region of interest.
According to one embodiment, the method further comprises of enabling a user to navigate on the selected region of interest to remove fog from the user navigated area.
Various embodiments herein further describe an image capturing device for processing a foggy image. The image capturing device comprises of a modified dark channel prior calculating module for generating a pixel depth image of a foggy image which includes fog by estimating pixel depth of a plurality of pixels of the foggy image and generating a min max map in a Hue, Saturation, Intensity (HSV) color space, an airlight calculating module for calculating airlight information from the min max map and extracting saturation information from the HSV color space, a transmission map calculating module for acquiring one or more transmission maps of the foggy image using the calculated airlight and the saturation information, a scene radiance generating module for generating scene radiance images using one or more multi-sized windows, and a temporal bilateral filtering module for merging the generated radiance images to obtain a defogged image.
The foregoing has outlined, in general, the various aspects of the invention and is to serve as an aid to better understanding the more complete detailed description which is to follow. In reference to such, there is to be a clear understanding that the present invention is not limited to the method or application of use described and illustrated herein. It is intended that any other advantages and objects of the present invention that become apparent or obvious from the detailed description or illustrations contained herein are within the scope of the present invention.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
Figure 1 is a flow chart illustrating a method of processing a foggy image, according to an embodiment of the present invention.
Figure 2 is a schematic diagram illustrating a system overview in accordance with the embodiments of the present invention.
Figure 3 is a schematic block diagram illustrating an exemplary method of performing a fog removal process on a selected region of interest (ROI), according to one embodiment.
Figures 4A and 4B illustrate a pictorial representation comparing the results of the present invention with other existing techniques, according to one embodiment.
Although specific features of the present invention are shown in some drawings and not in others, this is done for convenience only as each feature may be combined with any or all of the other features in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The various embodiments of the present invention disclose a method and apparatus for processing a foggy image. In the following detailed description of the embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The specification may refer to “an”, “one” or “some” embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention mainly finds its application in the field of surveillance systems, including, but not limited to, ship navigation, road monitoring, flight runway monitoring and the like. Surveillance systems are affected by bad weather, and the image capturing device may capture images of lower quality. The factors affecting the image quality comprise at least one of fog, haze, smog, smoke, rain and snow. All these factors are collectively referred to hereinafter as “fog”.
Figure 1 is a flow chart illustrating a method of processing a foggy image, according to one embodiment. At step 102, a pixel depth image of a foggy image which includes fog is generated by estimating pixel depth of a plurality of pixels of the foggy image. At step 104, a min max map in a Hue, Saturation, Intensity (HSV) color space is generated. At step 106, airlight information is calculated from the generated min max map. From the obtained airlight values and the min max map, a transmission map is calculated. Further, at step 108, one or more transmission maps of the foggy image are acquired using the calculated airlight and the saturation information. At step 110, scene radiance images are generated using one or more multi-sized windows, wherein the multi-sized windows correspond to 3x3, 5x5 and 7x7 block sizes respectively. Finally, at step 112, the generated radiance images are merged to obtain the defogged image.
Figure 2 is a schematic diagram illustrating one or more components of an image capturing device according to one embodiment of the present invention. As shown in Figure 2, the system comprises of a modified dark channel prior calculating module 202, an airlight calculating module 204, a transmission map generating module 206, a scene radiance generating module 208, a temporal bilateral filtering module 210, and a contrast enhancement module 212. The image capturing device further comprises of other modules such as an image receiving module, a display unit and the like. A person having ordinary skill in the art understands the working of these modules and hence they are not explained in detail herein.
The modified dark channel prior (DCP) calculating module 202 is configured for calculating a modified dark channel of an input Red, Green and Blue (RGB) foggy image in terms of the Hue, Saturation, Intensity (HSV) color space. In the conventional DCP method, fog is detected based on the key assumption that most local patches of a fog-free outdoor image contain some pixels whose intensity is very low in at least one of the ‘R’, ‘G’ or ‘B’ channels. Using this, the thickness/density of the fog is estimated directly.
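For context, a minimal sketch of the conventional dark channel computation referred to above (a per-pixel minimum over the colour channels followed by a local minimum over a patch) is given below in Python with OpenCV; the function name and the 15x15 patch size are illustrative assumptions and not part of the claimed method.

```python
import cv2
import numpy as np

def conventional_dark_channel(rgb, patch_size=15):
    """Conventional DCP: per-pixel minimum over R, G, B, then a local
    minimum filter over a patch_size x patch_size neighbourhood."""
    min_channel = np.min(rgb.astype(np.float32), axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_size, patch_size))
    # Grey-scale erosion realises the local minimum over the patch.
    return cv2.erode(min_channel, kernel)
```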
In one embodiment, the foggy image is represented as

I(x) = J(x) t(x) + A (1 − t(x))

where I is the intensity of the captured image spoiled by haze, J is the original scene radiance or haze-free image, A is the global airlight, and t is the medium transmission describing the portion of the light that is not scattered and reaches the camera.
In the present invention, the input foggy image in the form of an RGB image is first converted into the HSV color space for calculating the modified dark channel prior.
J_Dark = V(1 − S)

where V is the intensity and S is the saturation in the HSV color space. The V(1−S) map is used to find the possible pixels having fog. Further, the V(1−S) map is applied on multi-level windows to refine the map for finding the airlight. Also, the saturation value is obtained from the HSV color space for calculating the transmission map.
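A minimal sketch of this conversion and the V(1−S) dark map is shown below, assuming an 8-bit RGB input and OpenCV’s HSV conversion; the function name and the [0, 1] scaling are illustrative.

```python
import cv2
import numpy as np

def modified_dark_map(rgb_u8):
    """Convert an 8-bit RGB image to HSV and return the V(1-S) map
    together with the saturation map, both scaled to [0, 1]."""
    hsv = cv2.cvtColor(rgb_u8, cv2.COLOR_RGB2HSV)
    s = hsv[:, :, 1].astype(np.float32) / 255.0   # saturation in [0, 1]
    v = hsv[:, :, 2].astype(np.float32) / 255.0   # value/intensity in [0, 1]
    j_dark = v * (1.0 - s)                        # pixels likely to contain fog
    return j_dark, s
```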
The modified DCP method for more accurate estimation of the haze transmission is defined by

J_dark(x) = V(x)(1 − S(x)) · min_{c∈{r,g,b}} ( min_{y∈Ω(x)} I^c(y) )
where I^c is a color channel of I and Ω(x) is a local patch centered at x. From the dark prior, the airlight is calculated using the airlight calculating module 204. The airlight calculating module 204 considers the top 1% of pixels and finds the maximum-value pixel of J_dark in the dark channel among those pixels. The V(1−S) map is calculated based on normalized values of the r, g and b colors. The value of I at that pixel is considered as the airlight for the foggy image. The airlight calculating module 204 transmits the calculated airlight value to the transmission map generating module 206.
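A minimal sketch of the airlight selection just described (top 1% of the dark map, then the brightest input pixel among them) is given below; the function name and the use of the channel sum as the brightness measure are assumptions.

```python
import numpy as np

def estimate_airlight(rgb, j_dark):
    """Pick the brightest input pixel among the top 1% of the dark map,
    as described for the airlight calculating module (illustrative only)."""
    h, w = j_dark.shape
    n = max(1, int(0.01 * h * w))                        # top 1% of pixels
    flat_dark = j_dark.reshape(-1)
    candidates = np.argsort(flat_dark)[-n:]              # indices of top 1% dark values
    intensities = rgb.reshape(-1, 3).astype(np.float32).sum(axis=1)
    best = candidates[np.argmax(intensities[candidates])]
    return rgb.reshape(-1, 3)[best].astype(np.float32)   # airlight per channel
```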
The transmission map generating module 206 is configured for generating different transmission maps using different block sizes, viz. 3x3, 5x5 and 7x7. First, the transmission map generating module 206 combines both the saturation map and the airlight values for calculating the transmission maps. Since the saturation of color in a foggy image decreases with the depth of an object, a saturation map is defined, and using the saturation map, the transmission map is created. The transmission map is given by the formula
t(x) = 1 − f(S) · min_{c∈{r,g,b}} ( min_{y∈Ω(x)} ( I^c(y) / A^c ) )

where f(S) = 0.8 − (0.2 × S)
where S is the saturation of the pixel and ranges from 0 to 1, i.e. (0 ≤ S ≤ 1). Objects which are far away have more fog and usually have less saturation. Hence, the saturation value of the pixels is mapped to f(S). Therefore, if the saturation value is less, then f(S) is more, and vice versa.
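A minimal sketch of this saturation-weighted transmission estimate is shown below, assuming the saturation map from the earlier HSV sketch and a per-channel airlight vector; the erosion-based block minimum and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def transmission_map(rgb, airlight, saturation, block=3):
    """Saturation-weighted transmission estimate for one block size.
    f(S) = 0.8 - 0.2*S replaces the fixed weight of the classical DCP formula."""
    norm = rgb.astype(np.float32) / np.maximum(airlight, 1e-6)  # I^c / A^c
    min_channel = np.min(norm, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (block, block))
    dark = cv2.erode(min_channel, kernel)       # minimum over the block neighbourhood
    f_s = 0.8 - 0.2 * saturation                # less saturation -> larger f(S)
    return 1.0 - f_s * dark
```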
It is to be noted that, in order to obtain a haze-free image, the original scene radiance has to be estimated accurately. The scene radiance generating module 208 is configured for removing haze from the input image. For removing haze, the original scene radiance is required, and the scene radiance is calculated using the below formula.
J(x) = (I(x) − A) / max(t(x), t₀) + A
where t₀ is a factor to restrict the noise and is assumed to be 0.1. In one embodiment, the scene radiance generating module 208 uses three transmission maps with three different block sizes (3x3, 5x5 and 7x7) to generate the scene radiance. The scene radiance generated using the 7x7 block size produces more artifacts in the image along the edges as compared to the 3x3 block size. On the other hand, the noise produced using the 7x7 block size is less when compared to 3x3. Thus, the scene radiance generating module 208 generates three scene radiance maps for the three different block sizes.
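A minimal sketch of this radiance recovery is given below, assuming the airlight is a per-channel vector and the transmission map is a single-channel array in [0, 1]; the commented usage line assumes the transmission sketch given earlier and is illustrative only.

```python
import numpy as np

def recover_radiance(rgb, transmission, airlight, t0=0.1):
    """Invert the haze model: J = (I - A) / max(t, t0) + A, clipped to [0, 255]."""
    i = rgb.astype(np.float32)
    t = np.maximum(transmission, t0)[:, :, np.newaxis]
    j = (i - airlight) / t + airlight
    return np.clip(j, 0, 255).astype(np.uint8)

# One radiance map per block size (3x3, 5x5 and 7x7), as in the embodiment:
# radiances = [recover_radiance(rgb, transmission_map(rgb, A, s, b), A) for b in (3, 5, 7)]
```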
The temporal bilateral filtering unit 210 is configured for combining the generated radiance maps into a single resultant image. The temporal bilateral filtering unit 210 is further configured for providing the resultant scene radiance as a better-quality radiance map. The filtering also reduces the noise and eliminates the need for soft matting.
The temporal bilateral filtering unit 210 calculates a weighted average of pixels across the multiple frames of radiance images obtained from the scene radiance generating module 208. The temporal bilateral filtering unit 210 considers the radiance map corresponding to the 3x3 block as a reference, as it preserves most of the edge or gradient information in the image. The filtering is performed on the intensity of the radiance image using the below formula,
where the filtered output is computed at each (i, j) pixel location over the radiance maps of the different block sizes, σd is a constant, and K is a normalizing factor.
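The filtering formula referenced above is not reproduced in the text. Purely for illustration, the sketch below shows one plausible reading of the description: Gaussian weights on the deviation of each radiance map from the 3x3-block reference, normalised by the sum of weights K. The function name, the weight form and the default value of σd are assumptions, not the formula of the specification.

```python
import numpy as np

def merge_radiances(radiances, sigma_d=10.0):
    """Weighted average of the radiance maps, using the 3x3-block map
    (radiances[0]) as the reference. The weight form and sigma_d are
    assumptions; the exact formula is not reproduced in the specification."""
    ref = radiances[0].astype(np.float32)
    num = np.zeros_like(ref)
    k = np.zeros_like(ref)                       # normalising factor K
    for r in radiances:
        r = r.astype(np.float32)
        w = np.exp(-((r - ref) ** 2) / (2.0 * sigma_d ** 2))
        num += w * r
        k += w
    return np.clip(num / np.maximum(k, 1e-6), 0, 255).astype(np.uint8)
```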
The value of σd depends on the characteristics of the noise. As can be seen above, the normalizing factor shows that the weights depend on the deviation of each pixel from the corresponding pixel of the reference image. If the deviation is large, the weights are spread more widely, whereas for similar pixels the weights become equal. Once the resultant haze-free image is obtained, the contrast of the image is enhanced using the contrast enhancement module 212. Since the noise component is reduced by the temporal bilateral filtering unit 210, the contrast may be enhanced significantly without enhancing the noise. The contrast is enhanced based on the following equation,
where T is a threshold, I(i, j) is the input intensity and α is the contrast enhancement factor producing the output O(i, j). Based on observations on a number of images, T is assumed to be equal to 0.8 times the mean intensity of the image and α = 2.0 in the present invention.
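The contrast-stretching equation itself is likewise not reproduced in the text. The following sketch therefore only assumes a simple linear stretch about the threshold T = 0.8 × mean intensity with factor α = 2.0; the actual equation of the specification may differ, and the function name is illustrative.

```python
import numpy as np

def enhance_contrast(img_u8, alpha=2.0):
    """Hedged sketch of a threshold-based contrast stretch: intensities are
    scaled about T = 0.8 * mean intensity by the factor alpha. The exact
    equation of the specification is not reproduced in the text."""
    i = img_u8.astype(np.float32)
    t = 0.8 * i.mean()                  # threshold T
    out = t + alpha * (i - t)           # linear stretch about the threshold
    return np.clip(out, 0, 255).astype(np.uint8)
```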
The fog image processing method can also be applied to video frames. In one exemplary embodiment, a current frame (say the jth frame) is extracted. If fog is detected in the jth frame, the frame is resized to an image size of 640x480. If rain or snow is detected instead of fog, the jth frame is resized to 320x240. The processing of the frame then follows the same procedure as carried out for a still image.
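For illustration, a minimal sketch of this frame-size dispatch using OpenCV’s resize is given below; the function name and the weather-label strings are assumptions.

```python
import cv2

def preprocess_frame(frame, weather):
    """Resize the jth frame before defogging, per the exemplary embodiment:
    640x480 for fog, 320x240 for rain or snow (sizes taken from the text)."""
    if weather == "fog":
        return cv2.resize(frame, (640, 480))
    elif weather in ("rain", "snow"):
        return cv2.resize(frame, (320, 240))
    return frame
```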
The present invention can also be performed in real time. Hence, the present invention can be used in real-time scenarios such as monitoring automobiles, flights, the mobile domain and the like. The present invention further improves the visibility of a scene or preview in real time and helps avoid the accidents and delays caused by dense fog.
In one embodiment, a user is allowed to select a region of interest (ROI) from the foggy image to remove fog from the selected ROI. Figure 3 is a schematic block diagram illustrating an exemplary method of performing the fog removal process on the selected ROI, according to one embodiment. As shown in Figure 3, once the input frame/image is received, the user is allowed to provide a touch input to select the region of interest at step 4a. Alternatively, any object detection algorithm can be used to select the region of interest from the foggy image. Based on this, at step 4b, the selected ROI is processed to remove fog from the selected ROI. Finally, at step 4c, the fog-free image is output. Similarly, the user is allowed to navigate his/her finger over the area of interest. This is performed at step 3a. At step 3b, the fog removal algorithm is applied on the user-navigated area. The user-navigated area is processed to remove fog from the area, and the fog-free image is output at step 3c. The same is illustrated in Figure 3.
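A minimal sketch of ROI-restricted defogging as described above is given below, assuming a rectangular ROI (x, y, width, height) obtained from the touch input and an arbitrary defogging function passed as `defog_fn`; the names are illustrative.

```python
import numpy as np

def defog_roi(image, roi, defog_fn):
    """Apply a defogging function only inside the user-selected region of
    interest; roi is (x, y, w, h) derived from the touch input."""
    x, y, w, h = roi
    out = image.copy()
    out[y:y + h, x:x + w] = defog_fn(image[y:y + h, x:x + w])
    return out
```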
Figures 4A and 4B illustrate a comparison of images before and after applying the present invention, according to one embodiment. The fog or haze removal method was tested on a sample of 100 images, some of which are shown in Figures 4A and 4B. As can be seen, the output images are enhanced significantly in terms of fog removal, saturation and contrast enhancement. As the generation of the transmission map is based on the saturation component, the degradation of the color saturation of the image is properly compensated at most places in the image.
Figures 4A and 4B illustrate a comparison of results after processing foggy images using the present invention along with other existing techniques, according to one embodiment. As shown in Figure 4A, the first image illustrates the original foggy image. The second image of Figure 4A illustrates the foggy image when viewed through an image capturing unit. The third image illustrates the resultant image obtained after processing the foggy image using “single image haze removal using dark channel prior”. The last image illustrates the resultant image obtained after processing the foggy image using the present invention. It can be seen that the fog/haze is removed to a greater extent using the present invention. Another sample image illustrating the comparison of foggy image processing is shown in Figure 4B.
The processing time for processing the foggy image with the present invention is compared with that of other existing techniques, and the results are listed in the table below. From the table, the processing time for processing a foggy image using the present invention is less than that of the existing techniques.
Method | Image size (pixels) | Platform | Processing time (sec)
Tan [2] | 600x400 | Intel Core i7 3630QM 2.40 GHz | 42.857
He et al. [8] | 600x400 | Intel Core i7 3630QM 2.40 GHz | 2.142
Present method | 600x400 | Intel Core i7 3630QM 2.40 GHz | 0.125
Although the invention of the method and system has been described in connection with the embodiments of the present invention illustrated in the accompanying drawings, it is not limited thereto. It will be apparent to those skilled in the art that various substitutions, modifications and changes may be made thereto without departing from the scope and spirit of the invention.
CLAIMS:
We claim:
1. A method for processing a foggy image, the method comprises of:
generating, by an image capturing device, a pixel depth image of a foggy image which includes fog by estimating pixel depth of a plurality of pixels of the foggy image;
generating a min max map in a Hue, Saturation, Intensity (HSV) color space;
calculating an air-light information from the min max map;
extracting saturation information from the HSV color space;
acquiring one or more transmission maps of the foggy image using the calculated air light and the saturation information;
generating scene radiance images using one or more multi-sized windows; and
merging the generated radiance images to obtain a defogged image.
2. The method as claimed in claim 1, further comprising:
increasing contrast of the defogged image to enhance visibility of the captured image.
3. The method as claimed in claim 1, wherein generating the pixel depth image of the foggy image comprises of:
calculating an improved Dark Channel Prior (DCP) information associated with the plurality of pixels of the foggy image being captured;
obtaining Red-Green-Blue (RGB) color values of the foggy image being captured based on a channel difference between at least two of red (R), green (G), and blue (B) channels of the foggy image; and
converting the RGB values into the HSV color space.
4. The method as claimed in claim 1, wherein calculating the air light comprises of:
generating air light for RGB values with different block sizes to obtain air light values for the corresponding block sizes.
5. The method as claimed in claim 1, wherein the saturation information provides the color variation in the image and intensity of the plurality of pixels in the image.
6. The method as claimed in claim 1, wherein the one or more transmission maps are of the same size as the pixel depth image.
7. The method as claimed in claim 1, wherein the one or more multi-sized windows are used to generate transmission maps having different block sizes.
8. The method as claimed in claim 1, wherein the radiance images are merged using a temporal bilateral filtering to obtain the defogged image.
9. The method as claimed in claim 1, further comprising:
selecting a region of interest from the foggy image based on a touch input from a user; and
processing the selected region of interest to remove fog from the selected region of interest.
10. The method as claimed in claim 9, further comprising:
enabling a user to navigate on the selected region of interest to remove fog from the user navigated area.
11. An image capturing device for processing a foggy image, comprising:
a modified dark channel prior calculating module for generating a pixel depth image of a foggy image which includes fog by estimating pixel depth of a plurality of pixels of the foggy image;
generating a min max map in a Hue, Saturation, Intensity (HSV) color space;
an airlight calculating module for calculating an air-light information from the min max map;
extracting saturation information from the HSV color space;
a transmission map calculating module for acquiring one or more transmission maps of the foggy image using the calculated air light and the saturation information;
a scene radiance generating module for generating scene radiance images using one or more multi-sized windows; and
a temporal bilateral filtering module for merging the generated radiance images to obtain a defogged image.
12. The image capturing device as claimed in claim 11, further comprising a contrast enhancement module for:
increasing contrast of the defogged image to enhance visibility of the captured image.
Dated this the 28th day of April 2016
Signature
KEERTHI J S
Patent Agent
Agent for the applicant