Abstract: A method and system of image processing for correcting a reflection region in an image are disclosed. In an embodiment, the method may include receiving a plurality of images of a subject, wherein each of the plurality of images is obtained in an associated lighting condition of a plurality of lighting conditions. The method may further include selecting an input image from the plurality of images and generating a mask image using the input image. The method may further include selecting a set of images from the plurality of images other than the input image, performing a first blending operation on the set of images to generate a blended image, and combining the mask image and the blended image using a second blending operation to generate a reflection-corrected image.
Claims:
1. A method of image processing, the method comprising:
receiving, by an image processing device, a plurality of images of a subject, wherein each of the plurality of images is obtained in an associated lighting condition of a plurality of lighting conditions;
selecting, by the image processing device, an input image from the plurality of images;
generating, by the image processing device, a mask image using the input image, wherein the mask image defines a reflection region associated with the input image;
selecting, by the image processing device, a set of images from the plurality of images other than the input image, wherein the set of images comprises a predetermined number of images from the plurality of images;
performing, by the image processing device, a first blending operation on the set of images to generate a blended image; and
combining, by the image processing device, the mask image and the blended image using a second blending operation to generate a reflection-corrected image.
2. The method as claimed in claim 1, wherein generating the mask image comprises:
identifying high saturated pixels associated with the input image to generate a first output image, wherein the first output image defines the high saturated pixels;
identifying high contrasted pixels associated with the input image to generate a second output image, wherein the second output image defines the high contrasted pixels; and
combining the first output image and the second output image to generate the mask image.
3. The method as claimed in claim 2, wherein identifying the high saturated pixels comprises:
separating a luminance component and a color component associated with each pixel of the input image, wherein the separating comprises:
converting (Red, Green, Blue) RGB color configuration associated with the input image into HSV color configuration, wherein the HSV color configuration defines a saturation (S) plane and a brightness value (V) plane;
for each pixel associated with the input image, comparing:
a pixel value in the S plane with a first threshold value; and
a pixel value in the V plane with a second threshold value; and
determining the pixel to be associated with the reflection region, based on the comparison.
4. The method as claimed in claim 2, wherein identifying the high contrasted pixels comprises:
converting RGB color configuration associated with the input image into Greyscale color configuration to generate the second output image.
5. The method as claimed in claim 1 further comprising:
performing a noise cancellation on the mask image, using a blur algorithm.
6. The method as claimed in claim 1, wherein performing the first blending operation on the set of images comprises:
creating a set of pairs of images from the set of images;
converting RGB color configuration associated with each image of each pair of images into LAB configuration, wherein the LAB configuration defines a perceptual lightness (L) value, and an A value and a B value corresponding to associated colors;
performing a blending operation based on the L value associated with each image of each pair of images to determine a blended output (Lout) value;
determining a difference between the Lout value and the L value associated with each image of each pair of images, to determine difference values corresponding to the pair of images;
selecting a lower difference value of the difference values corresponding to the pair of images;
generating a third output image corresponding to each pair of images, using the A value and the B value associated with the image of each pair of images corresponding to the lower difference value, wherein the third output image is in LAB configuration;
combining a plurality of third output images corresponding to the plurality of pairs of images, by applying a minimum model, to generate a fourth output image, wherein the fourth output image is in LAB configuration, and wherein applying the minimum model comprises:
comparing a plurality of L values associated with the plurality of third output images; and
generating the fourth output image by selecting a minimum L value among the plurality of L values and the A value and the B value corresponding to the minimum L value; and
converting the LAB configuration associated with the fourth output image into RGB color configuration, to generate the blended image.
7. The method as claimed in claim 1, wherein the second blending operation is based on a Poisson blending technique.
8. The method as claimed in claim 6, wherein each image of each pair of images is blended using an Alpha blending technique.
9. An image processing device comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores a plurality of processor-executable instructions, which, upon execution, cause the processor to:
receive a plurality of images of a subject, wherein each of the plurality of images is obtained in an associated lighting condition of a plurality of lighting conditions;
select an input image from the plurality of images;
generate a mask image using the input image, wherein the mask image defines a reflection region associated with the input image;
select a set of images from the plurality of images other than the input image, wherein the set of images comprises a predetermined number of images from the plurality of images;
perform a first blending operation on the set of images to generate a blended image; and
combine the mask image and the blended image using a second blending operation to generate a reflection-corrected image.
10. The image processing device as claimed in claim 9, wherein generating the mask image comprises:
identifying high saturated pixels associated with the input image to generate a first output image, wherein the first output image defines the high saturated pixels;
identifying high contrasted pixels associated with the input image to generate a second output image, wherein the second output image defines the high contrasted pixels; and
combining the first output image and the second output image to generate the mask image.
11. The image processing device as claimed in claim 10, wherein identifying the high saturated pixels comprises:
separating a luminance component and a color component associated with each pixel of the input image, wherein the separating comprises:
converting (Red, Green, Blue) RGB color configuration associated with the input image into HSV color configuration, wherein the HSV color configuration defines a saturation (S) plane and a brightness value (V) plane;
for each pixel associated with the input image, comparing:
a pixel value in the S plane with a first threshold value; and
a pixel value in the V plane with a second threshold value; and
determining the pixel to be associated with the reflection region, based on the comparison.
12. The image processing device as claimed in claim 10, wherein identifying the high contrasted pixels comprises:
converting RGB color configuration associated with the input image into Greyscale color configuration to generate the second output image.
13. The image processing device as claimed in claim 9, wherein performing the first blending operation on the set of images comprises:
creating a set of pairs of images from the set of images;
converting RGB color configuration associated with each image of each pair of images into LAB configuration, wherein the LAB configuration defines a perceptual lightness (L) value, and an “a” value and a “b” value corresponding to associated colors;
performing a blending operation based on the L value associated with each image of each pair of images to determine a blended output (Lout) value;
determining a difference between the Lout value and the L value associated with each image of each pair of images, to determine difference values corresponding to the pair of images;
selecting a lower difference value of the difference values corresponding to the pair of images;
generating a third output image corresponding to each pair of images, using the “a” value and the “b” value associated with the image of each pair of images corresponding to the lower difference value, wherein the third output image is in LAB configuration;
combining a plurality of third output images corresponding to the plurality of pairs of images, by applying a minimum model, to generate a fourth output image, wherein the fourth output image is in LAB configuration, and wherein applying the minimum model comprises:
comparing a plurality of L values associated with the plurality of third output images; and
generating the fourth output image by selecting a minimum L value among the plurality of L values and the “a” value and the “b” value corresponding to the minimum L value; and
converting the LAB configuration associated with the fourth output image into RGB color configuration, to generate the blended image.
14. The image processing device as claimed in claim 9,
wherein the second blending operation is based on a Poisson blending technique; and
wherein each image of each pair of images is blended using an Alpha blending technique.
Description:
Technical Field
[001] This disclosure relates generally to image processing, and more particularly to a method and system for removing undesirable reflection from microscopic images.
BACKGROUND
[002] An image, and in particular a microscopic image, of a subject with a reflective surface can include undesirable reflective glare that may obscure one or more portions of the image. The reflective glare can be a result of a light source(s) in the environment of the device or instrument (e.g., lamps) or can be a result of light produced by the device or instrument itself, such as due to optical settings of a microscope for better visualization of material on a computer screen.
[003] However, if the subject is shiny or reflective, the acquired image may exhibit a strong reflection effect or be highly saturated in some areas. A highly saturated image region leads to loss of information, which is critical for quality analysis of industrial devices such as printed circuit boards (PCBs), mechanical components, and so on. Therefore, the image information of saturated pixels needs to be recovered to prevent this loss of information.
SUMMARY OF THE INVENTION
[004] In an embodiment, a method of image processing is disclosed. The method may include receiving a plurality of images of a subject, wherein each of the plurality of images is obtained in an associated lighting condition of a plurality of lighting conditions, and selecting an input image from the plurality of images. The method may further include generating a mask image using the input image, wherein the mask image defines a reflection region associated with the input image, and selecting a set of images from the plurality of images other than the input image. The set of images may include a predetermined number of images from the plurality of images. The method may further include performing a first blending operation on the set of images to generate a blended image and combining the mask image and the blended image using a second blending operation to generate a reflection-corrected image.
[005] In another embodiment, an image processing device for correcting a reflection region in an image is disclosed. The image processing device includes a processor and a memory which stores a plurality of instructions. The plurality of instructions, upon execution by the processor, may cause the processor to receive a plurality of images of a subject, wherein each of the plurality of images is obtained in an associated lighting condition of a plurality of lighting conditions, and select an input image from the plurality of images. The plurality of instructions may further cause the processor to generate a mask image using the input image, wherein the mask image defines a reflection region associated with the input image, and select a set of images from the plurality of images other than the input image. The set of images may include a predetermined number of images from the plurality of images. The plurality of instructions may further cause the processor to perform a first blending operation on the set of images to generate a blended image and combine the mask image and the blended image using a second blending operation to generate a reflection-corrected image.
BRIEF DESCRIPTION OF THE DRAWINGS
[006] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[007] FIG. 1A illustrates a block diagram of a system for correcting reflection region in an image, in accordance with an embodiment of the present disclosure.
[008] FIG. 1B illustrates a functional block diagram of an image processing device, in accordance with an embodiment.
[009] FIG. 2A-2B are snapshots of images of a subject having multiple reflective surfaces obtained under different lighting conditions.
[010] FIG. 3 illustrates snapshots of a lighting device in a plurality of lighting configurations and a plurality of images of a subject, in accordance with an embodiment.
[011] FIG. 4 is a process flow diagram of a process of generating the mask image using the input image, in accordance with some embodiments.
[012] FIG. 5 is a process flow diagram of a process of performing the first blending operation on the set of images, in accordance with some embodiments.
[013] FIG. 6 is a flowchart of a method of image processing for correcting reflection region in an image, in accordance with an embodiment.
[014] FIG. 7 is a flowchart of a method of generating the binary mask image, in accordance with an embodiment.
[015] FIG. 8 is a flowchart of a method of performing a first blending operation, in accordance with an embodiment.
DETAILED DESCRIPTION OF THE DRAWINGS
[016] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims. Additional illustrative embodiments are listed.
[017] The present disclosure provides for identifying reflected regions and then removing them, using multiple images obtained under different light settings. The present disclosure can be divided into three parts: the first uses an Alpha blending technique and a minimum algorithm; the second relates to identification of the reflected regions to generate a mask image (also referred to as a binary image) for a user-selected input image; and the third performs a Poisson blending technique using the user-selected input image, the Alpha-blended image, and the mask image.
[018] Referring now to FIG. 1A, a block diagram of an exemplary system 100 for image processing is illustrated, in accordance with an embodiment of the present disclosure. The system 100 may include an image processing device 104. The image processing device 104 may be a computing device having data processing capability. In particular, the image processing device 104 may have the capability for correcting a reflection region of an image captured by an imaging device. Examples of the image processing device 104 may include, but are not limited to, a desktop, a laptop, a notebook, a netbook, a tablet, a smartphone, a mobile phone, an application server, a web server, or the like.
[019] Additionally, in some embodiments, the system 100 may include an imaging device 102. The imaging device 102 may generally include one or more cameras, such as a high-resolution camera. Further, in some embodiments, the imaging device 102 may be implemented in a microscope. The system 100 may further include a data storage 114. For example, the data storage 114 may store various types of data required by the image processing device 104 for correcting the reflection region in an image. For example, the data storage 114 may store one or more images captured by the imaging device 102.
[020] The image processing device 104 may be communicatively coupled to the data storage 114 and the imaging device 102 via a communication network 112. The communication network 112 may be a wired or a wireless network, and examples may include, but are not limited to, the Internet, Wireless Local Area Network (WLAN), Wi-Fi, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and General Packet Radio Service (GPRS). Various devices in the system 100 may be configured to connect to the communication network 112 in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, light fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless access point (AP), device-to-device communication, cellular communication protocols, and Bluetooth (BT) communication protocols.
[021] The image processing device 104 may be configured to perform one or more functionalities that may include receiving a plurality of images of a subject. Each of the plurality of images may be obtained in an associated lighting condition of a plurality of lighting conditions. The one or more functionalities may further include selecting an input image from the plurality of images and generating a mask image using the input image. The one or more functionalities may further include selecting a set of images from the plurality of images other than the input image and performing a first blending operation on the set of images to generate a blended image. The one or more functionalities may further include combining the mask image and the blended image using a second blending operation to generate a reflection-corrected image.
[022] In order to perform the above-discussed functionalities, the image processing device 104 may include a processor 106, a memory 108, and an input/output device 110. The processor 106 may include suitable logic, circuitry, interfaces, and/or code that may be configured to correct the reflection region of an image. The processor 106 may be implemented based on temporal and spatial processor technologies, which may be known to one ordinarily skilled in the art. Examples of implementations of the processor 106 may be a Graphics Processing Unit (GPU), a Reduced Instruction Set Computing (RISC) processor, an Application-Specific Integrated Circuit (ASIC) processor, a Complex Instruction Set Computing (CISC) processor, a microcontroller, Artificial Intelligence (AI) accelerator chips, a co-processor, a central processing unit (CPU), and/or a combination thereof. The memory 108 may include suitable logic, circuitry, and/or interfaces that may be configured to store instructions executable by the processor 106. The memory 108 may store instructions that, when executed by the processor 106, may cause the processor 106 to correct the reflection region of an image. The memory 108 may be a non-volatile memory or a volatile memory. Examples of non-volatile memory may include, but are not limited to a flash memory, a Read-Only Memory (ROM), a Programmable ROM (PROM), Erasable PROM (EPROM), and Electrically EPROM (EEPROM) memory. Examples of volatile memory may include but are not limited to Dynamic Random-Access Memory (DRAM), and Static Random-Access memory (SRAM). The memory 108 may also store various data that may be captured, processed, and/or required by the system 100.
[023] The image processing device 104 may further include the input/output device 110. Examples may include, but are not limited to, a display, keypad, microphone, audio speakers, vibrating motor, LED lights, etc. The I/O devices 110 may receive input from a user and also display an output of the computation performed by the processor 106. For example, the user input may include the plurality of images of a subject, in which each of the plurality of images may be obtained in an associated lighting condition of a plurality of lighting conditions. Further, the display of the I/O device 110 may include a display screen that is capable of displaying the reflection-corrected image.
[024] Additionally, the image processing device 104 may be communicatively coupled to an external device 116 for sending and receiving various data. Examples of the external device 116 may include, but are not limited to, a remote server, digital devices, and a computer system. The image processing device 104 may connect to the external device 116 over the communication network 112, or via a wired connection, for example via Universal Serial Bus (USB). A computing device, a smartphone, a mobile device, a laptop, a smartwatch, a personal digital assistant (PDA), an e-reader, and a tablet are all examples of the external device 116.
[025] Referring now to FIG. 1B, a functional block diagram of the image processing device 104 is illustrated, in accordance with an embodiment of the present disclosure. As mentioned above, the image processing device 104 may be configured to perform image processing to correct a reflection region in the captured image. The image processing device 104 may include an image receiving module 122, a selection module 124, a mask image generating module 126, a first blending operation performing module 128, and a combining module 130.
[026] It should be noted that all such aforementioned modules 122-130 may be represented as a single module or a combination of different modules. Further, as will be appreciated by those skilled in the art, each of the modules 122-130 may reside, in whole or in part, on one device or multiple devices in communication with each other. In some embodiments, each of the modules 122-130 may be implemented as a dedicated hardware circuit comprising a custom application-specific integrated circuit (ASIC) or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. Each of the modules 122-130 may also be implemented in a programmable hardware device such as a field programmable gate array (FPGA), programmable array logic, or a programmable logic device, and so forth. Alternatively, each of the modules 122-130 may be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, include one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, function, or other construct. Nevertheless, the executables of an identified module or component need not be physically located together, but may include disparate instructions stored in different locations which, when joined logically together, include the module and achieve the stated purpose of the module. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices.
[027] The image receiving module 122 may receive a plurality of images of a subject. It should be noted that each of the plurality of images may be obtained in an associated lighting condition of a plurality of lighting conditions. Further, each of the plurality of lighting conditions is unique. By way of an example, the plurality of images may be obtained using the imaging device 102 implemented in a microscope. It should be further noted that, due to the lighting and the presence of a reflective surface on the subject, the obtained image may sometimes include a reflection region. This highly saturated image region leads to loss of information and makes analyzing the images difficult. The reflection region is essentially a highly saturated region in the image, as explained in conjunction with FIGS. 2A-2B.
[028] Referring now to FIGS. 2A-2B, snapshots of images 200A and 200B of a subject having multiple reflective surfaces obtained under different lighting conditions are illustrated, in accordance with some embodiments. For example, the images 200A and 200B may be obtained via a microscope under lighting generated by ring lights. As shown in FIG. 2A, under a first lighting condition, the image 200A of the subject includes a reflection region 202 corresponding to a first reflective surface. Further, as shown in FIG. 2B, under a second lighting condition, the image 200B of the subject includes a reflection region 204 corresponding to a second reflective surface. However, as it will be appreciated, in the image 200B, the reflection region 202 (captured in the image 200A) is minimally present or is absent. Similarly, in the image 200A, the reflection region 204 (captured in the image 200B) is minimally present or is absent. As will be further appreciated, the reflection regions 202, 204 are highly saturated regions that lead to loss of information, as the actual surface of the subject is not captured in the image. Therefore, when multiple images of the subject are obtained under different lighting conditions, each of the multiple images may include different reflection regions. The obtaining of the plurality of images under the plurality of lighting conditions is further explained in conjunction with FIG. 3.
[029] Referring now to FIG. 3, snapshots of a lighting device in a plurality of lighting configurations 302-1 to 302-11 and a plurality of images 304-1 to 304-11 of a subject (a printed circuit board (PCB)) are illustrated, in accordance with some embodiments. The plurality of images are obtained under lighting conditions with fixed intensity values (RL Light Intensity = 80 and CXI Intensity = 20), using a microscope providing eleven ring light settings. As shown in FIG. 3, the lighting device may include a plurality of light sources (e.g., light-emitting diodes (LEDs)) arranged along a ring. Further, the lighting device may be configured in a plurality of lighting configurations to create a plurality of lighting conditions, for example by selectively activating a plurality of light sources of the lighting device. Under each of these lighting conditions, an image of the subject may be obtained. In some embodiments, as shown in FIG. 3, the lighting device may be configured in eleven lighting configurations to obtain eleven different images of the subject corresponding to the eleven lighting configurations. For example, a first image 304-1 of the subject is obtained in a first lighting configuration 302-1 of the lighting device (i.e., a first lighting condition). Similarly, a second image 304-2 of the subject is obtained in a second lighting configuration 302-2 of the lighting device (i.e., a second lighting condition). As such, eleven images 304-1 to 304-11 of the subject are obtained in eleven different lighting configurations 302-1 to 302-11 of the lighting device. As will be appreciated, each of the eleven images 304-1 to 304-11 includes different reflection regions due to the different lighting conditions and the presence of multiple reflective surfaces on the subject. As such, every image has some information which is not present in the others. So, to regain the complete information, a set of images (e.g., six images) may be used as input to the Alpha blending, as will be discussed later.
[030] The selection module 124 may select an input image from the plurality of images, for example, from the eleven images 304-1 to 304-11. It should be noted that the input image may be selected either manually or using an automation application. Further, the input image may be selected randomly.
[031] The mask image generating module 126 may generate a mask image using the input image. The mask image defines a reflection region associated with the input image. In particular, the mask image highlights the reflection region which is captured in the input image, corresponding to the lighting condition. The process of generating the mask image is further explained in conjunction with FIG. 4.
[032] Referring now to FIG. 4, a process flow diagram of a process 400 of generating the mask image using the input image is illustrated, in accordance with some embodiments. First, the mask image generating module 126 may identify high saturated pixels associated with a selected input image. Based on the high saturated pixels identified, the mask image generating module 126 may generate a first output image corresponding to the input image. As such, the first output image defines the high saturated pixels.
[033] In order to identify the high saturated pixels associated with the selected input image, at step 402, the mask image generating module 126 may receive the input image. Further, at step 404, the mask image generating module 126 may separate a luminance component and a color component associated with each pixel of the input image. By way of an example, separating the luminance component and the color component may include converting RGB (Red, Green, Blue) color configuration associated with the input image into HSV (Hue, Saturation, Value) color configuration. As will be appreciated by those skilled in the art, the HSV color configuration defines a Saturation (S) plane and a Brightness Value (V) plane.
[034] Further, for each pixel associated with the input image, the mask image generating module 126 may compare a pixel value in the S plane with a first threshold value (SThreshold value), and compare a pixel value in the V plane with a second threshold value (VThreshold value). The mask image generating module 126 may further determine whether the pixel is associated with the reflection region, based on the comparison. In particular, if the pixel value in the S plane is less than the SThreshold value and the pixel value in the V plane is greater than the VThreshold value, the mask image generating module 126 may consider the pixel to be part of the reflection region. Based on such pixels, the mask image generating module 126 may generate the first output image with the saturated pixels, at step 406.
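By way of a non-limiting illustration, the saturation test described above may be sketched in Python using OpenCV and NumPy as follows. The function name saturated_pixel_mask, the BGR input convention, and the threshold values s_threshold and v_threshold are assumptions made for this sketch and are not prescribed by the disclosure.

```python
import cv2
import numpy as np

def saturated_pixel_mask(input_bgr, s_threshold=60, v_threshold=220):
    """Flag pixels with low saturation and high brightness as candidate
    reflection pixels (corresponding to the first output image of step 406)."""
    hsv = cv2.cvtColor(input_bgr, cv2.COLOR_BGR2HSV)
    s_plane = hsv[:, :, 1]
    v_plane = hsv[:, :, 2]
    # A pixel is flagged when its S value is below the first threshold
    # and its V value is above the second threshold.
    reflected = (s_plane < s_threshold) & (v_plane > v_threshold)
    return np.where(reflected, 255, 0).astype(np.uint8)
```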
[035] Second, the mask image generating module 126 may identify high contrasted pixels associated with the input image. Based on the high contrasted pixels, the mask image generating module 126 may generate a second output image. The second output image thus defines the high contrasted pixels. In order to identify the high contrasted pixels associated with the selected input image, at step 408, the mask image generating module 126 may convert RGB color configuration associated with the input image into Greyscale color configuration. As a result, the second output image with the high contrasted pixels may be obtained, at step 410.
[036] Further, in some embodiments, at step 412, the mask image generating module 126 may combine the first output image (of step 406) and the second output image (of step 410) to obtain a preliminary mask image. At step 414, a post processing operation may be performed on the preliminary mask image. By way of an example, the post processing operation may include performing a noise cancellation on the preliminary mask image, using a Gaussian blur algorithm. Based on the post processing operation, a final mask image may be obtained at step 416.
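Continuing the sketch above, the high-contrast mask, the combination of step 412, and the Gaussian-blur post-processing of step 414 might look as follows. The interpretation of "high contrasted pixels" as very bright grayscale pixels, the grayscale threshold, and the 5x5 kernel size are illustrative assumptions; the disclosure does not fix these parameters.

```python
def generate_mask_image(input_bgr, grey_threshold=240):
    """Sketch of steps 408-416: high-contrast mask from the grayscale image,
    combination with the saturation mask, and Gaussian-blur noise cancellation."""
    grey = cv2.cvtColor(input_bgr, cv2.COLOR_BGR2GRAY)
    # Assumed reading of "high contrasted pixels": very bright grayscale pixels.
    _, second_output = cv2.threshold(grey, grey_threshold, 255, cv2.THRESH_BINARY)
    first_output = saturated_pixel_mask(input_bgr)                    # sketch above
    preliminary_mask = cv2.bitwise_or(first_output, second_output)    # step 412
    final_mask = cv2.GaussianBlur(preliminary_mask, (5, 5), 0)        # step 414
    return final_mask                                                 # step 416
```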
[037] Returning to FIG. 1B, once the mask image is generated, the selection module 124 may further select a set of images from the plurality of images other than the input image. The set of images comprises a predetermined number of images from the plurality of images. For example, the set of images may include six images selected from the eleven images 304-1 to 304-11. It should be noted that the set of images may be selected either manually or using an automation application. Further, the set of images may be selected randomly.
[038] The first blending operation performing module 128 may perform a first blending operation on the set of images to generate a blended image. The first blending operation is further explained in conjunction with FIG. 5.
[039] Referring now to FIG. 5, a process flow diagram of a process 500 of performing the first blending operation on the set of images is illustrated, in accordance with some embodiments. In order to perform the first blending operation, a set of pairs of images may be created from the set of images. For example, as shown in FIG. 5, three pairs of images may be created from the set of six images selected from the eleven images 304-1 to 304-11 – a first pair including images 304-2, 304-11, a second pair including images 304-3, 304-10, and a third pair including images 304-4, 304-5.
[040] In order to perform the first blending operation, the first blending operation performing module 128 may convert the RGB color configuration associated with each image of each pair of images into LAB configuration. As will be appreciated, the LAB configuration defines a perceptual lightness (L) value, and an “a” value and a “b” value corresponding to associated colors. As such, at step 504A, RGB color configuration associated with images 304-2, 304-11 of the first pair of images may be converted into LAB configuration. Similarly, at step 504B, RGB color configuration associated with images 304-3, 304-10 of the second pair of images may be converted into LAB configuration. Further, at step 504C, RGB color configuration associated with images 304-4, 304-5 of the third pair of images may be converted into LAB configuration.
[041] Further, at steps 506A, 506B, and 506C, the first blending operation performing module 128 may perform a blending operation based on the L value associated with each image of each pair of images to determine a blended output (Lout) value. For example, the LAB configurations associated with the images of a pair of images may be L1A1B1 and L2A2B2. Therefore, the blending operation may be performed based on the L1 value and the L2 value associated with the images of the pair, to thereby determine the Lout value. For example, the LAB configuration associated with the image 304-2 is L2A2B2. As such, at step 506A, the blending operation may be performed on the L values associated with the images 304-2, 304-11 of the first pair of images. In other words, the blending operation may be performed on the L2 value and the L11 value associated with the images 304-2 and 304-11, respectively (of the first pair of images), to determine the Lout1 value. Similarly, at step 506B, the blending operation may be performed on the L3 value and the L10 value associated with the images 304-3 and 304-10, respectively (of the second pair of images), to determine the Lout2 value. Further, at step 506C, the blending operation may be performed on the L4 value and the L5 value associated with the images 304-4 and 304-5, respectively (of the third pair of images), to determine the Lout3 value.
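A minimal sketch of this per-pair Alpha blending of the L planes is given below, reusing OpenCV. Equal blending weights (alpha = 0.5) are assumed, as the disclosure does not specify the weighting, and the helper name blend_l_channels is illustrative.

```python
def blend_l_channels(img_a_bgr, img_b_bgr, alpha=0.5):
    """Steps 504/506: convert a pair of images to LAB and Alpha-blend their
    L planes to obtain the blended lightness Lout."""
    lab_a = cv2.cvtColor(img_a_bgr, cv2.COLOR_BGR2LAB)
    lab_b = cv2.cvtColor(img_b_bgr, cv2.COLOR_BGR2LAB)
    l_a, a_a, b_a = cv2.split(lab_a)
    l_b, a_b, b_b = cv2.split(lab_b)
    # Alpha blending of the two lightness planes.
    l_out = cv2.addWeighted(l_a, alpha, l_b, 1.0 - alpha, 0)
    return l_out, (l_a, a_a, b_a), (l_b, a_b, b_b)
```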
[042] Further, the first blending operation performing module 128 may determine a difference between the Lout value and the L value associated with each image of each pair of images, to determine difference values corresponding to the pair of images. As such, for the first pair of images, a difference between the Lout1 value and the L2 value may be determined, to thereby determine a first difference value Diff1 corresponding to the first pair of images. Similarly, a difference between the Lout1 value and the L11 value may be determined, to thereby determine a second difference value Diff2 corresponding to the first pair of images.
[043] Further, the first blending operation performing module 128 may select a lower difference value of the difference values corresponding to the pair of images. As such, a lower difference value among the first difference value Diff1 and the second difference value Diff2 may be selected, corresponding to the first pair of images.
[044] At steps 508A, 508B, and 508C, the first blending operation performing module 128 may further generate a third output image corresponding to each pair of images, using the “A” value and the “B” value associated with the image of each pair of images corresponding to the lower difference value. As will be understood, the third output image is in LAB configuration. Therefore, in the above example, a third output image corresponding to the first pair of images may be generated, using the “A” value and the “B” value associated with whichever of the images 304-2, 304-11 corresponds to the lower difference value. Assuming that the first difference value Diff1 is lower than the second difference value Diff2, the third output image corresponding to the first pair of images may be generated using the A2 value and the B2 value associated with the image 304-2. Therefore, the LAB configuration of the third output image corresponding to the first pair of images may be referred to as LAB(1).
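Following on from the sketch above, one possible reading of steps 506-508 is given below. The aggregation of the per-pixel differences into a single difference value per image (mean absolute difference) and the use of Lout as the L plane of the third output image are assumptions of this sketch, not statements of the disclosure.

```python
def third_output_for_pair(img_a_bgr, img_b_bgr):
    """Steps 506-508: build the pair's third output image (in LAB) from the
    blended Lout and the a/b planes of the image with the lower difference."""
    l_out, (l_a, a_a, b_a), (l_b, a_b, b_b) = blend_l_channels(img_a_bgr, img_b_bgr)
    # Assumed aggregation: mean absolute difference between Lout and each L plane.
    diff_a = np.mean(np.abs(l_out.astype(np.float32) - l_a.astype(np.float32)))
    diff_b = np.mean(np.abs(l_out.astype(np.float32) - l_b.astype(np.float32)))
    a_sel, b_sel = (a_a, b_a) if diff_a <= diff_b else (a_b, b_b)
    return cv2.merge([l_out, a_sel, b_sel])   # third output image, LAB configuration
```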
[045] Using the above methodology, the third output images corresponding to the second pair of images 304-3, 304-10 and the third pair of images 304-4, 304-5 may further be generated. As such, the LAB configuration of the third output image corresponding to the second pair of images may be referred to as LAB(2), and the LAB configuration of the third output image corresponding to the third pair of images may be referred to as LAB(3).
[046] Once the third output image corresponding to each pair of images is generated, at step 510, the first blending operation performing module 128 may further combine the plurality of third output images corresponding to the plurality of pairs of images. In some embodiments, the plurality of third output images corresponding to the plurality of pairs of images may be combined by applying a minimum model, to generate a fourth output image. The fourth output image may be in LAB configuration. Therefore, in the above example, the third output images corresponding to the first pair of images, the second pair of images, and the third pair of images may be combined by applying the minimum model, to generate the fourth output image.
[047] In some embodiments, applying the minimum model may include comparing a plurality of L values associated with the plurality of third output images and generating the fourth output image by selecting a minimum L value among the plurality of L values and the “A” value and the “B” value corresponding to the minimum L value. As such, in the above example, the L values of the LAB configurations LAB(1), LAB(2), LAB(3) associated with the third output images corresponding to the first pair, the second pair, and the third pair of images may be compared. Assuming that the L value of the LAB configuration LAB(1) associated with the third output image corresponding to the first pair of images is the minimum, the fourth output image may be generated by selecting the L value, the A value, and the B value of the LAB configuration LAB(1).
[048] At step 512, the first blending operation performing module 128 may further convert the LAB configuration associated with the fourth output image into RGB color configuration, to generate the blended image. Therefore, the LAB configuration LAB(1) associated with the fourth output image may be converted into RGB color configuration, to generate the blended image.
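A sketch of the minimum model of steps 510-512, building on the helpers above, is shown below. It applies the minimum of the L values per pixel, which is one possible reading of the description (the worked example above selects a whole image); the per-pixel reading is an assumption of this sketch.

```python
def minimum_model(third_outputs_lab):
    """Step 510: at each pixel, keep the minimum L value across the third
    output images, together with the a/b values of the image supplying it."""
    stack = np.stack(third_outputs_lab, axis=0)        # shape (N, H, W, 3), LAB
    idx = np.argmin(stack[..., 0], axis=0)             # index of minimum L per pixel
    rows, cols = np.indices(idx.shape)
    return stack[idx, rows, cols, :].astype(np.uint8)  # fourth output image (LAB)

def blended_image_from_pairs(pairs_bgr):
    """Steps 502-512: third output per pair, minimum model, LAB back to BGR."""
    thirds = [third_output_for_pair(a, b) for a, b in pairs_bgr]
    fourth_lab = minimum_model(thirds)
    return cv2.cvtColor(fourth_lab, cv2.COLOR_LAB2BGR)  # the blended image
```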
[049] Returning to FIG. 1B, the combining module 130 may combine the mask image (generated by the mask image generating module 126) and the blended image (generated by the first blending operation performing module 128) using a second blending operation. In response to the second blending operation, a reflection-corrected image is generated. In some embodiments, the second blending operation may be based on the Poisson blending technique.
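By way of a non-limiting illustration, the second blending operation may be approximated with OpenCV's Poisson-based seamlessClone, placing the blended image into the input image over the masked reflection region. Mapping the mask image onto the seamlessClone mask argument, the binarization threshold, and the choice of NORMAL_CLONE are assumptions of this sketch.

```python
def reflection_corrected(input_bgr, blended_bgr, mask_image):
    """Combine the mask image and the blended image with Poisson blending:
    the blended content is cloned into the input image over the mask region."""
    h, w = mask_image.shape[:2]
    # Binarize the (possibly blurred) mask before cloning; threshold assumed.
    _, bin_mask = cv2.threshold(mask_image, 127, 255, cv2.THRESH_BINARY)
    center = (w // 2, h // 2)
    return cv2.seamlessClone(blended_bgr, input_bgr, bin_mask, center, cv2.NORMAL_CLONE)
```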
[050] Referring now to FIG. 6, a flowchart of a method 600 of image processing for correcting a reflection region in an image is illustrated, in accordance with an embodiment of the present disclosure. By way of an example, the method 600 may be performed by the image processing device 104.
[051] At step 602, a plurality of images (for example, images 304-1 to 304-11) of a subject may be received. Each of the plurality of images may be obtained in an associated lighting condition of a plurality of lighting conditions. Further, each of the plurality of lighting conditions may be unique. At step 604, an input image may be selected from the plurality of images. It should be noted that the input image may be selected either manually or using an automation application. Further, the input image may be selected randomly. At step 606, a mask image may be generated using the input image. The mask image defines a reflection region associated with the input image. The step 606 of generating the mask image is further explained in conjunction with FIG. 7.
[052] Referring now to FIG. 7, a flowchart of a method 700 of generating the mask image is illustrated, in accordance with an embodiment of the present disclosure. By way of an example, the method 700 may be performed by the mask image generating module 126.
[053] At step 702, high saturated pixels associated with the input image may be identified to generate a first output image. The first output image defines the high saturated pixels. In some embodiments, the step 702 of identifying the high saturated pixels associated with the input image may further include steps 702A-702C. At step 702A, a luminance component and a color component associated with each pixel of the input image may be separated. In some embodiments, separating the luminance component and the color component may include converting (Red, Green, Blue) RGB color configuration associated with the input image into HSV color configuration. As will be appreciated, the HSV color configuration defines a saturation (S) plane and a brightness value (V) plane. At step 702B, for each pixel associated with the input image, a pixel value in the S plane may be compared with a first threshold value. Further, at step 702B, a pixel value in the V plane may be compared with a second threshold value. At step 702C, whether the pixel is associated with the reflection region may be determined, based on the comparison, to thereby generate the first output image.
[054] At step 704, high contrasted pixels associated with the input image may be identified to generate a second output image. The second output image defines the high contrasted pixels. In some embodiments, the step 704 of identifying the high contrasted pixels associated with the input image may further include step 704A. At step 704A, RGB color configuration associated with the input image may be converted into Greyscale color configuration to generate the second output image. At step 706, the first output image and the second output image may be combined to generate the mask image.
[055] Returning to FIG. 6, at step 608, a set of images may be selected from the plurality of images other than the input image. The set of images may include a predetermined number of images from the plurality of images. For example, the set of images may include six images – images 304-2, 304-11, 304-3, 304-10, 304-4, and 304-5. At step 610, a first blending operation may be performed on the set of images to generate a blended image. The step 610 of performing the first blending operation is further explained in conjunction with FIG. 8.
[056] Referring now to FIG. 8, a flowchart of a method 800 of performing the first blending operation is illustrated, in accordance with an embodiment of the present disclosure. By way of an example, the method 800 may be performed by the first blending operation performing module 128.
[057] At step 802, a set of pairs of images may be created from the set of images. For example, a set of three pairs of images may be created from the set of images – a first pair including images 304-2, 304-11, a second pair including images 304-3, 304-10, and a third pair including images 304-4, 304-5. At step 804, RGB color configuration associated with each image of each pair of images may be converted into LAB configuration. The LAB configuration defines a perceptual lightness (L) value, and an A value and a B value corresponding to associated colors. At step 806, a blending operation may be performed based on the L value associated with each image of each pair of images to determine a blended output (Lout) value.
[058] At step 808, a difference between the Lout value and the L value associated with each image of each pair of images may be determined, to further determine difference values corresponding to the pair of images. At step 810, a lower difference value of the difference values corresponding to the pair of images may be selected. At step 812, a third output image corresponding to each pair of images may be generated, using the A value and the B value associated with the image of each pair of images corresponding to the lower difference value. The third output image may be in LAB configuration.
[059] At step 814, a plurality of third output images corresponding to the plurality of pairs of images may be combined, by applying a minimum model, to generate a fourth output image. The fourth output image may be in LAB configuration. In some embodiments, applying the minimum model includes steps 814A-814B. At step 814A, a plurality of L values associated with the plurality of third output images may be compared. At step 814B, a fourth output image may be generated by selecting a minimum L value among the plurality of L values and the A value and the B value corresponding to the minimum L value. At step 816, the LAB configuration associated with the fourth output image may be converted into RGB color configuration, to generate the blended image.
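For completeness, a hypothetical end-to-end driver combining the sketches above is given below. The choice of the input image index and of the six-image set mirrors the example in FIG. 5 but is otherwise an assumption, and the helper names are those introduced in the earlier sketches rather than part of the disclosure.

```python
def correct_reflection(images_bgr):
    """End-to-end sketch of method 600: mask generation, pairwise Alpha
    blending with the minimum model, and a final Poisson blending step.
    images_bgr is assumed to hold the eleven images 304-1 to 304-11 in order."""
    input_img = images_bgr[0]                              # selected input image
    mask_image = generate_mask_image(input_img)            # steps 604-606
    # Set of six images (per the FIG. 5 example) grouped into three pairs.
    pairs = [(images_bgr[1], images_bgr[10]),              # 304-2 with 304-11
             (images_bgr[2], images_bgr[9]),               # 304-3 with 304-10
             (images_bgr[3], images_bgr[4])]               # 304-4 with 304-5
    blended = blended_image_from_pairs(pairs)              # step 610
    return reflection_corrected(input_img, blended, mask_image)  # second blending
```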
[060] One or more image processing techniques for correcting a reflection region in an image are disclosed. The one or more techniques provide a simple, effective, and efficient solution for removing reflection regions from microscopic images. The above techniques effectively regain the image information of saturated pixels of an input image to provide an effective result. Further, the techniques can be easily integrated with available microscope software. Moreover, with minimal algorithm fine-tuning, the techniques can be used with other industrial microscopes having similar hardware lighting settings.
[061] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.