Abstract: The present invention discloses a method and system for enabling spatially varying auto focusing of one or more objects using an image capturing system. The method comprises focusing one or more objects in a region of interest by one or more lenses, enabling spatially varying auto focusing of objects in the region of interest that are left out of focus by the one or more lenses using a spatial light modulator (SLM), and capturing the focused and the auto focused objects in the region of interest by a camera sensor. The image capturing system comprises one or more lenses for focusing objects in a region of interest, the spatial light modulator for enabling auto focusing of objects in the region of interest that are left out of focus by the one or more lenses, and the camera sensor for capturing the focused and the auto focused one or more objects in the region of interest. Figure 2A & 2B
RELATED APPLICATION
Benefit is claimed to Indian Provisional Application No. 1542/CHE/2015 titled “A METHOD OF SPATIAL FOCUS CONTROL USING ELECTRO-OPTICS FOR CAMERA LENS SYSTEMS” filed on 25 March 2015, which is herein incorporated in its entirety by reference for all purposes.
FIELD OF INVENTION
The present invention relates to the field of image capturing and more particularly to a method of enabling spatially varying auto focusing of one or more objects and an image capturing system thereof.
BACKGROUND OF THE INVENTION
Common drawbacks of fixed lens camera systems are image blurring and aberrations. Even for cameras with variable focus, the final step of focus adjustment typically involves mechanical motion of either the lens or the camera sensor. This causes delay and latency issues when imaging a fast-changing scene. Moreover, a single focus setting is applied to the entire scene, which causes defocusing of objects that are at different distances from the camera. When such systems are used in line scan cameras for wafer inspection, which require fast scanning and high magnification, the result is poor image quality, which can be detrimental to defect identification and hence to the quality and yield of wafers.
Some potential solutions for avoiding image blurring and aberrations include the use of light field or plenoptic cameras. Such cameras are able to record the light field information in a single shot. The recorded light field information is rendered based on the required focus and viewing direction. However, this requires a large amount of post-processing and a large working distance of approximately ~37 cm. For applications such as wafer inspection, the objective lens has high magnification and needs a working distance typically less than ~5 cm. Also, the post-processing hardware requirements cannot be met on portable devices such as smart phones or handheld cameras, where the user wants quick feedback on how the images or videos look after being captured. Such limitations make plenoptic cameras incompatible with wafer inspection systems or portable devices. Likewise, other potential solutions suffer from non-reconfigurability of the lens elements or are incapable of providing multiple focal lengths for any given setting of the lens elements.
Hence, there exists a need for a method and system for enabling spatially varying auto focusing of one or more objects without mechanical motion of camera lenses and camera sensors.
SUMMARY
A method and system for enabling spatially varying auto focusing of one or more objects using an image capturing system is disclosed.
According to one aspect of the present invention, the method of enabling spatially varying auto focusing of one or more objects using an image capturing system comprises focusing one or more objects in a region of interest by one or more lenses, enabling spatially varying auto focusing of objects in the region of interest that are left out of focus by the one or more lenses using a spatial light modulator (SLM), and capturing the focused and the auto focused objects in the region of interest by a camera sensor.
According to another aspect of the present invention, the image capturing system comprises one or more lenses for focusing objects in a region of interest, a spatial light modulator for enabling auto focusing of objects in the region of interest that are left out of focus by the one or more lenses, and a camera sensor for capturing the focused and the auto focused one or more objects in the region of interest.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The aforementioned aspects and other features of the present invention will be explained in the following description, taken in conjunction with the accompanying drawings, wherein:
Figure 1 illustrates a schematic representation of a 2 dimensional spatial light modulator (SLM) made of a material such as liquid crystals (LC) or Lithium Niobate (LiNbO3) and one or more segments of the SLM, according to prior art.
Figure 2A illustrates a schematic representation of an image capturing system, according to one embodiment of present invention.
Figure 2B is a flow diagram illustrating a method of enabling spatially varying auto focusing of one or more objects using an image capturing system, according to one embodiment of present invention.
Figure 3 illustrates a system for wafer inspection process using line scan camera according to prior art.
Figure 4A illustrates a system for wafer inspection process using line scan camera employing SLM, according to one embodiment of present invention.
Figure 4B illustrates a method for wafer inspection process using line scan camera employing SLM, according to one embodiment of present invention.
Figure 5A is a schematic representation of process for generating Z map of a die, according to one embodiment of present invention.
Figure 5B is a flow diagram illustrating the process of generating Z map of a die, according to one embodiment of present invention.
Figure 6A is an exemplary set up for performing calibration of SLM, according to one embodiment of present invention.
Figure 6B depicts a graph showing variation of SLM element phase-change in X direction for different values of the focal length of the SLM lens according to an exemplary embodiment of the present invention.
Figure 7A is a schematic representation of wafer inspection process of a die having various layers with different heights, according to one embodiment of present invention.
Figure 7B illustrates different regions of SLM corresponding to the die having various layers with different heights, according to one embodiment of present invention.
Figure 8 depicts a die image captured using the image capturing system according to one embodiment of present invention.
Figure 9A depicts a die image captured using the image capturing system, where the speed of SLM is matched with the camera sensor, according to one embodiment of present invention.
Figure 9B depicts a die image captured using the image capturing system, where the speed of SLM is 1/10th of the camera sensor, according to one embodiment of present invention.
Figure 10 illustrates an image capturing system with phase detection for auto focusing, according to prior art.
Figure 11A illustrates an image capturing system for auto focusing according to one embodiment of present invention.
Figure 11B illustrates a method of enabling auto focusing of one or more objects in the region of interest using the spatial light modulators (SLM) for motionless auto focus by a phase detection assembly, according to one embodiment of present invention.
Figure 12 illustrates an image capturing system for aberration correction, according to one embodiment of present invention.
Figure 13 is a flow diagram illustrating a method of enabling auto focusing of one or more objects in the region of interest using at least two spatial light modulators (SLM) for removing aberration, according to one embodiment of present invention.
Figure 14 is a flow diagram illustrating a method of capturing an aberration corrected image at real time, according to one embodiment of present invention.
Figure 15 is a flow diagram illustrating a method of enabling auto focusing of one or more objects in the region of interest using one or more spatial light modulators (SLM), for spatially auto focusing foveation spot, according to one embodiment of present invention.
DETAILED DESCRIPTION OF THE INVENTION
The embodiments of the present invention will now be described in detail with reference to the accompanying drawings. However, the present invention is not limited to the embodiments. The present invention can be modified in various forms. Thus, the embodiments of the present invention are only provided to explain more clearly the present invention to the ordinarily skilled in the art of the present invention. In the accompanying drawings, like reference numerals are used to indicate like components.
The specification may refer to “an”, “one” or “some” embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention focuses on an SLM based solution to all-in-focus image capture. The method disclosed in the present invention can be implemented in wafer inspection with a line scan camera during semiconductor manufacturing. The leading edge of semiconductor manufacturing is already at a node size of 14 nm; even defects that span just one pixel at the highest magnification of the optical system can be detrimental to device performance. Thus, obtaining focused images becomes an important aspect of improving defect detection and hence the quality and yield of the wafers. Likewise, the present invention also enables a fast and motionless auto focus mechanism, which can be integrated with any adjustable focus camera, as well as methods to obtain shift variant aberration correction and foveation for live video feed. One embodiment of present invention implements the phase modulation of a transmissive spatial light modulator (SLM) to create segmented lenses in pre-determined regions within the SLM to obtain all-in-focus high resolution images. Moreover, various embodiments of present invention provide correction of shift variant aberration and real time foveation in live video feed.
Figure 1 illustrates a schematic representation of a 2 dimensional spatial light modulator (SLM). The present invention incorporates the Spatial Light Modulator (SLM) 101 in the image capturing system. The SLM is currently commonly used in various forms in LCD display panels and projectors. The SLM 101 as shown in Figure 1 consists of a 2D array of individually addressable segments (102A to 102N). Each array segment has the unique capability of modulating the amplitude, phase or polarization of incident light based on a control signal applied to each of the segments. The control signal can be an applied voltage, a current or an optical pulse. The control signals are applied using electrodes. The electrodes are made of a transparent conductive material such as Indium Tin Oxide in the case of a transmissive SLM. The electrodes are indicated by reference numerals 103A to 103N. SLMs used for image capturing purposes are based on materials such as liquid crystals (LCs), Lithium Niobate (LiNbO3) etc. Alternatively, the SLM can consist of elements made of micro-mirrors. SLMs have three main modulation formats: amplitude, phase and polarization. For each of these formats, two types of SLMs are available: reflective and transmissive. An exemplary embodiment of present invention uses the phase modulation property of a transmissive SLM to enable all-in-focus and auto focused high resolution image generation. One exemplary embodiment of present invention locates the SLM in a unique configuration to transform the SLM into a multi-lens structure with electrically/optically tunable focal lengths.
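The segment structure described above can be sketched as a toy data model. The class name and methods below are illustrative assumptions for clarity only, not part of the disclosure:

```python
import math

class SLM:
    """Toy model of a transmissive SLM: a 2D grid of individually
    addressable segments, each holding a phase modulation value."""

    def __init__(self, rows, cols):
        # All segments start with zero phase-change.
        self.phase = [[0.0] * cols for _ in range(rows)]

    def apply_control(self, row, col, phase_shift):
        # A control signal (applied voltage, current or optical pulse)
        # sets the phase modulation of one segment; values are kept in
        # [0, 2*pi) since physical phase is defined modulo 2*pi.
        self.phase[row][col] = phase_shift % (2 * math.pi)

slm = SLM(4, 4)
slm.apply_control(1, 2, 3 * math.pi)   # wraps to pi
```

Each segment being individually addressable is what later allows the SLM to be split into regions acting as independent lenses.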
SLM arrays with 1-million-pixel resolution are available. As the SLM is electrically/optically controlled, the phase of the SLM can potentially be changed at a rate of 1.4 kHz. This rate of phase change is high compared to the capturing speed of most general purpose cameras. However, it is not sufficient for high accuracy image capturing devices such as line scan cameras, which can capture at even higher speeds. Hence, one embodiment of present invention provides a method of all-in-focus image capture during wafer inspection with a line scan camera. Another embodiment of present invention provides a fast and motionless auto focus mechanism, correction of shift variant aberration and real time foveation in live video feed using the SLM.
The refractive index of the birefringent elements of the SLM can be modified by application of an electric/optic control signal. For instance, consider that the control signal is an applied voltage. The relation between phase-change and refractive index change is then given by:
Δφ = 2πH·Δn(E)/λ ------------------- (1)
where H is the thickness of the SLM element, λ is the wavelength of light and Δn(E) is the refractive index change based on the applied electric field E. In the case of a quasi-monochromatic beam of light, λ is substituted by the average wavelength, denoted λ_avg.
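As a quick numerical check, assuming the standard thin-element relation Δφ = 2πH·Δn(E)/λ for equation (1) (the function name and the sample values below are illustrative assumptions):

```python
import math

def phase_change(thickness_m, delta_n, wavelength_m):
    # Equation (1): delta_phi = 2*pi*H*delta_n(E) / lambda,
    # with H the element thickness and lambda the wavelength (metres).
    return 2 * math.pi * thickness_m * delta_n / wavelength_m

# Example: a 10-um-thick element, index change 0.01, 550 nm light
# (lambda_avg for full visible-spectrum illumination).
dphi = phase_change(10e-6, 0.01, 550e-9)   # roughly 1.14 rad
```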
Figure 2A illustrates a schematic representation of an image capturing system, according to one embodiment of present invention. The image capturing system according to an exemplary embodiment of present invention includes one or more lenses 201, one or more spatial light modulators (SLM) 202 and a camera sensor 203. The one or more lenses 201 are used to focus one or more objects in a region of interest. The spatial light modulator (SLM) 202 enables auto focusing of one or more objects in the region of interest which are left out of focus by the one or more lenses. The principle of working and the construction of the SLM are explained in detail in Figure 1. The camera sensor 203 captures an image of the focused and the auto focused one or more objects in the region of interest. The camera sensors implemented in the image capturing system according to one embodiment of present invention are charge coupled device (CCD) arrays. The region of interest is the field of view of the image capturing system at the time of capture.
In one embodiment, the SLM 202 comprises a plurality of arrays of individually addressable segments which change the properties of incident light. The addressable segments are made of at least one of movable elements and stationary elements. The focal length of each of the addressable segments of the SLM is optimized by applying a dynamically varying control signal to each segment of the SLM.
In one embodiment, the image capturing system further comprises a phase detection assembly for maximizing the overlap of one or more partial images generated by the lens and the aperture mask of the image processing system, for enabling auto focusing. The phase detection assembly comprises a focus change feedback module and an image processing module. The focus change feedback module provides feedback from the camera sensor to optimize the focus of the region of interest, in order to encode the SLM with phase settings that convert the SLM into a lens having the focal length necessary to focus on the region of interest. The image processing module is connected with the camera sensor and the focus change feedback module for processing a captured image.
Figure 2B is a flow diagram illustrating a method of enabling spatially varying auto focusing of one or more objects using an image capturing system, according to one embodiment of present invention. The method of enabling spatially varying auto focusing of one or more objects using an image capturing system provides an all-in-focus, auto focused high resolution image. In order to enable spatially varying auto focusing of one or more objects using an image capturing system, at step 205, the one or more objects in a region of interest are focused by one or more lenses in the image capturing system. However, there may be multiple objects which are not at the preferred focal distance from the lens. Images of those objects are inevitably blurry since the lens cannot focus on all of them simultaneously. This reduces the quality of the images. Hence, at step 206, the spatial light modulator (SLM) enables spatially varying auto focusing of the one or more objects in the region of interest which are left out of focus by the one or more lenses. This enables an all-in-focus effect. The focal length of each segment of the SLM is changed by applying a control signal at each segment of the SLM. Further, at step 207, the focused and the auto focused objects in the region of interest are captured as an image by a camera sensor.
Figure 3 illustrates a system for wafer inspection process using line scan camera according to prior art. The line scan camera is typically used for wafer inspection due to its superior sensitivity at high speed acquisition. The major elements of the line scan camera are a lens system 302 and camera sensors such as a CCD array. A view under the line scan camera lens consists of a number of features such as mesas, trenches, gratings etc. that are located at different Z heights with respect to some reference layer in the die 301. Figure 3 represents such a scenario where a mesa is under the lens. The distance between the lens and the CCD is such that the field is in focus. It is apparent from Figure 3 that the top of the mesa 301 at distance ur - Z1 from the lens is out of focus while the field layer at distance ur - Z0 is in focus. Since a lens can have only one object plane in complete focus at any given time, the top of the mesa is out of focus and thereby can appear blurry. This situation is depicted by rays from the field converging at the surface of the CCD and those from the mesa converging behind the CCD. The images generated using such systems are inevitably blurry since the lens cannot focus at all the Z heights simultaneously. This reduces the quality of the images and thereby degrades the accuracy of the defect inspection process that uses these images. In an exemplary embodiment, the magnification of the system shown in Figure 3 is around 1X; in reality it can be ~30-100X.
Figure 4A illustrates a system for wafer inspection process using line scan camera employing SLM, according to one embodiment of present invention.
The line scan camera according to one embodiment of present invention comprises a lens system 402, a camera sensor and an SLM. In an exemplary embodiment of present invention, the SLM 403 is located close to the camera sensor, at a distance of a few hundred microns. The camera sensor 404 implemented in an exemplary embodiment of present invention is a CCD array. The exact separation is obtained implicitly during the calibration procedure of the SLM, which is discussed in Figure 5A and Figure 5B. The line scan camera further comprises a light polarizer, not shown in Figure 4A, that filters out any light that is not linearly polarized along the extraordinary axis of the SLM. The polarizer can be independently placed before the first element of the camera lens and is not an essential part of the line scan camera. For instance, consider that a section of a wafer 401 with a mesa and the surrounding field is under the lens. The SLM is calibrated to focus each portion of the wafer 401. This enables both the mesa and the flat portion of the wafer to be in focus. The rays from the mesa as well as from the flat surface of the wafer converge at the surface of the CCD arrays.
The Figure 4A illustrates the following distances and dimensions:
L=distance between CCD and SLM,
d=distance between SLM and final element of lens system,
ur + Z0=distance of flat from the first element of lens system,
W=diameter of the camera lens, and
G= length of CCD and SLM.
In order to provide higher accuracy in the defect inspection process of a wafer, the Z map of the die to be scanned is obtained. Further, the SLM which is implemented in the line scan camera is calibrated corresponding to the Z map of the die. During scanning of the die, the control signal to each segment of the SLM varies according to the portion of the die under the view of the lens of the line scan camera.
Figure 4B illustrates a method for wafer inspection process using line scan camera employing SLM, according to one embodiment of present invention. In order to initiate the wafer inspection process, at step 405, the Z map of a die of the wafer to be inspected is obtained. The method of obtaining the Z map is explained in detail in Figures 5A and 5B. Once the Z map is obtained, the SLM is calibrated for the control signal. The control signal may be an electrical or optical signal, as indicated in step 406. The refractive index of the SLM changes based on the control signal, which results in a focal length change. Hence, a different control signal is applied to each segment of the SLM, as explained in Figure 1. Therefore, each segment of the SLM possesses a different focal length, which enables the rays from the die to be converged toward the CCD array using the SLM. The calibration procedure of the SLM is described in detail in Figure 6A and Figure 6B. Further, at step 407, the control signal is varied dynamically corresponding to each segment of the SLM for providing different focal lengths, using values from the calibration and the Z map of the die below the image capturing system in real time. This enables an all-in-focus effect on the die in real time. Hence, the accuracy of the scan increases.
Figure 5A is a schematic representation of the process for obtaining the Z map of a die, according to one embodiment of present invention. In order to calibrate the SLM for dynamically changing the focal length, the height of any (X, Y) location on the die with respect to a reference layer needs to be calculated. This is referred to as the Z map of the die. To obtain the Z map, the thickness of all the layers that overlap at any (X, Y) location at a given inspection step in the process flow is computed. The information on the layers is obtained from the design layout, as shown at 501. Likewise, the information on the thickness is obtained from process knowledge. Moreover, cross-sectioning the die at selected locations also provides information on the thickness of the die, as indicated at 502. The Z map is obtained from 501 and 502. Further, each layer and the corresponding Z heights are tabulated as shown in 503. In the illustrated example, the die is composed of four main layers with respective Z heights Z0, Z1, Z2 and Z3, while a fifth layer, formed by the overlap of the second and fourth layers, has a height of Z1 + Z3, as indicated at 506.
Upon obtaining this information, the Z map is quantized by ΔZc, which corresponds to the Z difference that results in an acceptable circle of confusion. For any lens system, a circle of confusion is formed for objects that are not at the perfect distance as determined by the lens formula. The range of distances around an object that results in an acceptable circle of confusion is called the depth of field (ΔZc) and is given as follows:
ΔZc = 2c(L + d)/(W·M²) ------------------ (2)
where c is the diameter of circle of confusion;
W is the lens diameter;
M is the lens magnification;
L=distance between CCD and SLM; and
d=distance between SLM and final element of lens system
Further, the different layers of the die are clustered based on their Z heights: all layers whose Z heights are separated by less than ΔZc belong to the same cluster, where ΔZc is the depth of focus as determined by the acceptable circle of confusion. The Z value for a cluster is the average of the Z heights of the layers belonging to that cluster. Such a clustering mechanism is termed quantization, which reduces the cost of computation without decreasing image sharpness.
The said relation is depicted in the following equation, where i and j indicate different Z clusters:
|Z_i − Z_j| ≥ ΔZc for i ≠ j --------------- (3)
For example, the first and second layers are clustered together with an average height of (Z1 +Z2)/2. Typically, the minimum radius of circle of confusion is limited by the resolution of the lens and camera sensor system. Also, it is not necessary to obtain Z map of entire die; it can be limited to only the regions of interest which are deemed critical to functioning of the die at that inspection step.
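The quantization step above can be sketched as a simple greedy grouping of sorted layer heights. The function name and the sample heights are hypothetical; the depth-of-field threshold is passed in as a parameter rather than computed from equation (2):

```python
def cluster_layers(z_heights, delta_zc):
    """Greedy 1-D clustering: a layer joins the current cluster if its
    Z height is within delta_zc of the cluster's first (lowest) member;
    otherwise it starts a new cluster.  Returns the representative Z
    (mean height) of each cluster, as in the quantization step."""
    clusters = []
    for z in sorted(z_heights):
        if clusters and z - clusters[-1][0] < delta_zc:
            clusters[-1].append(z)
        else:
            clusters.append([z])
    return [sum(c) / len(c) for c in clusters]

# Hypothetical layers at 0, 0.4, 5 and 5.2 um with a 1-um depth of
# field: the first two and the last two collapse into one cluster each.
reps = cluster_layers([0.0, 0.4, 5.0, 5.2], 1.0)
```

Here `reps` holds one representative Z per cluster, which is what the SLM calibration is later indexed by.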
Figure 5B is a flow diagram illustrating the process of generating the Z map of a die, according to one embodiment of present invention. In order to calibrate the SLM according to the die, at step 507 the cross section, design layout and deposition thickness of the die to be inspected are analyzed to obtain the Z height map of the die. Once the Z height map is obtained, at step 508 each Z height of the die is quantized by making a set of clusters based on the pre-defined acceptable circle of confusion. This is termed layer clustering.
Figure 6A is an exemplary setup for performing calibration of the SLM, according to one embodiment of present invention. Once the Z map is obtained after layer clustering, the SLM is calibrated. The calibration of the SLM is performed by moving a calibration die under the lens. The exemplary setup for calibration indicated in Figure 6A includes a die 601, a camera lens 602, the SLM 603 to be calibrated and the CCD 604. A computer 605 is coupled with the CCD 604, while a calibration setting module 606 and an electronic controller 607 are coupled with the computer and the SLM 603.
The die 601 consists of layers which are separated by the same Z values as the clusters in the Z map. The die used for calibration can be a dedicated calibration die or a die from the wafer which needs to be inspected. Typically, the calibration needs to be run only once for a given layout or process. The calibration is a part of the semiconductor manufacturing process. The camera lens 602 is first focused on the reference layer, which is at distance ur from the lens 602. For example, consider that the 0th layer is the reference layer of the die, i.e. Z0 = 0. When the next layer, the Z1 layer, is brought under the lens, it appears blurry. The phase-change of each SLM segment is then varied by application of a control signal according to the following formula until an image with maximum sharpness is obtained on the CCD 604.
Δφ(x, y, Z1) ≈ K − π(x² + y²)/(λ_avg · f_Z1) --------------------- (4)
where λ_avg is the average wavelength of illumination, which is around 550 nm for full visible spectrum illumination, and f_Z1 is the focal length of the SLM lens such that the layer at Z1 is focused on the CCD.
Likewise, x and y correspond to the horizontal and vertical index, respectively, of the SLM element from its center. By substituting for Δφ from equation 4 into equation 1, we get the relation between the refractive index change required for any SLM element as a function of the corresponding x and y index as follows:
Δn(x, y) ≈ K·λ/(2πH) − (x² + y²)/(2H · f_Z1) -------------- (5)
The refractive index change is independent of the wavelength λ in the ideal case. Moreover, the variation of phase-change with distance from the center depicted in equation 4 is similar to that of a thin lens. Thus, the setup for calibration of the SLM is effectively converted to a compound lens. The focal length of the second lens (the SLM lens) is related to the object and image distances as follows:
1/f_Z1 = 1/v + 1/(d − f1·u/(u − f1)) ---------------------- (6)
Where f1 is the focal length of the camera lens;
d is the distance between the camera lens and the SLM;
u is the distance of the object in front of the camera lens, which is given by (ur - Z1) for the Z1 layer; and
v is the distance of the image behind the SLM which in this case is same as the distance between the SLM and the CCD (L).
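Since equation (6) itself is elided in the text, the sketch below uses a standard two-thin-lens construction under the variables defined above: the camera lens forms an intermediate image, and the SLM-lens focal length is chosen so the final image lands on the CCD at distance L behind the SLM. This is an assumed paraxial reconstruction, not the patent's exact formula:

```python
import math

def slm_focal_length(f1, d, u, L):
    """Focal length the SLM lens must emulate so that an object at
    distance u in front of a camera lens of focal length f1 is imaged
    on the CCD, a distance L behind the SLM, with the SLM a distance d
    behind the lens (thin-lens sketch, real-is-positive convention)."""
    v1 = f1 * u / (u - f1)              # intermediate image of camera lens
    inv_f2 = 1.0 / L + 1.0 / (d - v1)   # (d - v1) < 0: virtual object
    return math.inf if inv_f2 == 0.0 else 1.0 / inv_f2

# Example with plausible bench values (assumed, not from the patent):
# f1 = 10 mm, SLM 5 mm behind the lens, object 50 mm away, CCD 1 mm
# behind the SLM.
f2 = slm_focal_length(0.01, 0.005, 0.05, 0.001)
```

A sanity check on the convention: when the camera lens alone already focuses on the CCD (v1 = d + L), the two terms cancel and the required SLM power vanishes, i.e. the SLM stays flat.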
According to one exemplary embodiment of present invention, imperfections in the camera lens, such as local variations in the radius of curvature or an offset between the optical axes of the segments of the SLM, account for the approximation sign in front of the constant K in equation 4. Such variations are resolved in the calibration procedure through fine tuning of the optimized phase-change values. The expected phase-change values typically vary as the square of the distance of the SLM element from the center. The enlarged phase-change pattern corresponding to the phase-change values is shown at the top (603). The phase-change values are stored in memory and indexed with the corresponding Z value as well as the location of each segment of the SLM in the corresponding 2D matrix.
Figure 6B depicts a graph showing the variation of the SLM element phase-change in the X direction for different values of the focal length of the SLM lens, according to an exemplary embodiment of the present invention. In this exemplary embodiment, the graph shows a one-dimensional view of the calculated phase-change values of the SLM elements for two different focal lengths (800 microns in blue and 1200 microns in red). In this case, a 1-million-pixel SLM is considered, which puts the center of the lens at the 500th element. The sharp discontinuities shown in the graph correspond to a phase wrapping technique; the discontinuities occur at different elements depending on the focal length.
In one embodiment, the phase-change Δφ(x, y, Z1) is greater than the tuning limit of the SLM. In another embodiment of present invention, the technique of phase wrapping is used, whereby the elements of the SLM are coded with [Δφ(x, y, Z1) mod 2π] to get an equivalent effect. The above procedure is then repeated for all the layers of interest on the die. The image on the CCD is analyzed by the computer for clarity. The calibration setting module 606 and the electronic controller 607 act as a feedback system: based on the image captured on the CCD, the calibration setting module 606 regulates the control signal provided by the electronic controller 607.
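The thin-lens phase profile of equation (4) combined with the phase-wrapping step can be sketched as follows. The function name is illustrative, and K defaults to zero here, which is an assumption:

```python
import math

def wrapped_lens_phase(x, y, f, wavelength, K=0.0):
    # Equation (4)-style thin-lens phase at offsets (x, y) from the
    # SLM centre, then wrapped into [0, 2*pi) as in phase wrapping:
    # the element is coded with [delta_phi mod 2*pi].
    delta_phi = K - math.pi * (x * x + y * y) / (wavelength * f)
    return delta_phi % (2 * math.pi)

# Centre element needs no phase; off-centre elements wrap around,
# producing the sharp discontinuities seen in Figure 6B.
centre = wrapped_lens_phase(0.0, 0.0, 800e-6, 550e-9)   # 0.0
edge = wrapped_lens_phase(100e-6, 0.0, 800e-6, 550e-9)
```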
Figure 7A is a schematic representation of wafer inspection process of a die having various layers with different heights, according to one embodiment of present invention. According to one embodiment of present invention, the SLM is dynamically segmented based on the control signal. A control signal provides a phase-change for the incident light at each segment of the SLM corresponding to the Z height of the layer under the view of the lens of the line scan camera. The knowledge of the region underneath the lens is obtained from the starting point of the stage relative to the edge of the first die and from the stage speed. The starting point of the stage is determined from where the first die is located relative to the point from which the stage starts its motion. The stage speed is the distance the stage moves in unit time.
The phase-change values are stored in the memory during the calibration procedure described in Figure 6A. In another embodiment of present invention, the SLM is considered as a union of different segments with phase codings such as Δφ(x, y, Z1) and Δφ(x, y, Z2). The wafer (die) moves from right to left with the speed of the inspection stage. The CCD 404, SLM 403 and lens elements are held stationary. During the said motion, different regions come under the SLM, requiring dynamic changes in the phase-change settings. From the knowledge of the stage speed and the die layout, these phase-change settings can be recalled from memory and applied appropriately. The focal length of each segment of the SLM changes with the phase change. Hence, each portion of the die having different Z heights is focused on the CCD using the SLM 403.
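Recalling the stored phase settings from the stage motion can be sketched as a lookup keyed on the scan coordinate. The names and the die layout below are hypothetical:

```python
def region_under_lens(stage_start, stage_speed, t, die_regions):
    """Return the Z-cluster id of the die region under the lens at
    time t.  stage_start is the stage's starting scan coordinate and
    stage_speed the distance moved per unit time; die_regions maps
    (start, end) intervals along the scan direction to cluster ids."""
    x = stage_start + stage_speed * t    # current scan coordinate
    for (start, end), z_cluster in die_regions.items():
        if start <= x < end:
            return z_cluster
    return None

# Hypothetical die layout: field (Z0) for 10 units, mesa (Z2) for 4.
regions = {(0.0, 10.0): "Z0", (10.0, 14.0): "Z2"}
```

The returned cluster id would then index the 2D matrix of calibrated phase-change values to be written to the SLM segments.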
Figure 7B illustrates different regions of the SLM corresponding to the die having various layers with different heights, according to one embodiment of the present invention. In one exemplary embodiment of the present invention, the control signal is dynamically varied for each segment of the SLM to provide different focal lengths, using values from calibration and the Z map of the die below the imaging section in real time. Different regions of the SLM corresponding to the position of the die are shown in figure 7A. The SLM is segmented into two regions: one corresponding to Z0 (403a) and another to Z2 (403b).
From the figure, it is clear that the region corresponding to Z0 (403a) does not have any phase-change, since the lens of the camera is focused on Z0. However, the region corresponding to Z2 (403b) acts as a lens with focal length f_Z2 to bring that region into focus. The said region is truncated to limit its applicability to the Z2 section of the die. The whole SLM, as shown in 403c, is a combination of the 403a and 403b regions.
Figure 8 depicts a die image captured using the image capturing system according to one exemplary embodiment of the present invention. The figure also illustrates a cross-sectional view of the die. According to figure 8, region A is the bottom layer, region C is 1 µm above region A and region B is 10 µm above region A. A few defects (white lines crossing regions B and C) shown in the image are the defects in the die inspected by the line scan camera according to one embodiment of the present invention. The horizontal cross section of the die through line XY is shown in 801. The Z height variation of regions A, B and C corresponding to the horizontal cross section in the XY plane is indicated in 802. The Z height variation in the vertical cross section is shown in 803 and 804.
Figure 9A depicts a die image captured using the image capturing system, where the speed of the SLM is matched with the camera sensor, according to one exemplary embodiment of the present invention. In this exemplary embodiment, the capture speed of the camera sensor is equal to the rate of change of the control signal at each segment of the SLM, i.e., the CCD is configured with a speed matched to the SLM. Figure 9AA shows the entire image, whereas Figure 9AB shows an enlarged area around the defects. It is clear from the figure that, due to the ability of the SLM to focus at multiple depths, both the B and C regions appear sharp. This enables an "all-in-focus" high resolution scanning of the wafer.
Figure 9B depicts a die image captured using the image capturing system, where the speed of the SLM is 1/10th that of the camera sensor, according to another exemplary embodiment of the present invention. In this exemplary embodiment, the SLM is adapted to run at a speed 1/10th that of the CCD. Figure 9BA shows the entire image, while Figure 9BB shows an enlarged area around the defects. Due to the ability of the SLM to focus regions at multiple depths, both the B and C regions appear sharp. As a result, the majority of the defects are clearly visible.
Figure 10 illustrates an image capturing system with phase detection (PD) for auto focusing, according to the prior art. The image capturing system with the phase detection mechanism of auto focusing is prevalent in most high-end smart phones. In such systems, the rays from the top (solid) and bottom (dashed) of the camera lens 1002 are separated, typically by an aperture mask 1003 or a beam splitter, and captured at the CCD 1004 as two partial images 1005. The captured partial images are analyzed for overlap. The overlap is a function of the distance between the two partial images 1005. A feedback signal is sent to the camera lens motion mechanism to maximize the overlap. The camera lens motion mechanism comprises an image processing module 1006 and a lens motion feedback module 1007. The image processing module 1006 processes the partial images 1005 to identify the overlap between them. The lens motion feedback module 1007 identifies the physical motion required for the camera lens to generate a high resolution image. The method according to the prior art thus involves motion of the camera lens 1002 to obtain the best focus of the object 1001, which leads to latency and delay.
Figure 11A illustrates an image capturing system for auto focusing according to one embodiment of the present invention. In one embodiment of the present invention, the image capturing system with the PD assembly for auto focusing comprises a camera lens 1102 for focusing an object 1101, an aperture mask 1103, an SLM 1104, a camera sensor 1105, an image processing module 1107 coupled with the camera sensor 1105 and a focus change feedback module 1108. According to the present invention, the PD assembly gives feedback to the SLM on the change in focus required to obtain maximum overlap. Accordingly, a control signal is applied to the SLM to change its focal length, converting the SLM into a lens of the required focal length. Since the relative positions of the lens, SLM and CCD do not change, only two values of SLM focal length are possible for a given s:
------------------- (7)
where f_SLM is the SLM focal length, W is the width of the lens, L is the distance between the SLM and the CCD, d is the distance between the lens and the SLM, and s is the separation between the two partial images prior to optimization of focus. The partial images are generated when no phase-change is applied to the SLM elements. The focal length of the SLM is modified based on the overlap value between the partial images. Since f_SLM and s bear a fixed relation, the two values can be stored as a look-up table to further increase the auto focus speed. The image processing module 1107 processes the partial images and identifies the modification required. The focus change feedback module 1108 applies the control signal to the SLM corresponding to the required focal length change. No mechanical motion is required to auto focus the objects in the region of interest according to the present invention.
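The look-up table mentioned above can be sketched as follows. Since the closed-form relation of equation (7) is omitted in this text, the table entries here are purely placeholder calibration values; only the lookup mechanism itself is illustrated.

```python
# Minimal sketch: a look-up table from the measured separation s between
# the two partial images to the two admissible SLM focal lengths.
# The (s, f_SLM) pairs below are assumed placeholder calibration data.
import bisect

s_values = [2, 4, 8, 16]                                # s in pixels
f_slm_pairs = [(250.0, -260.0), (130.0, -140.0),
               (70.0, -80.0), (40.0, -50.0)]            # f_SLM in mm

def lookup_f_slm(s):
    """Return the pair of SLM focal lengths for the nearest tabulated s."""
    i = bisect.bisect_left(s_values, s)
    if i == len(s_values):
        i -= 1                                          # clamp above range
    elif i > 0 and s - s_values[i - 1] < s_values[i] - s:
        i -= 1                                          # nearer lower entry
    return f_slm_pairs[i]

assert lookup_f_slm(4) == (130.0, -140.0)
assert lookup_f_slm(15) == (40.0, -50.0)
```

Retrieving the pair from memory avoids recomputing the relation per frame, which is the speed benefit the text describes.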
Further, knowledge of the focal length of the compound lens system, f_lens, gives the object distance as follows:
------------- (8)
where the sign convention is opposite to that of equation 7: if the positive sign is used to calculate f_SLM, then the negative sign is to be used in equation 8, and vice versa. Due to the high speed of the SLM, a focus and object distance matrix can be built across the field of view, which directly aids the creation of a 3D model of the scene. Although the above description is for the PD configuration of auto focusing, the solution can easily be extended to other active and passive methods of auto focus. For instance, in the case of the contrast detection method, the focal length of the SLM is scanned over a pre-determined range to find the value that maximizes the contrast. For active methods that directly give the object distance, such as infrared (IR) or ultrasound sensors, equation 6 is used to determine the SLM focal length. In one embodiment of the present invention, the focal length of the SLM can be modified within a limited time. Due to the high speed of the SLM, the limiting factor for auto focus in the image capturing system according to the present invention is image processing rather than focal length adjustment. This opens up new areas of application in the field of high speed object tracking. In ordinary cameras, the capture speed of a normal CCD (~60 Hz) is much slower than that of a line scan CCD, and much less than the refresh rates of SLMs (commercial ~200 Hz, research ~1.4 kHz). In the present embodiment, the SLM functions as a single lens without segmentation.
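The contrast-detection variant mentioned above can be sketched as below. This is a hedged illustration: `capture` is a hypothetical stand-in for grabbing a CCD frame at a given SLM focal length, the scan range is arbitrary, and variance of pixel intensities is used as a simple contrast metric.

```python
import numpy as np

def contrast(image):
    """Simple contrast metric: variance of pixel intensities."""
    return float(np.var(image))

def autofocus_by_contrast(capture, f_min, f_max, steps=20):
    """Scan f_SLM over [f_min, f_max] and return the focal length that
    maximizes image contrast (no mechanical motion involved)."""
    candidates = np.linspace(f_min, f_max, steps)
    scores = [contrast(capture(f)) for f in candidates]
    return float(candidates[int(np.argmax(scores))])

# Toy capture model: contrast peaks when f_SLM is near 100 (arbitrary units).
def fake_capture(f):
    sharpness = np.exp(-((f - 100.0) ** 2) / 50.0)
    x = np.linspace(0, 2 * np.pi, 64)
    return sharpness * np.sin(8 * x)      # higher amplitude = more contrast

best = autofocus_by_contrast(fake_capture, 80.0, 120.0, steps=41)
assert abs(best - 100.0) < 1.1
```

Because only the SLM control signal changes between scan steps, the scan rate is bounded by the SLM refresh rate and the image processing, consistent with the text.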
Figure 11B illustrates a method of enabling auto focusing of one or more objects in the region of interest using the spatial light modulator (SLM) for motionless auto focus by a phase detection assembly, according to one embodiment of the present invention. At step 1110, the SLM is converted into a lens by adjusting the phase of the incident light beam passing through the SLM. The focal length of this lens is then modified dynamically based on the distance of the one or more objects in the ROI, as indicated in step 1111. Further, at least two partial images of the region of interest (ROI) are generated using the lenses and an aperture mask of the image capturing system, as shown in step 1112. At step 1113, a feedback signal is sent to the SLM using a focus change feedback module of the image capturing system to maximize the overlap of the partial images, thereby enabling auto focusing. The feedback signal is generated based on knowledge of the distance of the one or more objects from the lens.
Figure 12 illustrates an image capturing system for aberration correction, according to one embodiment of the present invention. In an exemplary embodiment, at least two SLMs are implemented in the image capturing system in order to enable aberration correction. The first SLM 1202 is placed close to the camera lens 1201 and the second SLM 1203 is located near the CCD 1204. The second SLM 1203 near the CCD 1204 is used for the multi focus and auto focus mechanisms. The first SLM 1202 near the lens is coded with appropriate Zernike polynomials to correct aberrations. The focus of the second SLM 1203 is adjusted to get the best image of object 1 (O1) on the CCD while maintaining the first SLM 1202 at zero phase-change. Further, the phase of the first SLM 1202 is adjusted with feedback received from the CCD 1204 to further improve the image. The feedback is sent to the first SLM 1202 using the feedback module 1205. Such adjustment corresponds to encoding it with Zernike polynomials of increasing orders. This procedure is repeated for object 2 (O2) and so forth to obtain a list of Zernike coefficients for a 3D matrix.
The blurriness in an image may be due to defocus and aberration. In order to generate a clear image, the second SLM 1203 near the CCD is tuned to eliminate defocus. Once the optimum focus is obtained, the remaining distortion is due only to aberration, which can be corrected as described in Figure 12. In order to perform multi-path aberration correction, multiple SLMs are used to tune the aberration correction for specific objects in the field of view.
Figure 13 is a flow diagram illustrating a method of enabling auto focusing of one or more objects in the region of interest using at least two spatial light modulators (SLM) for removing aberration, according to one embodiment of the present invention. At step 1301, at least one object in the region of interest is identified. A small spot-sized object, such as the beam from a He-Ne laser, is placed at different locations in the object plane based on a desired 2D grid pattern to define the 2D grid in the object plane, as shown at step 1302. The laser beam is placed at a grid point at step 1303. Then, the beam is focused using the second SLM 1203 at step 1304. The aberration correction for the first SLM 1202 is obtained at step 1305. For each location, the second SLM 1203 is adjusted so that the best focus is obtained. Further, the aberration correction is performed by sending feedback to the first SLM to obtain the list of Zernike coefficients. By repeating the said steps for all locations in the grid, a 2D aberration correction matrix is obtained. Likewise, by repeating the said procedure for different object planes, a 3D aberration correction matrix is obtained. Thus, each point in the 3D matrix has a list of corresponding Zernike coefficients. These coefficients are called aberration correction parameters. The aberration correction parameters can be stored in the camera memory of the image capturing device, according to one embodiment of the present invention, at step 1306.
Further, at step 1307, it is determined whether aberration correction has been performed for all the grid points corresponding to the object. If not, then at step 1308, the laser beam is shifted to a new grid point and steps 1304 to 1306 are repeated for the new grid point. Once the aberration corrections for all the points are done, it is determined at step 1309 whether the aberration correction for all the objects in the region of interest is done. If not, then steps 1301 to 1309 are repeated until the last object.
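The calibration loop of Figures 12 and 13 can be sketched as below. This is a hedged outline: `fake_optimize`, the grid geometry, and the coefficient values are hypothetical stand-ins for the per-point optimization (focus the second SLM, then adjust the first SLM's Zernike orders from CCD feedback).

```python
import itertools

def calibrate_aberration_matrix(planes, grid_xy, optimize_point):
    """Build the 3D aberration-correction matrix: for every grid point in
    every object plane, store the list of Zernike coefficients returned
    by the per-point optimization (steps 1303-1308)."""
    matrix = {}
    for z, (gx, gy) in itertools.product(planes, grid_xy):
        matrix[(z, gx, gy)] = optimize_point(z, gx, gy)
    return matrix

# Hypothetical per-point optimizer: in a real system this would focus the
# second SLM on the laser spot, then tune the first SLM order by order.
def fake_optimize(z, gx, gy):
    return [0.0, 0.01 * gx, 0.01 * gy, 0.001 * z]   # placeholder coefficients

planes = [0, 10]                                    # object planes (um)
grid = [(0, 0), (0, 1), (1, 0), (1, 1)]             # 2D grid points
matrix = calibrate_aberration_matrix(planes, grid, fake_optimize)
assert len(matrix) == len(planes) * len(grid)       # every point covered
assert matrix[(10, 1, 1)] == [0.0, 0.01, 0.01, 0.01]
```

The resulting dictionary plays the role of the 3D matrix of aberration correction parameters that is stored in the camera memory at step 1306.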
Figure 14 is a flow diagram illustrating a method of capturing an aberration corrected image in real time, according to one embodiment of the present invention. In order to perform aberration correction in real time using the aberration matrices stored previously in the image capturing device, the user needs to select the region of interest to be corrected. At step 1401, it is determined whether the user has selected a region of interest. If the region of interest is selected by the user, then the second SLM 1203 of the image capturing system is focused on the selected region at step 1402. The distance of each object in the region of interest from the image capturing system is calculated at step 1403. At step 1404, the aberration correction parameters are retrieved from memory. The retrieved aberration correction parameters are applied to the first SLM 1202 of the image capturing system at step 1405.
Whereas, if the region of interest is not selected by the user, then the field of view of the image capturing device is divided into pre-determined regions at step 1406. Subsequently, each of the regions is selected for aberration correction, as indicated at step 1407. Then, at step 1408, the second SLM 1203 of the image capturing system is focused on the selected region. The distance of each object in the region from the image capturing system is calculated at step 1409. At step 1410, the aberration correction parameters are retrieved from memory. The retrieved aberration correction parameters are applied to the first SLM 1202 of the image capturing system at step 1411. Further, at step 1412, it is determined whether the aberration correction is done for each region. If not, then steps 1407 to 1411 are repeated for each of the regions. Finally, the images corresponding to each of the regions are used to generate a final composite image at step 1413.
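The branching flow of Figure 14 can be sketched as below. This is only a control-flow illustration: the camera and SLM calls (`apply_first_slm`, `grab`, `composite`) are hypothetical stand-ins, and the region names and parameter values are placeholders.

```python
# Minimal sketch of Figure 14: if the user selected a region of interest,
# correct only that region; otherwise divide the field of view into
# pre-determined regions, correct each, and composite the results.

def capture_corrected(region, params_memory, apply_first_slm, grab):
    """Apply stored aberration parameters for one region and capture it."""
    apply_first_slm(params_memory[region])
    return grab(region)

def capture_frame(user_roi, all_regions, params_memory,
                  apply_first_slm, grab, composite):
    if user_roi is not None:                       # steps 1401-1405
        return capture_corrected(user_roi, params_memory,
                                 apply_first_slm, grab)
    images = [capture_corrected(r, params_memory, apply_first_slm, grab)
              for r in all_regions]                # steps 1406-1412
    return composite(images)                       # step 1413

# Toy stand-ins to exercise the control flow.
applied = []
params = {"R1": [0.1], "R2": [0.2]}
frame = capture_frame(None, ["R1", "R2"], params,
                      apply_first_slm=applied.append,
                      grab=lambda r: f"img({r})",
                      composite=lambda imgs: "+".join(imgs))
assert frame == "img(R1)+img(R2)"
assert applied == [[0.1], [0.2]]
```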
Figure 15 is a flow diagram illustrating a method of enabling auto focusing of one or more objects in the region of interest using one or more spatial light modulators (SLM), for spatially auto focusing a foveation spot, according to one embodiment of the present invention. At step 1501, one or more foveation spots are selected by communicating at least one of a shift in gaze information and a change in pointing device information to the image capturing system upon identifying the object in focus. The gaze information corresponds to where on the screen the user is looking (obtained, for example, from a device that detects the gaze of the user, or from a pointer in the hand of the user used to point at the video screen) and is delivered to the video camera that contains the SLM. This forms the region of interest in the video frame. Based on this information, the SLM phase profile is altered to bring the objects in the region of interest into focus by forming segmented lenses in the SLM. At step 1502, a control signal is applied to one or more segments of the SLM for spatially auto focusing the selected foveation spot.
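The mapping from a gaze point to an SLM segment (steps 1501-1502) can be sketched as below. This is a hedged illustration: the segment grid, frame resolution, and control-signal value are assumptions, and `set_segment_signal` is a hypothetical stand-in for the SLM driver.

```python
# Minimal sketch: map a gaze (or pointer) position on the screen to the
# SLM segment covering that foveation spot and drive only that segment.

def segment_for_gaze(gaze_xy, frame_wh, grid_wh):
    """Map a gaze point in frame pixels to (row, col) of the SLM grid."""
    x, y = gaze_xy
    w, h = frame_wh
    cols, rows = grid_wh
    col = min(int(x * cols / w), cols - 1)
    row = min(int(y * rows / h), rows - 1)
    return row, col

def focus_foveation_spot(gaze_xy, frame_wh, grid_wh, set_segment_signal):
    """Apply a focusing control signal to the segment under the gaze."""
    seg = segment_for_gaze(gaze_xy, frame_wh, grid_wh)
    set_segment_signal(seg, 1.0)          # placeholder control value
    return seg

calls = []
seg = focus_foveation_spot((1900, 1070), (1920, 1080), (4, 4),
                           lambda s, v: calls.append((s, v)))
assert seg == (3, 3)
assert calls == [((3, 3), 1.0)]
```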
Although the invention of the method and system has been described in connection with the embodiments of the present invention illustrated in the accompanying drawings, it is not limited thereto. It will be apparent to those skilled in the art that various substitutions, modifications and changes may be made thereto without departing from the scope and spirit of the invention.
CLAIMS:
1. A method of enabling spatially varying auto focusing of one or more objects using an image capturing system comprising:
focusing on one or more objects in a region of interest by one or more lenses in the image capturing system;
enabling spatially varying auto focusing of the one or more objects in the region of interest using one or more spatial light modulators (SLM), wherein at least one of the one or more objects is out of focus in the region of interest; and
capturing the focused and the auto focused objects in the region of interest by a camera sensor.
2. The method as claimed in claim 1, wherein an SLM of the one or more SLMs comprises one or more segments.
3. The method as claimed in claim 2, further comprising:
applying a control signal to at least one of the SLM and each segment of SLM; and
modifying the refractive index of each segment of the SLM and the phase of the light beam incident on each segment of the SLM using the applied control signal.
4. The method as claimed in claim 1, wherein the control signal is at least one of an applied voltage, a current and an optical pulse.
5. The method as claimed in claim 1, wherein the control signal provides quadratic variation in phase change across the SLM, where the phase change corresponds to change in focal length of the SLM.
6. The method as claimed in claim 1, wherein enabling auto focusing of the one or more objects in the region of interest using the spatial light modulators (SLM) for wafer inspection comprising:
obtaining a Z map of a die of a wafer to be inspected; and
calibrating focal length of the SLM for identifying corresponding control signal based on the Z map of the die for enabling auto focusing of the die of a wafer to be inspected on the camera sensor; and
dynamically varying the control signal corresponding to each segment of the SLM for providing different focal lengths using values from calibration and the Z map of the die below the imaging section in real time.
7. The method as claimed in claim 6, wherein obtaining the Z map of die of a wafer to be inspected comprising:
analyzing a cross section of the die to be inspected to obtain Z height of die; and
quantizing the Z height of the die based on the pre-defined acceptance circle of confusion.
8. The method as claimed in claim 6, wherein calibrating focal length of the SLM for identifying corresponding control signal based on the Z map of the die comprising:
locating the die having the Z height at predefined distances in front of a camera system for calibrating the focal length of the SLM;
optimizing the control signal corresponding to each segment of the SLM of the image capturing system to bring about a corresponding phase change in each segment of the SLM for capturing an image with a pre-defined clarity on the camera sensor; and
storing the optimized control signal and the corresponding phase-change of each segment of the SLM for the Z map of the die in a memory of the image capturing system.
9. The method as claimed in claim 1, wherein enabling auto focusing of one or more objects in the region of interest using at least two spatial light modulators (SLM) for removing aberration comprising:
positioning a first SLM and a second SLM between the lens and the camera sensor;
changing the focal length of the second SLM by providing a control signal to the SLM to enable auto focus; and
optimizing a phase profile of the first SLM based on feedback received from the camera sensor to further improve the image, which corresponds to encoding the image with Zernike polynomials of increasing orders; and
storing the optimized phase profile of the first SLM in the memory of the image capturing device.
10. The method as claimed in claim 9, wherein capturing an aberration corrected image at real time comprising:
determining whether the region of interest is selected by a user;
applying aberration correction to the first SLM corresponding to the region of interest, if the region of interest is selected by the user;
dividing the field of view of the image capturing system into one or more pre-determined regions of interest, if the region of interest is not selected by the user;
applying aberration correction to the first SLM corresponding to each of the region of interests;
capturing image of each of the region of interest; and
forming a composite image by combining the captured images of each of the regions of interest.
11. The method as claimed in claim 1, wherein enabling auto focusing of one or more objects in the region of interest using the one or more spatial light modulators (SLM), for one or more foveation spots comprising:
selecting the one or more foveation spots by communicating at least one of a shift in gaze information and change in pointing device information to the image capturing system upon identifying the object in focus; and
applying a control signal to one or more segments of SLM for auto focusing the selected foveation spot.
12. An image capturing system comprising:
one or more lenses for focusing one or more objects in a region of interest;
one or more spatial light modulators (SLM) enabling auto focusing of the one or more objects in the region of interest, which are out of focus in the region of interest by the one or more lenses; and
a camera sensor for capturing the focused and the auto focused one or more objects in the region of interest.
13. The image capturing system as claimed in claim 12, wherein the SLM comprises
a plurality of arrays of individually addressable segments which change the properties of incident light, wherein the segments are made of at least one of movable elements and stationary elements.
14. The image capturing system as claimed in claim 13, wherein the focal length of each of the addressable segments of the SLM is optimized by applying a dynamically varying control signal to each segment of the SLM.
15. The image capturing system as claimed in claim 12, further comprising:
a phase detection assembly for maximizing overlap of one or more partial images generated by the lens and aperture mask of the image processing system for enabling auto focusing.
16. The image capturing system as claimed in claim 15, wherein the phase detection assembly comprises:
a focus change feedback module for providing feedback from camera sensor to optimize the focus of the region of interest in order to encode the SLM with a phase setting of the SLM elements which converts the SLM into lens that has an optimum focal length to focus on the region of interest; and
an image processing module connected with the camera sensor and the focus change feedback module for processing a captured image.
17. The image capturing system as claimed in claim 12, further comprising:
a light polarizer for filtering light beams that are not polarized along the extraordinary axis of the SLM.
18. The image capturing system as claimed in claim 12, wherein the SLM is converted into a lens by varying the focal length of each of the segments.