Abstract: The invention relates to a device for positioning a multi-aperture optics with multiple optical channels relative to an image sensor, comprising a reference object, a positioning device and a calculating device. The reference object is arranged such that it is imaged by the multi-aperture optics in the optical channels onto one image region per channel. The positioning device can be controlled in order to change a relative position between the multi-aperture optics and the image sensor. The calculating device is designed to determine actual positions of the reference object in images of the reference object in at least three image regions and to control the positioning device on the basis of a comparison of the actual positions with target positions.
The present invention relates to a device and a method for the relative positioning of a multi-aperture optics with multiple optical channels. In particular, the invention relates to a technique for the active alignment of a multi-aperture objective with respect to a digital image sensor.
In the production of high-resolution miniature camera modules, the step of integrating the objective is performed by an active alignment process, i.e., the active alignment of the objective relative to the image sensor under observation and evaluation of the resulting image. The objective is moved relative to the image sensor, and the resulting image is evaluated according to predefined quality criteria of image sharpness (typically the image contrast or a measurement of the modulation transfer function [short: MTF] at various points in the image). The placement is optimized, for example, by maximizing the respective quality criterion, and the objective is fixed to the image sensor in this position (such as by gluing). A necessary prerequisite is a sufficiently measurable change of the quality criterion used (such as image contrast, MTF) over the positioning steps used in the process, as known, for example, from US 2013/0047396 A1 and JP 20070269879.
Conventional optimization algorithms for active alignment fail if the quality parameters of the objective vary only slightly relative to the positioning steps. The latter applies, for example, to objectives with a large depth of field (and in particular to multi-aperture objectives made of micro-lenses), where changing the z distance between objective and image sensor causes only small and, under real conditions, hardly measurable changes of image sharpness.
Due to the rotationally symmetric structure of the objectives of such conventional cameras about the optical (z) axis, industrial assembly machines usually have five degrees of freedom (and accordingly five axes) for the relative positioning of the objective to the image sensor (3 translations along the x, y, z axes + 2 tilts [tx, ty] about the x and y axes, as shown for example in Fig. 18). The established active assembly processes and machines are therefore not suitable for the alignment of objectives that have no rotational symmetry about the z axis. These include, for example, anamorphic objectives, objectives with direction-selective filter components, but also multi-aperture objectives consisting of micro-lenses.
Fig. 18 shows a schematic illustration of an assembly setup of a multi-aperture optical imaging system 12 on an image sensor chip 16 with a description of the necessary degrees of freedom: x, y, z (translation) and tx, ty, tz (rotation).
The two described constraints combine, in short, for multi-aperture imaging objectives such as the so-called electronic cluster eyes, as known from WO 2011/045324 A2. The multi-aperture arrangement consists of a one- or two-dimensionally extended array of optical channels, where each optical channel captures a defined part of the entire object field in the x-y plane.
The location of the center position of the aperture of each individual optical channel relative to the center of the associated partial image (as seen in the x-y plane) plays a particular role for the accuracy of the reconstruction and the resolution of the picture. The difference between the center position of the aperture and the center position of the associated partial image (pitch difference) along the translational degrees of freedom in x and y must be set with an accuracy of half a pixel pitch of the image sensor used.
This arrangement of multi-aperture optics has been developed specifically for the realization of miniaturized camera modules, in particular with an ultra-thin design (for use in thin devices such as smartphones, tablets, laptops, etc.).
Micro-lenses with very small focal lengths (such as f = 1.2 mm) and thus large depth of field are accordingly used therein. According to the formula dz = 4·λ·(F#)², a depth of field in image space of dz = 12.7 µm is obtained for a diffraction-limited image at a wavelength of λ = 550 nm and an F-number of F# = 2.4.
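For illustration, the depth-of-field value quoted above can be checked with a short computation. The helper name is ours; the formula dz = 4·λ·(F#)² is the one given in the text.

```python
def depth_of_field_image_space(wavelength_m, f_number):
    """Diffraction-limited depth of field in image space: dz = 4 * lambda * (F#)^2."""
    return 4.0 * wavelength_m * f_number ** 2

# Values from the text: lambda = 550 nm, F# = 2.4
dz = depth_of_field_image_space(550e-9, 2.4)
print(round(dz * 1e6, 1))  # depth of field in micrometers -> 12.7
```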
Fig. 19 schematically illustrates the requirements regarding the orientation of the multi-aperture optics 12 relative to an image plane of the image sensor 16. The multi-aperture optics 12 has several optical channels, which are arranged in a one-dimensional or two-dimensional array and have a center. Optical channels located outside the center are designed to receive an obliquely incident chief ray HS. It can be seen that, at oblique incidence of the chief ray of the central field point of an outer optical channel at the angle α, the intersection point with the focus position (e.g., the temporary location of the image sensor during assembly) experiences a lateral offset Δd due to a difference in z position (Δz) within the depth of field. With a pixel pitch of the image sensor of p_px = 2 µm as the maximum tolerable lateral offset, the value for Δz must, in accordance with the geometric relationship tan(α) = Δd / Δz, remain at most Δz = 4.3 µm at an angle of α = 25°. This value lies within the depth of field, so that existing active assembly procedures based on the evaluation of image contrast do not allow sufficient accuracy of the alignment of the objective to the image sensor when applied to multi-aperture imaging optics. Fig. 19 thus shows a schematic section through a multi-aperture imaging objective according to WO 2011/045324 A2. The chief rays of the central viewing angles of the optical channels are shown. The magnified detail shows the lateral offset Δd of the center of a partial image of an outer optical channel due to the different focus positions Δz within the depth of field, and the incidence angle α of the chief ray HS.
To illustrate, a numerical example is listed below.
The camera parameters are, for example, a focal length (f) of 1.2 mm, a pixel pitch (p_px) of 2 µm, and a field of view with an opening angle of 59° horizontal and 46° vertical. The maximum angle of incidence (α) on the image plane is 25°. Dimensions of the micro-lens array: 7.75 × 4.65 mm.
This leads to corresponding alignment tolerances as follows: A tolerable shift in the x-y plane is up to 2 pixels, i.e. Δx ≤ 4 µm and Δy ≤ 4 µm. A tolerable tilt about the x and y axes (wedge error) must not exceed half a pixel, i.e. Δtx ≤ 0.05° and Δty ≤ 0.05°. A tolerable rotation about the z axis is at most one pixel in the outermost channel, i.e. Δtz ≤ 0.03°. A tolerable shift along the z axis (spacing error) corresponds to an offset (Δd) of at most one pixel in the outer optical channels, i.e. Δz = Δd / tan(α) ≤ 4.3 µm.
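The tolerance figures of this example can be reproduced with a short sketch. The lever arm used for the z-rotation tolerance (half the array width) and the variable names are our assumptions for illustration, not values stated explicitly in the text.

```python
import math

p_px = 2e-6        # pixel pitch [m]
alpha_deg = 25.0   # maximum chief-ray incidence angle on the image plane
array_w = 7.75e-3  # micro-lens array width [m] (assumed lever-arm basis)

# Lateral tolerance: two pixels in x and y.
dx_max = 2 * p_px

# Spacing tolerance: tan(alpha) = delta_d / delta_z with delta_d = one pixel.
dz_max = p_px / math.tan(math.radians(alpha_deg))

# z-rotation tolerance: one pixel offset at the outermost channel,
# assuming the lever arm is half the array width.
dtz_max = math.degrees(math.atan(p_px / (array_w / 2)))

print(round(dx_max * 1e6, 1), round(dz_max * 1e6, 1), round(dtz_max, 2))
# -> 4.0 4.3 0.03
```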
Methods for the alignment of optics to an image sensor (engl.: active alignment) are known and attempt to set an adjustment of individual lenses or entire assemblies to an image sensor depending on the quality (usually the contrast) of the captured image.
Known devices for active camera lens alignment primarily address the assembly of rotationally symmetric optics onto an image sensor, a so-called 5D active alignment, in a production environment and in large quantities. Such devices and the assembly methods used cannot be adapted to the needs of the active assembly of multi-aperture objectives. For example, the accuracy of the axes used is too low. For example, it is described in [1] that an x, y, z translation with an accuracy of ±5 µm and a tx, ty, tz rotation with an accuracy of ±0.1° is adjustable, which is insufficient for a multi-aperture optics according to the preceding numerical example. Moreover, such assembly processes rely on an evaluation of the image contrast of the camera boards, a closed system environment and a lack of access to the control and read-out of the positioning system, which limits their precision. For example, the same test pattern is always provided with the device by its manufacturer, regardless of which client (optics manufacturer) uses the device.
A mounting system which uses a combination of passive and active orientation is known from US 2013/0047396. This system has the same limitations as described previously.
A process for the active assembly of camera optics of multiple camera modules using the evaluation of image contrast is known from JP 20070269879. This procedure, too, is difficult or impossible to adapt to the needs of multi-aperture objectives.
Alternative concepts describe an active lens holder. As an alternative to active alignment and fixation, imaging lenses can be mounted in holders that allow a subsequent variable positioning between the lens and the image sensor, as described, for example, in US 2011/0298968 A1. With additional feedback from the image sensor, such a unit or sensor enables an active function, such as autofocus or optical image stabilization. The necessary structures are, however, prohibitively expensive and limit the miniaturization of camera modules. In the field of miniaturized multi-aperture optics or extremely miniaturized multi-aperture cameras, the use of such micro-mechanical components is so far not known, for reasons of cost and in order to reduce the installation size.
Therefore, a concept that allows a production of multi-aperture camera devices which exhibit an increased image quality and lower production tolerances would be desirable.
Therefore, the object of the present invention is to create a device for positioning a multi-aperture optics that enables a high image quality of the manufactured camera module and low manufacturing tolerances.
This problem is solved by the subject-matter of independent claims.
The core idea of the present invention is the recognition that the above object can be achieved by basing the positioning of the multi-aperture optics relative to the image sensor on a reference object captured by the image sensor: when the multi-aperture optics is oriented relative to the image sensor, a reference object or a reference pattern of the reference object is imaged onto positions in the image of the image sensor, which can be determined with high precision. A comparison of these actual positions with target positions, such as global or local centers of the image sensor, enables an adjustment based on the position comparison.
In accordance with one embodiment, an apparatus for the relative positioning of a multi-aperture optics includes a reference object, a positioning device and a calculating device. The reference object is arranged so that it is imaged by the multi-aperture optics in the optical channels onto one image region per channel. The positioning device can be controlled to change a relative position between the multi-aperture optics and the image sensor. The calculating device is designed to determine actual positions of the reference object in images of the reference object in at least three image regions and to control the positioning device based on a comparison of the actual positions with target positions. The target positions can be, for example, center positions or other reference positions in the respective or in other image regions. Alternatively or additionally, the target positions can be positions that are stored for the comparison. Based on the comparison with regard to the three image regions, a high image quality, a small position deviation and consequently a high accuracy of the overall device can be achieved for several or even all image regions.
Another embodiment creates a device in which the calculating device is designed to control a fixation device which is designed to cure an adhesive arranged between the multi-aperture optics and the image sensor. This allows the relative position between the multi-aperture optics and the image sensor to be fixed.
Another embodiment creates a device in which at least one inner image region and four outer image regions, radially distributed around the inner image region, are arranged on the image sensor. The four outer image regions are arranged along a roll axis, such as an x axis, and a pitch axis, such as a y axis. The outer image regions are arranged in pairs on opposite sides parallel to the roll axis and parallel to the pitch axis, for example in a rectangle. The calculating device is designed to determine a pattern deviation of a pattern in the inner and the at least four outer image regions based on the comparison of actual positions with target positions. This allows a centering of the test image in the inner image region and a subsequent adjustment of the individual images in the outer image regions, so that position deviations with regard to the roll axis, the pitch axis and a yaw axis can be reduced, advantageously exploiting symmetries of the position deviations.
Another embodiment creates a device in which the calculating device is designed to focus an image of the reference object captured in the inner image region, i.e., once a magnification distance reaches a magnification target value, to determine a lateral position difference for the inner image region based on a pattern deviation along the roll axis and the pitch axis, and to control the positioning device such that the lateral differences with regard to the roll axis and the pitch axis reach a respective target value, so that the image in the inner image region becomes focused and centered. The calculating device is further designed to determine a measure of wedge-error differences of pattern distances for the four outer image regions and to control the positioning device such that the multi-aperture optics is tilted about the roll axis and the pitch axis so that the wedge-error differences reach a roll target value and/or a pitch target value. The calculating device is further designed to determine a rotation difference of the pattern deviation for the four outer image regions along a first local and a second local lateral direction of the respective outer image regions, and to control the positioning device so that it rotates the multi-aperture optics about the yaw axis until the rotation differences reach a rotation target value. The calculating device is further designed to determine a measure of a magnification difference of the pattern deviation for each of the outer image regions along a direction parallel to the roll axis and along a direction parallel to the pitch axis, and to control the positioning device so that it moves the multi-aperture optics along the yaw axis until the magnification differences reach a magnification target value.
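The staged control sequence of this embodiment can be sketched as a simple loop. The ToyStage class, the proportional gain and the direct read-out of the pose errors are illustrative stand-ins for the calculating and positioning devices, which in reality derive the errors from pattern positions in the image regions.

```python
class ToyStage:
    """Simulated relative pose errors of the optics vs. the image sensor (6 DOF)."""
    def __init__(self, x, y, z, tx, ty, tz):
        self.pose = {"x": x, "y": y, "z": z, "tx": tx, "ty": ty, "tz": tz}

    def move(self, axis, delta):
        self.pose[axis] += delta

def align(stage, tol=1e-4, gain=0.5):
    # Order from the embodiment: focus (z) and center (x, y) the inner
    # image region first, then wedge error (tx, ty), rotation about the
    # yaw axis (tz), and finally the magnification / z distance again.
    for axis in ("z", "x", "y", "tx", "ty", "tz", "z"):
        while abs(stage.pose[axis]) > tol:
            # Proportional correction toward the respective target value.
            stage.move(axis, -gain * stage.pose[axis])

stage = ToyStage(x=0.02, y=-0.01, z=0.05, tx=0.3, ty=-0.2, tz=0.1)
align(stage)
print(all(abs(v) <= 1e-4 for v in stage.pose.values()))  # -> True
```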
It is advantageous in this embodiment that, based on the focusing and centering of the image with regard to the inner image region, an orientation of the multi-aperture optics with regard to the image sensor in six degrees of freedom relative to the inner image region is enabled, so that a high positioning precision is achieved.
According to a further embodiment, the calculating device is designed to perform the focusing and centering of the image with regard to the inner image region before any orientation with regard to the outer image regions, so that each of the wedge errors, rotation errors and/or magnification errors of the outer image regions with regard to the inner image region can be reduced.
It is advantageous that with this embodiment the positioning precision is further increased.
Another embodiment creates a method for the relative positioning of a multi-aperture optics with multiple optical channels relative to an image sensor.
Further advantageous embodiments are the subject-matter of the dependent claims.
Preferred embodiments of the present invention are discussed below with reference to the enclosed drawings, in which:
Fig. 1 a schematic block diagram of an apparatus for the relative positioning of a multi aperture optical system with multiple optical channels relative to an image sensor in accordance with an embodiment;
Fig. 2 a schematic block diagram of an apparatus which is extended compared to the device of Fig. 1 in that a calculating device is designed to control a fixation device, in accordance with an embodiment;
Fig. 3a a schematic sectional side view of a multi-aperture optics which has a position error along the negative roll direction compared to the image sensor, in accordance with an embodiment;
Fig. 3b a schematic top view of the situation of Fig. 3a in accordance with an embodiment;
Fig. 4a a schematic sectional side view of a multi-aperture optics which has a wedge error about the pitch axis compared to the image sensor, in accordance with an embodiment;
Fig. 4b a schematic top view of the situation of Fig. 4a in accordance with an embodiment;
Fig. 5 a schematic top view of a multi-aperture optics which is twisted compared to the image sensor by an angle about the yaw axis or the z axis, in accordance with an embodiment;
Fig. 6a a schematic sectional side view of a multi-aperture optics which has too small a distance from the image sensor along the yaw axis, in accordance with an embodiment;
Fig. 6b a schematic top view of the situation of Fig. 6a in accordance with an embodiment;
Fig. 7a a schematic sectional side view of a situation in which the multi-aperture optics has too large a distance from the image sensor, in accordance with an embodiment;
Fig. 7b a schematic top view of the situation of Fig. 7a in accordance with an embodiment;
Fig. 8 a schematic flow chart of a method for resolving a misalignment of the multi-aperture optics compared to the image sensor by an x translation or a y translation, as described for Figs. 3a and 3b, in accordance with an embodiment;
Fig. 9 a schematic flow chart of a method that can be run by the calculating device to compensate for wedge errors, as described with regard to Figs. 4a and 4b, in accordance with an embodiment;
Fig. 10 a schematic flow chart of a method that can be used to compensate for a rotation about the yaw axis or the z axis of the inner image region, as described for Fig. 5, in accordance with an embodiment;
Fig. 11 a schematic flow chart of a method for orienting the multi-aperture optics by a translation along the z axis or along the yaw axis, as described for Figs. 6a, 6b, 7a and 7b, in accordance with an embodiment;
Fig. 12 a schematic flow chart of a method that can, for example, be run ahead of one of the methods of Figs. 8, 9, 10 or 11 to enable a robust procedure, in accordance with an embodiment;
Fig. 13 a schematic flow chart of a method with which a high positioning accuracy along the six degrees of freedom can be obtained in an advantageous way, in accordance with an embodiment;
Fig. 14 a schematic diagram illustrating the relationships between the global coordinate system and local coordinate systems, as an example for one image region, in accordance with an embodiment;
Fig. 15 a schematic diagram of the sampling of an object plane by a setup comprising a multi-aperture optics and an image sensor with a 2D arrangement of optical channels, in accordance with an embodiment;
Fig. 16 a schematic sectional side view of a setup comprising a multi-aperture optics and an image sensor to highlight the relationships of Fig. 15, in accordance with an embodiment;
Fig. 17a a schematic sectional side view of a multi-aperture optics which is adjusted to the image sensor, in accordance with an embodiment;
Fig. 17b a schematic top view of the situation of Fig. 17a in accordance with an embodiment;
Fig. 18 a schematic illustration of an assembly setup of a multi-aperture optical imaging system on an image sensor chip; and
Fig. 19 a schematic sectional side view illustrating the requirements regarding the orientation of a multi-aperture optics to an image plane of the image sensor in accordance with the state of the art.
Before embodiments of the present invention are explained in detail below on the basis of the drawings, it is noted that identical, functionally identical or equivalent elements, objects or structures are provided with the same reference numerals in the different figures, so that the description of these elements in different embodiments is interchangeable and mutually applicable.
In the following, reference is made to the alignment of a multi-aperture optics and an image sensor with multiple image regions relative to each other. The relative alignment can in principle be described in six degrees of freedom: a translation along three directions x, y and z, as well as a rotation about the x, y and z axes.
For simplified understanding, the subsequent explanations also refer to a roll axis, a pitch axis and a yaw axis which, in an ideal alignment of the multi-aperture optics with regard to the image sensor, are arranged in space parallel or coincident to the x, y and z axes of an inner image region. x, y and z coordinates refer to a respective local coordinate system within an image region of the image sensor, while roll, pitch and yaw coordinates or directions relate to a global coordinate system in which the image sensor and the multi-aperture optics are arranged.
The coordinate system of the inner image region of the image sensor and the (global) coordinate system, which is determined by the roll, pitch and yaw axes, exhibit a same origin and therefore a same pivot point (fulcrum) if, for example, the multi-aperture optics is twisted or moved about the global origin. The coordinate systems are described as Cartesian coordinate systems, but other coordinate systems are also possible. They can be mapped into each other by a coordinate transformation. The embodiments described below can also be carried out or implemented in other coordinate systems without limitation of their benefits.
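The mapping between the global roll/pitch plane and a local image-region frame mentioned above is an ordinary rigid transformation. The following sketch shows one way to express it; the frame parameters (region origin and in-plane angle) are illustrative assumptions, since the text only requires that such a transformation exists.

```python
import math

def global_to_local(point, region_origin, region_angle_deg=0.0):
    """Translate a point from the global (roll/pitch) plane into a local
    image-region coordinate system and rotate it into the region's
    in-plane orientation."""
    gx, gy = point
    ox, oy = region_origin
    a = math.radians(region_angle_deg)
    dx, dy = gx - ox, gy - oy
    return (math.cos(a) * dx + math.sin(a) * dy,
            -math.sin(a) * dx + math.cos(a) * dy)

# A region whose origin sits at (1.0, 2.0) in the global frame, unrotated:
print(global_to_local((3.0, 4.0), (1.0, 2.0)))  # -> (2.0, 2.0)
```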
Fig. 1 shows a schematic block diagram of a device 10 for the relative positioning of a multi-aperture optics 12 with multiple optical channels 14a-c relative to an image sensor 16. The device 10 comprises a reference object 18, which is arranged so that the reference object 18 is imaged through the multi-aperture optics 12 in the optical channels 14a-c onto one image region 22a-c per channel.
The device 10 includes a positioning device 24, which is controllable to change a relative position between the multi-aperture optics 12 and the image sensor 16. Advantageously, the positioning device is designed to move the multi-aperture optics 12 in space with six degrees of freedom with regard to the image sensor 16. It is also conceivable that the positioning device 24 is designed to move the image sensor 16 in space, or that it moves the multi-aperture optics 12 or the image sensor 16 along fewer than six degrees of freedom.
The device 10 also comprises a calculating device 26, which is designed to determine, in images of the reference object 18 in at least three image regions 22a-c, actual positions of the reference object 18 and to control the positioning device 24 based on a comparison of the actual positions with target positions. The target positions can be reference positions onto which the reference object 18 is imaged in a calibrated state, for example center positions of the image regions 22a-c (local) or of the image sensor 16 (global).
For example, the calculating device 26 is designed to receive and evaluate the images in the image regions 22a-c. The image sensor can be a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor or another digital image sensor.
The image regions 22a-c can each be arranged spaced apart on or in the image sensor 16. Alternatively, the image regions 22a-c can be part of a continuous matrix of pixels, which are distinguishable from each other, for example, by means of a mutually different addressing of the respective pixels. For example, each image region 22a-c is designed to capture a section of the reference object 18. In each section, for example, a test pattern or part thereof can be arranged, so that the respective test pattern of the respective section is imaged into the respective image region 22a-c, whereby the test pattern, where arranged, can be detected for one, several or all image regions 22a-c.
A defined orientation of two of the components multi-aperture optics 12, image sensor 16 and reference object 18, for example a defined orientation and positioning of the reference object 18 with regard to the image sensor 16 or the multi-aperture optics 12, provides for the analysis a target image of the reference object 18 in the image regions 22a-c, which is obtained when the multi-aperture optics 12 has a correct position or alignment with regard to the image sensor 16 or is within acceptable tolerances. The relative orientation between the multi-aperture optics 12 and the image sensor 16 can thus be based on a comparison of actual positions and (target) positions. This means that the calculating device is designed to control the positioning device based on a comparison of the actual positions of the individual image regions with their respective target positions.
Compared with focusing based on a contrast of the captured image, this allows high precision, because contrast-based focusing leads to inaccurate or even erroneous results due to the depth-of-field range of the multi-aperture optics 12. A distance between the reference object 18 and the image sensor 16 may, for example, be less than 2 m, less than 1 m or less than 50 cm. In principle, the distance between the reference object 18 and the image sensor 16 can be chosen depending on the application, the design of the multi-aperture optics 12 and/or a desired magnification or resolution of the image sensor 16.
Fig. 2 shows a schematic block diagram of a device 20, which is extended compared to the device 10 in that the calculating device 26 is designed to control a fixation device 28. The fixation device 28 is designed to cure an adhesive 32 which is arranged between the multi-aperture optics 12 and the image sensor 16. For example, when the multi-aperture optics 12 is positioned relative to the image sensor 16, it can be contacted with the image sensor 16 by means of the adhesive 32. The adhesive 32 can, for example, be an adhesive curable under ultraviolet (UV) light. The fixation device 28 can then be, for example, a UV light source which emits UV light based on the control by the calculating device 26 in order to cure the adhesive 32. Alternatively, the adhesive 32 can be a temperature-curable adhesive, whereby the fixation device 28 can be formed as a heat source. In principle, the fixation device 28 can also be designed to make another mechanical connection between the image sensor 16 and the multi-aperture optics 12, for example a clip connection, a screw connection, a riveting and/or a solder joint.
This is advantageous because an adjusted relative position between the multi-aperture optics 12 and the image sensor 16 may be fixed without a further intermediate step, so that additional positioning errors can be prevented. Alternatively, the fixation device 28 can also be part of the device 20.
The reference object 18 is arranged such that, in reference regions 33a-c, a pattern in the form of partial patterns or markings 35a-c is placed so that each partial pattern 35a-c is captured by one of the optical channels 14a-c and imaged as a marker onto a respective image region 22a-c. This allows an alignment of the image sensor 16 to the reference pattern of the reference object 18 for a subsequent adjustment of the multi-aperture optics, where the alignment can be done, for example, using optical laws and a divergence-free multi-aperture optics.
Using a test pattern as the reference object enables, for example, the evaluation of the image regions 22a-c by the calculating device 26 based on edge detection in the image. Algorithms for this purpose are accurate and can be used robustly. As markings on the reference object, for example, crosses, circles or H structures which follow a geometric arrangement may be suitable. In principle, other structures can also be arranged, but preferably structures which exhibit a large edge length compared with point structures. Although in the previous descriptions an arrangement of markers was described as an X configuration, it is also conceivable that the markers are arranged in a star constellation, a circle constellation or similar, whereby the markers may be projected onto fewer and/or other image regions of the image sensor. The previously described embodiments allow easy adjustment of the position requirements and evaluation of the position deviations, so that various test patterns are easy to use.
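As a toy illustration of how an actual position of such a marker could be determined, the following sketch thresholds a synthetic gray image and takes the intensity centroid. A real implementation would, as noted above, rather use sub-pixel edge detection; the function name, threshold and image are our illustrative assumptions.

```python
def marker_centroid(image, threshold=128):
    """image: list of rows of pixel values; returns the (x, y) centroid
    of all pixels brighter than the threshold."""
    sx = sy = n = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                sx += x
                sy += y
                n += 1
    if n == 0:
        raise ValueError("no marker found above threshold")
    return sx / n, sy / n

# Synthetic 5x5 region with a bright cross "+" centered at (2, 2):
img = [[0] * 5 for _ in range(5)]
for i in range(5):
    img[2][i] = 255  # horizontal bar of the cross
    img[i][2] = 255  # vertical bar of the cross
print(marker_centroid(img))  # -> (2.0, 2.0)
```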
The subsequent explanations relate to control commands delivered by the calculating device 26 to the positioning device 24 in order to control it such that the respective multi-aperture optics is moved in space with regard to the image sensor. These steps for compensating errors are described in an order that allows a precise alignment of the multi-aperture optics with regard to the image sensor in six degrees of freedom in an advantageous way. The calculating device 26 can alternatively be designed to run only one or more of the described error-compensation steps, or to run them in a different order.
Fig. 3a shows a schematic sectional side view of the multi-aperture optics 12, which has a position error along the negative roll direction compared to the image sensor 16. Fig. 3b shows a schematic top view of this situation. In Fig. 3a, the image sensor 16 is arranged on a printed circuit board 36 and contacted with it, so that the images captured in the image regions 22a-f can be obtained from the image sensor 16 by the calculating device via the circuit board 36.
A lateral position error along the negative roll direction leads to a change in relative position ΔR between the image sensor 16 and the micro-lens centers of the optical device, i.e., centers 37 of the optical channels 14a-f are, for example, linearly offset by the position difference ΔR along the negative roll direction of the multi-aperture optics 12.
The reference object has the structure of a test object. For example, markings in the form of one or more crosses "+" are arranged as the markings 35, so that the reference object captured by means of the optical channels 14a-f is imaged as a marker 38a-e in the respective image region 22a-e.
A coordinate origin of the coordinate system spanned by the roll axis, the pitch axis and the yaw axis can be arranged at an origin of the local x/y/z coordinate system of the inner image region 22e. A calculating device, for example the calculating device 26, is designed to focus the marker 38e with regard to the image region 22e. For this purpose, the calculating device can be designed to control a positioning device, for example the positioning device 24, such that it changes a distance of the multi-aperture optics 12 from the image sensor 16 along the z axis relative to the image region 22e, so that the marker 38e is focused in the image region 22e. This means that the calculating device is designed to determine a measure of a magnification distance of pattern distances of the actual position (position of the imaged marker 38e) for the inner image region 22e and to control the positioning device so that it moves the multi-aperture optics 12 along the z axis or the yaw axis until the magnification distance reaches a magnification target value. For example, the calculating device 26 can be designed to determine an extension of the pattern 38e along one or two axes x and/or y of the inner image region 22e and to compare it with a comparison value. If the recorded pattern of the marker 38e is larger or smaller, a gap between the multi-aperture optics 12 and the image sensor 16 can be increased or decreased.
The calculation device is further designed to subsequently determine, based on the pattern deviation, a measure of a lateral deviation of the actual position of the marker 38e for the inner image region 22e, for example with respect to the coordinate origin of the x and y axes. This means that the calculation device is designed to determine a measure of a lateral difference of the pattern deviation along the x axis and a measure of a lateral difference along the y axis. The calculation device is designed to control the positioning device so that the lateral differences each reach a respective setpoint.
Put simply, the positioning device moves the multi-aperture optics 12 or the image sensor 16 along the x axis or the y axis (in the global coordinate system along the roll axis and/or the pitch axis) until the lateral difference setpoint is reached. For example, one or both lateral difference setpoints can be reached when the marker 38e is projected onto the origin of the local coordinate system of the image region 22e. The tolerance range can be defined, for example, by a tolerable deviation, say a shift of one or two pixels, or by an achievable accuracy. The achievable accuracy can be based, for example, on the pitch of two pixels, so that a deviation of the projection of the marker 38e with respect to the coordinate origin of the image region 22e that is smaller than one pixel pitch may not be detectable with sufficient precision, and the respective lateral difference setpoint is then considered reached.
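The pixel-based tolerance check described above can be sketched as a small helper; the function and parameter names are illustrative assumptions.

```python
# Illustrative helper (names are assumptions): check whether the projection
# of marker 38e has reached the lateral difference setpoints, with the
# tolerance expressed in pixel pitches as described in the text.

def lateral_setpoints_reached(marker_xy, origin_xy=(0.0, 0.0), tol_pixels=1.0):
    """Return True when the lateral differences along x and y between the
    detected marker position and the local coordinate origin of image
    region 22e are both within the tolerance range."""
    dx = marker_xy[0] - origin_xy[0]
    dy = marker_xy[1] - origin_xy[1]
    return abs(dx) <= tol_pixels and abs(dy) <= tol_pixels
```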
Between the circuit board 36 and the multi-aperture optics 12, the adhesive 32 is arranged, so that an adjusted position of the multi-aperture optics 12 with respect to the image sensor 16 can be fixed.
In other words, Figs. 3A and 3B show a shift of the lens by an x translation. A position error due to a y translation would produce an equivalent result image in the corresponding sectional view.
All microimage centers (centers of the dotted circles) are shifted linearly in the x or y dimension relative to the centers of the respective image regions by a distance ΔR along the roll axis. Alignment can, where appropriate, be performed solely on the basis of the determined image coordinates of the test object structure (i.e., the marker 38e) in the central optical channel 14e, which has the coordinates x_{0,0}, y_{0,0}, where x_{i,j} and y_{i,j} specify a relative position in the respective image regions, as described e.g. for positions on the reference object for Fig. 15.
First, the image of the test object structure in the central optical channel is focused (translation along the z axis). Then the lens is moved along the x or y axis until the geometric center of the image of the central test structure lies at the center, i.e., at the origin of the global coordinate system O of the image matrix. For the corresponding image coordinates of the test object structure, the following equivalent conditions can then be met:
(x_{0,0}, y_{0,0}) = (0, 0), equivalent to r_{0,0} = 0,
where r_{i,j} describes, for example, the radial coordinate of the image field with the indices (i, j) in the global image coordinate system.
r_{imax,jmax}, r_{imax,-jmax}, r_{-imax,jmax}, and r_{-imax,-jmax} accordingly denote the radial coordinates in the outer image regions which have a maximum position in +i, -i, +j, and -j with respect to the regions in which the markers are imaged.
As the result of "zero" by the difference of the measured image coordinates in the Realfall if necessary, cannot be reached, rounding the result to a desired corresponding mounting precision size (magnification value distance or lateral setpoint difference) or a corresponding point value is either defined falls below the the difference due to the scheme, so that the differences within the tolerance range are. This also applies to the terms and conditions of the following fine alignment steps.
The alignment of the multi-aperture optics with respect to the image sensor described for Figs. 3A and 3B can be performed as a coarse alignment preceding one, several, or all of the following adjustment steps.
Fig. 4A shows a schematic side sectional view of the multi-aperture optics 12, which has a wedge error Δt_N about the pitch axis relative to the image sensor 16. That is, the multi-aperture optics 12 is tilted relative to the image sensor 16 about the pitch axis by the angle Δt_N. Fig. 4B shows a schematic top view of the situation of Fig. 4A. The test pattern of the reference object is centered and focused with respect to the central image region 22e, which means that the marker 38e is projected onto the image region 22e such that the distance setpoint and the lateral difference setpoints with respect to the x axis and the y axis are reached. The wedge error causes the markers 38a-d to exhibit deviations in the x or y direction.
The calculation device is designed to determine the displacements of the markers 38a-d relative to centers, say the geometric centers, of the image regions 22a-d. If, for example, the focus position of the multi-aperture optics 12 relative to the image sensor 16 is free of errors, the wedge error can be recognized by the calculation device from the fact that the distances of the markers 38a-d with respect to the midpoints of the image regions 22a-d are unequal in pairs. A pair can be compensated, for example, by a rotation of the multi-aperture optics 12 about the roll axis (the x axis, tx), in that the calculation device controls the positioning device so that the multi-aperture optics 12 is rotated about the roll axis until the distances of the markers 38a and 38c, or 38b and 38d, with respect to the respective centers of the image regions 22a-d are equal.
In addition, a wedge error caused by a rotation about the pitch axis (the y axis, ty) can be compensated by the calculation device controlling the positioning device so that it rotates the multi-aperture optics 12 about the pitch axis until the distances of the markers 38a and 38b, and 38c and 38d, with respect to the respective centers of the image regions 22a-d are equal. This means that the respective distances of the markers 38a-d with respect to the midpoints of the image regions 22a-d can provide a measure of a wedge error difference of the pattern deviations of the positions with respect to the respective outer image regions 22a-d, and that the calculation device is designed to determine this wedge error difference. The wedge error differences can be changed by means of a tilting of the multi-aperture optics 12 about the roll axis or the pitch axis, so that they reach a roll setpoint or a pitch setpoint which, as described earlier, can be arranged within a tolerance range around a null value. A coarse alignment, as described for Figs. 3A and 3B, can be run before the wedge error compensation.
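The pairwise comparison of marker distances just described can be sketched as follows. This is a hedged sketch; the pairing of the markers 38a-d follows the text above (38a/38c and 38b/38d for the roll axis, 38a/38b and 38c/38d for the pitch axis), and the function name is an assumption.

```python
# Sketch under assumptions: wedge-error differences from the radial
# distances r_a..r_d of the outer markers 38a-d to the centres of their
# image regions 22a-d. Both tuples are (0, 0) when the corresponding
# wedge error is compensated.

def wedge_error_differences(r_a, r_b, r_c, r_d):
    """Pairwise distance differences of the outer markers."""
    roll_diffs = (r_a - r_c, r_b - r_d)   # equalised by rotating about x (tx)
    pitch_diffs = (r_a - r_b, r_c - r_d)  # equalised by rotating about y (ty)
    return roll_diffs, pitch_diffs
```

A wedge-free setup yields two zero pairs; a tilt about one axis shows up only in the corresponding tuple.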
In other words, for an alignment of the multi-aperture optics 12 with respect to a rotation tx about the x axis or a rotation ty about the y axis, i.e., for a wedge error compensation, the image of the test object structure in the central optical channel is initially focused by a translation along the z axis. The image is then centered on the image origin O = (0, 0) by a translation along the x or y axis. Due to the wedge error, different radial distances of the measured positions of the images of the test object structures in the corner channels, i.e., the outer image regions 22a-d, to the respective image origin arise. This can be corrected at least in part by rotating the multi-aperture lens about the x or the y axis (roll axis or pitch axis) until the following conditions are met for the outer image regions 22a-d:
Rotation about the x axis (tx): r_{imax,jmax} - r_{imax,-jmax} = 0, equivalent to r_{imax,jmax} = r_{imax,-jmax}, and r_{-imax,jmax} - r_{-imax,-jmax} = 0, equivalent to r_{-imax,jmax} = r_{-imax,-jmax}.
Rotation about the y axis (ty): r_{imax,jmax} - r_{-imax,jmax} = 0, equivalent to r_{imax,jmax} = r_{-imax,jmax}, and r_{imax,-jmax} - r_{-imax,-jmax} = 0, equivalent to r_{imax,-jmax} = r_{-imax,-jmax}.
The wedge errors can be axisymmetric with respect to the roll axis (for a torsion about the roll axis) or with respect to the pitch axis (for a torsion about the pitch axis) for the four outer image regions.
Figs. 4A and 4B thus show the offset of the lens due to a twist about the y axis (ty error); the rotation about the x axis would produce an equivalent result image in the corresponding equivalent side view. The results of a tilt by a positive or negative angle of rotation can also be determined or compensated analogously to the preceding remarks.
Fig. 5 shows a schematic top view of the multi-aperture optics 12, which is twisted relative to the image sensor 16 by an angle δ about the yaw axis, i.e., the z axis of the central image region 22e. The calculation device is designed, for example, to determine the distance of the markers 38a-d from the centers of the respective outer image regions 22a-d. Due to the rotation by the angle δ, the markers 38a-d each have a distance from the respective center. Along the respective x direction, this distance is approximately the same for the image regions 22a and 22b, and likewise for the image regions 22c and 22d. In the y direction of the respective image regions, the distance is in each case approximately the same for the image regions 22a and 22c, and for 22b and 22d. A measure of the x distances for the image regions 22a and 22b, and 22c and 22d, and a measure of the distances along the y direction for the image regions 22a and 22c, and 22b and 22d, can be determined by the calculation device as a measure of a rotation difference of the pattern deviation for each of the outer image regions 22a-d.
The calculation device is designed to control the positioning device so that it rotates the multi-aperture optics 12 or the image sensor 16 about the yaw axis. The rotation difference δ can be reduced by means of a rotation about the yaw axis until it reaches a rotation setpoint, which is, for example, a null value within a tolerance range. The rotation error can here be rotationally symmetric with respect to the origin of the global coordinate system for the four outer image regions 22a-d.
In other words, aligning with respect to a torsion tz about the z axis of the central image region means that, to correct the z twist, the image of the test object structure in the central optical channel is first focused (translation along the z axis) and then centered on the image origin O = (0, 0) by a translation along the x or y axis. An equally large shift of the images of the test structures 38a-d in the local coordinate systems of the optical channels 14a-d positioned symmetrically around the central inner image region 22e results from the rotation about the z axis, i.e.:
r_{imax,jmax} = r_{-imax,-jmax} = r_{imax,-jmax} = r_{-imax,jmax}, with the radial local coordinates r_{i,j} = sqrt(x_{i,j}^2 + y_{i,j}^2) in the respective outer optical channel 14a-d with index (i, j), or in the associated image region 22a-d.
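For small angles, the tangential shift of an outer marker is approximately the product of the rotation angle δ and the radius of its image region from the global origin. A hedged sketch of estimating δ from such a shift (function names, the small-angle rule, and the tolerance are assumptions, not from the patent):

```python
# Sketch under assumptions: small-angle estimate of the twist δ about the
# yaw (z) axis from the tangential displacement of an outer marker whose
# image region centre lies at the given radius from the global origin.

def estimate_yaw_twist(tangential_shift, radius):
    """Small-angle estimate of the rotation angle δ in radians."""
    return tangential_shift / radius

def yaw_value_reached(tangential_shifts, radius, tol_rad=1e-3):
    """True when the rotation difference of every outer marker 38a-d is
    within the tolerance range around the null value."""
    return all(abs(estimate_yaw_twist(s, radius)) <= tol_rad
               for s in tangential_shifts)
```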
Fig. 6A shows a schematic side sectional view of the multi-aperture optics 12, which has too small a distance G, compared to a nominal value G_soll, relative to the image sensor 16 along the yaw axis. The distance G can relate to a distance between a surface of a false-light-suppressing structure 39 of the multi-aperture optics 12 facing the image sensor 16 and a surface of the image sensor 16 facing the structure 39. Alternatively, the distance G can also relate to a distance between another reference plane of the multi-aperture optics 12, say a lens plane facing the object field or the image sensor, and the surface of the image sensor 16 or another reference plane. In addition, the distance G can relate to a different reference plane with respect to the image sensor 16, say a surface of the image sensor 16 at which the circuit board is arranged. The nominal value G_soll can relate to the back focal distance of the multi-aperture optics 12, or to the distance G between the multi-aperture optics 12 and the image sensor 16 at which a desired or optimal sharpness of the image projected into the image plane is obtained. The setpoint G_soll can be regarded as a distance setpoint. Alternatively or additionally, the setpoint G_soll can relate to any other target value of the distance between the multi-aperture optics 12 and the image sensor 16. A deviation, as a difference between the distance setpoint G_soll and the actual distance G, can be described by a distance difference ΔG, e.g. represented by ΔG = G - G_soll or ΔG = G_soll - G. If the distance difference has a value not equal to 0, this can cause a definable magnification error, meaning that the object region is imaged too large or too small.
Fig. 6B shows a schematic top view of the multi-aperture optics 12 and the image sensor 16 for this situation. Compared with a distance G that is set correctly, so that e.g. the distance difference ΔG has a value of 0, the reference object having the markers 38a-e can appear enlarged in the image on the basis of the too small distance G and hence a distance difference ΔG with a value not equal to 0 (roughly less than 0). As a result, the markers imaged in the outer image regions 22a-d exhibit, along the global roll axis and pitch axis, a larger radial distance to the center of the central inner image region 22e. Referred to the respective local x/y coordinate systems, this means that the marker 38a is shifted within the image region 22a toward negative x and positive y values, the marker 38b toward positive x and positive y values, the marker 38c toward negative x and negative y values, and the marker 38d toward positive x and negative y values. A corresponding shift is roughly equal in magnitude along the respective x direction and along the y direction for the four outer image regions 22a-d, so that here too a symmetry with respect to the local and/or global coordinate origins is given.
With reference to Fig. 6A, the calculation device is designed to determine a measure of the distance difference ΔG by determining, for at least one, several, or each of the outer image regions 22a-d, the radial local coordinates at which the respective markers 38a-d are imaged. A deviation from the null value, meaning that the respective marker 38a-d is positioned outside the center (x = 0, y = 0) of the image region 22a-d, can be determined by the calculation device as a measure of the distance difference ΔG of the pattern deviation. The calculation device is designed to control the positioning device so that it moves the multi-aperture optics 12 along the yaw axis, so that the distance differences ΔG of the image regions 22a-d reach the distance setpoint G_soll, e.g. by varying or changing the distance until the markers 38a-d are imaged at the centers of the image regions 22a-d. The distance difference setpoint can be arranged, for example, in a tolerance range around the null value for the distance difference ΔG or in the tolerance range around the setpoint ΔG_soll. After a compensation of tilt errors, as described for Figs. 4A and 4B, the distance difference ΔG can be the same with respect to the outer image regions 22a-d.
Fig. 7A shows a schematic side sectional view of a situation in which the multi-aperture optics 12 has too large a distance G relative to the image sensor 16 compared to the value G_soll, i.e., the markers 38a-d are shifted in the direction of the inner image region 22e. Fig. 7B shows a schematic top view of the situation of Fig. 7A. The calculation device is designed to control the positioning device so that it moves the multi-aperture optics 12 or the image sensor 16 such that the distance G, and thus a measure of the distance difference ΔG, is reduced, so that the distance differences ΔG reach the (distance difference) setpoint.
In other words, the difference between the nominal back focal distance and the actual back focal distance should be reduced as far as possible. For this purpose, the determination of the magnification in the image region 22e can be used. If the nominal focal length, and thus the target value for the back focal distance of the optics, is not exactly achieved due to manufacturing tolerances, the magnification in the image region 22e can be measured after a coarse alignment and, with knowledge of the realized magnification (or a focal length derived from it), the test pattern for the fine alignment can be adjusted accordingly. An exact numeric value for the back focal distance can be neglected if necessary.
For this, with an alignment by a translation along the z axis (correction of the distance error), the image of the test object structure in the central optical channel is first coarsely focused (translation along the z axis) and then centered on the image origin O = (0, 0) by a translation along the x or y axis. In the case of too small a z distance of the multi-aperture lens from the image sensor, the images of the test structures in the corners of the array are shifted toward (in absolute value) larger global image coordinates. At too large a distance, the shift reverses, so that the images of the test structures are shifted toward (in absolute value) smaller global image coordinates. Therefore, the z distance is varied until the images of the test structures lie at the centers of the respective channels, i.e., until the following condition is met, taking the tolerance range into account:
r_{imax,jmax} = r_{imax,-jmax} = r_{-imax,jmax} = r_{-imax,-jmax} = 0
Fig. 8 shows a schematic flow chart of a method 800 for correcting an offset of the multi-aperture optics relative to the image sensor by an x translation or a y translation, as described for Figs. 3A and 3B. The method 800 comprises two method sections 810 and 850. By means of the method section 810, a position error along the x axis or along the roll axis can be compensated. By means of the method section 850, a position error can be compensated by a translation along the y direction or along the pitch direction; after the method section 810 or 850 has been run through, it is possible to change to the respective other method section or to terminate the method 800. The method 800 can alternatively be started with the method section 810 or with the method section 850, the following remarks describing, as an example, a start of the method 800 with the method section 810. This means that the method sections 810 and 850, and thus the position correction along the x direction and the y direction, are carried out sequentially, and thus the roll setpoint and the pitch setpoint are reached sequentially one after the other.
A step 812 of the method section 810 is a focusing of the central optical channel, or of a subregion of the reference object with respect to the central optical channel. A step 814, which follows the step 812, comprises, say by the calculation device, a determination of the position of the test structure, i.e., of the marker imaged in the inner image region, as P_{0,0} in the respective image. The determination is thus based on global coordinates of the central optical channel P_{0,0}, as described for Fig. 15.
In a step 816, following the step 814, the determined position along the roll axis, i.e. along the local x axis, is stored as a starting value x_{0,0} in a reference value memory of the calculation device.
In a step 818, the multi-aperture optics is moved relative to the image sensor along the x axis by a translation step. An increment of the translation step may be, for example, an increment of a motor or actuator of the positioning device, or a control variable for controlling the positioning device. In a step 822, following the step 818, the position of the test structure P_{0,0} in the inner image region is determined, as described for the step 814.
In a decision 824, which follows the step 822, the calculation device is designed to compare the determined position with the origin of the global coordinate system O, for example by forming a difference. If the difference has a value not equal to zero outside a tolerance range (decision "no"), the calculation device is designed to calculate, in a step 826, a remaining increment based on the starting value stored in the step 816 and to return to the step 818 in order to perform a further translation step along the x axis. If the difference in the decision 824 has a value of zero within the tolerance range (decision "yes"), the multi-aperture optics can be regarded as aligned with the image sensor along the x axis or along the roll axis, so that a final state 828 is reached, from which it is possible to change to the method section 850. That is, the step 818 is repeated, if necessary, until the roll setpoint is reached.
A step 852 of the method section 850 is a focusing of the received image, say of the marker 38e, with respect to the central optical channel, say the optical channel 14e. A step 854, following the step 852, determines the position of the test structure in the image. The determined position along the pitch axis, i.e. the local y axis, is stored in a step 856 as a starting value y_{0,0}.
A translation step along the y axis or the pitch axis is performed in a step 858, which follows the step 854, i.e., a relative position between the image sensor and the multi-aperture optics is changed along the y axis. A step 862, which follows the step 858, determines the position of the test structure in the inner image region again. As described for the decision 824, in a decision 864, which follows the step 862, it is compared whether the position y_{0,0} coincides with the center of the global coordinate system O. If this is not the case, i.e., the decision delivers the answer "no", a calculation of the remaining increment based on the position and the starting value stored in the step 856 is performed in a step 866. From the step 866, the method returns to the step 858 and a further translation step along the y axis is performed. This takes place until the decision 864 returns the result "yes", so that the multi-aperture optics can be regarded as aligned with the image sensor along the y axis, and in a final state 868 it is possible to change to the method section 810 or to the step 812. Alternatively, the method 800 can be stopped at the decision 824 or 864 if it is answered with "yes". This means that the calculation device is designed to control the positioning device based on a comparison of the actual position in an image region with a target position, say the origin of the image region.
In other words, Fig. 8 shows a summary of the fine alignment for centering. The process can equivalently be started in either the x or the y dimension.
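The per-axis loop of method 800 can be sketched compactly. The hardware is abstracted behind two assumed callables, and the single-shot "remaining increment" rule is an illustrative simplification of steps 826/866, not the patented control law.

```python
# Minimal sketch of one axis of method 800 (fine alignment for centring),
# with assumed callables: measure() returns the position of marker 38e
# along one axis in the global coordinate system; translate(step) moves
# the multi-aperture optics along that axis.

def centre_axis(measure, translate, tol=0.5, max_iter=50):
    """Steps 812-828 (or 852-868): repeat translation steps until the
    determined position of the test structure coincides with the origin O
    within the tolerance range."""
    start = measure()      # steps 816/856: starting value (reference memory)
    for _ in range(max_iter):
        pos = measure()    # steps 822/862: redetermine the position
        if abs(pos) <= tol:
            return True    # decisions 824/864 answered with "yes"
        translate(-pos)    # steps 826/866: remaining increment from `start`
    return False
```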
Fig. 9 shows a schematic flow chart of a method 900 that can be executed by the calculation device in order to compensate the wedge errors described for Figs. 4A and 4B. The method 900 includes a method section 910 and a method section 950. By means of the method section 910, the wedge error with respect to the x axis, i.e., about the roll axis, can be reduced or compensated. The wedge error with respect to the y axis, i.e., about the pitch axis, can be reduced or compensated by means of the method section 950. The method sections 910 and 950 can be run through independently of one another, with the possibility of changing from the method section 910 to the method section 950, or, once the method section 950 has been run through, to the method section 910. This means that the method 900 can be started with the method section 910 or with the method section 950.
As an example, the method 900 is described in the following as being started with the method section 910. A step 912 is a focusing of the central optical channel, say the optical channel 14e, with respect to the image region 22e. The step can run in the same way as the step 812. A step 914, which follows the step 912, is a centering of the central optical channel by means of a translation in the x/y plane. The step 914 can run in the same way as the step 814.
In a step 916, which follows the step 914, a position determination of the test structures from the corners in the image takes place; this means that, for example, the outer reference marks, for example the markers 38a-d, are determined with respect to their respective outer image regions and their positions therein. The determined positions are stored in a step 918 as starting values for the subsequent positioning. The starting values r_{imax,jmax}, r_{imax,-jmax}, r_{-imax,jmax}, and r_{-imax,-jmax} can each describe the position of the test structure in the outer image region with the maximum (or negative maximum) extent along the roll axis (i) or the pitch axis (j).
Based on the step 916, the positioning device is controlled in a step 922 so that the multi-aperture optics is rotated relative to the image sensor about the roll axis by an angle step. In a step 924, which follows the step 922, a position determination of the test structures from the corners in the image is performed, as carried out in the step 916. In a decision 926, which follows the position determination in the step 924, it is checked whether the radius difference r_{imax,jmax} - r_{imax,-jmax} has a value of 0 within the tolerance range and whether the difference r_{-imax,jmax} - r_{-imax,-jmax} has a value of 0 within the tolerance range, which means that it is determined whether the measure of the wedge error difference reaches a roll setpoint.
If the decision 926 is answered with "no", which means at least one of the roll setpoints is not reached, a calculation of the remaining step size takes place in a step 928, taking into account the starting values stored in the step 918. From the step 928, the method returns to the step 922 in order to perform a new rotation angle step about the roll axis. If, however, the decision 926 is answered with "yes", i.e., both setpoints are reached, the wedge error with respect to the rotation about the roll axis is regarded as compensated, and via a final state 932 the method section 950 is entered or the method is terminated.
In a step 952 of the method section 950, a focusing of the central optical channel is carried out as described for the step 912. In a step 954, following the step 952, a centering of the central optical channel is carried out as described for the step 914. In a step 956, which follows the step 954, a position determination of the outer test structures from the corners in the image takes place, as described for the step 916. Based on this, the starting values are stored in a step 958, as described for the step 918. In a step 962, which follows the step 956, the positioning device is controlled so that the multi-aperture optics is rotated (tilted) about the pitch axis. This means that this step also runs analogously to the method section 910, or to the step 922, with the difference that the rotation takes place about the pitch axis. In a step 964, which follows the step 962, a position determination can be performed, as carried out for example in the step 956, in order to determine a position change achieved by the step 962.
A decision 966 verifies whether the wedge error differences have reached the pitch setpoint. This can be done, for example, by forming the differences r_{imax,jmax} - r_{-imax,jmax} and r_{imax,-jmax} - r_{-imax,-jmax}. The differences can be checked as to whether they assume the value 0 within the tolerance range, which means that the values r_{imax,jmax}, r_{imax,-jmax}, r_{-imax,jmax}, and r_{-imax,-jmax} compared by the difference formation are equal in pairs. If the decision is answered with "no", a calculation of the remaining increment is performed in a step 968, taking into account the starting values of the step 958, and the method changes back to the step 962 in order to perform a further rotation of the multi-aperture optics about the pitch axis. If the wedge error difference reaches the pitch setpoint in the decision 966 (decision "yes"), the pitch wedge error can be regarded as compensated, and the method is terminated or the method section 910 is entered.
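The iterate-rotate-recheck loop of method 900 can be sketched for one tilt axis. This is a hedged sketch: the callables, the proportional step rule with a fixed gain, and the combined difference measure are illustrative assumptions, not the patented control law.

```python
# Sketch of one tilt axis of method 900, with assumed callables:
# corner_radii() returns (r_{imax,jmax}, r_{imax,-jmax}, r_{-imax,jmax},
# r_{-imax,-jmax}) of the outer markers; rotate(angle) tilts the
# multi-aperture optics about the roll or pitch axis by that step.

def compensate_wedge(corner_radii, rotate, gain=0.2, tol=0.01, max_iter=200):
    """Rotate until the paired radius differences reach the null value
    within the tolerance range (decisions 926/966)."""
    for _ in range(max_iter):
        r_pp, r_pm, r_mp, r_mm = corner_radii()
        diff = (r_pp - r_pm) + (r_mp - r_mm)  # combined wedge-error measure
        if abs(diff) <= tol:
            return True
        rotate(-gain * diff)                  # remaining step (steps 928/968)
    return False
```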
Fig. 10 shows a schematic flow chart of a method 1000 for compensating a twist about the yaw axis, i.e., the z axis of the inner image region 22e. The method 1000 can be used to compensate an error state as described for Fig. 5. In a step 1002, a focusing of the central optical channel takes place, as described for the steps 812, 852, 912, and 952. In a step 1004, a centering of the inner image region takes place, as described for the steps 814, 854, 914, and 954. A step 1006, following the step 1004, determines the positions of the test structures, i.e., of the markers 38a-d, from the corner points in the image. The positions are determined, for example, locally in the respective image region, say in the outer image regions 22a-d, and stored as starting values (x, y)_{imax,jmax}, (x, y)_{imax,-jmax}, (x, y)_{-imax,jmax}, and (x, y)_{-imax,-jmax} (step 1008).
In a step 1012, the positioning device is controlled so that the multi-aperture optics performs a rotation by one angle step with respect to the yaw axis, i.e., the z axis of the inner image region. The angle step may be, for example, an increment of a motor or an actuator that moves the multi-aperture optics, or a control variable of the positioning device.
In a step 1014, following the step 1012, a renewed position determination can be performed, as described for the step 1006. In a decision 1016, following the position determination in the step 1014, it is checked whether the rotation difference reaches a rotation setpoint, for example by forming the differences x_{-imax,jmax} - x_{imax,jmax} = 0, x_{imax,-jmax} - x_{-imax,-jmax} = 0, y_{-imax,jmax} - y_{-imax,-jmax} = 0, and/or y_{imax,jmax} - y_{imax,-jmax} = 0, with the value 0 again applying within the tolerance range. If at least one of the equations is not met, i.e., the decision 1016 provides the answer "no", the method changes to a step 1018, in which a calculation of the remaining increment takes place, taking into account the starting values stored in the step 1008. From the step 1018, the method changes back to the step 1012 and a new rotation of the multi-aperture optics is performed. If, however, all equations hold in the decision 1016, i.e., the decision gives the result "yes", the rotation error can be regarded as compensated and the method 1000 can change to a step 1022. In the step 1022, it is possible to proceed, for example, to the compensation of the magnification error by a translation of the multi-aperture optics along the z axis or the yaw axis.
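Decision 1016 can be sketched as a symmetry check on the corner coordinates. This is one plausible reading of the (heavily garbled) conditions; the data structure and names are assumptions.

```python
# Sketch of decision 1016 in method 1000 (assumed names): the rotation
# about the yaw axis is regarded as compensated when the x and y
# coordinates of the test structures in mirrored outer image regions
# agree within the tolerance range.

def yaw_rotation_compensated(corner_xy, tol=0.5):
    """corner_xy maps (i, j) in {(1, 1), (1, -1), (-1, 1), (-1, -1)} to the
    local (x, y) position of the marker in that outer image region."""
    xpp, ypp = corner_xy[(1, 1)]
    xpm, ypm = corner_xy[(1, -1)]
    xmp, ymp = corner_xy[(-1, 1)]
    xmm, ymm = corner_xy[(-1, -1)]
    checks = [xpp - xmp, xpm - xmm, ypp - ypm, ymp - ymm]
    return all(abs(c) <= tol for c in checks)
```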
Fig. 11 shows a schematic flow chart of a method 1100 for aligning the multi-aperture optics by a translation along the z axis or the yaw axis, as described for Figs. 6A, 6B, 7A, and 7B.
A step 1102 is a focusing of the central optical channel. In a step 1104, following the step 1102, a centering by means of a translation in x/y is performed, as described, for example, for the step 914.
In a step 1106, following the step 1104, a position determination of the test structures from the corner points in the image takes place, with the position determination using the respective local coordinate systems in the outer image regions 22a-d. The determined positions are stored in a step 1108 as starting values r'_{imax,jmax}, r'_{imax,-jmax}, r'_{-imax,jmax}, and r'_{-imax,-jmax}. A translation along the z axis or the yaw axis is performed in a step 1112, based on the step 1106, i.e., the positioning device is controlled so that the multi-aperture optics is moved along the yaw axis.
In a step 1114, following the step 1112, a renewed local position determination is carried out, as described for the step 1106. In a decision 1116, it is checked whether the positions determined in the step 1114 coincide with the respective local coordinate origins, in the form of the equation r'_{imax,jmax} = r'_{imax,-jmax} = r'_{-imax,jmax} = r'_{-imax,-jmax} = 0. This means that it is checked whether a distance difference reaches a distance difference setpoint. A measure of the distance difference can be obtained here, for example, by means of the difference (distance) between a detected location at which the respective test pattern is projected and the local coordinate origin. If the decision 1116 returns the result "no", a calculation of the remaining step size is performed in a step 1118, taking into account the starting values stored in the step 1108. From the step 1118, the method changes back, for example, to the step 1112 in order to execute a renewed position change of the multi-aperture optics with respect to the image sensor. If the decision 1116 gives the result "yes", the magnification error, i.e., the deviation ΔG along the yaw axis, can be regarded as compensated and the method 1100 is terminated. For example, the method 1100 can initiate a fixation of the lens by a final step 1122.
In summary, Fig. 11 can be described as an overview of the fine adjustment of the translation along the z axis.
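The iteration of steps 1112-1118 can be sketched as a small control loop. The following is a minimal sketch under assumptions not in the original disclosure: the corner-marker offsets are modeled as proportional to the remaining yaw-axis error, and `corner_offsets`, the damping factor and the tolerance are hypothetical placeholders for the real detection and step-size logic.

```python
# Toy sketch of the z-translation loop of process 1100 (steps 1112-1118).
# Hypothetical model: each corner-marker offset r'_ij from its local
# coordinate origin is proportional to the remaining z (yaw-axis) error.

def corner_offsets(z_error, start_offsets):
    # Simulated position determination (step 1114): offsets shrink
    # linearly as the remaining z error shrinks.
    return [(z_error * ox, z_error * oy) for ox, oy in start_offsets]

def align_z(start_offsets, z_error, tol=1e-3, max_iter=50):
    # Steps 1112-1118: translate along the yaw axis until all corner
    # offsets reach the target value 0 (within a tolerance).
    z = 0.0
    for _ in range(max_iter):
        offs = corner_offsets(z_error - z, start_offsets)       # step 1114
        if max((ox**2 + oy**2) ** 0.5 for ox, oy in offs) <= tol:
            return z                                            # decision 1116: "yes"
        # step 1118: estimate the remaining step from the measured offsets
        # and the starting values stored in step 1108 (assumed nonzero)
        est = sum((ox * sx + oy * sy) / (sx * sx + sy * sy)
                  for (ox, oy), (sx, sy) in zip(offs, start_offsets)) / len(offs)
        z += 0.5 * est                                          # back to step 1112
    raise RuntimeError("z alignment did not converge")
```

With four starting offsets of equal magnitude and a true error of 0.2, the loop converges to the error value within the tolerance after a few damped iterations.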
Fig. 12 shows a schematic flow chart of a process 1200 which can be run, for example, prior to one of the processes 800, 900, 1000 or 1100 in order to allow a robust execution of the respective process. In a step 1202, a coarse alignment of the multi-aperture lens, i.e. the multi-aperture optics, with respect to the image sensor is carried out. This may include, for example, an alignment of the image sensor to the test pattern, so that the test markers 38 are projected onto the corresponding image regions 22 of the image sensor. In addition, the multi-aperture optics can be arranged so that the markers are still projected onto the image regions. This may be supplemented, for example, by a step 1204 following step 1202, in which an orientation of the multi-aperture optics with respect to the image sensor by an alignment in the x/y plane or the roll/pitch plane leads to the markers being imaged in the image regions. In a step 1206, the central optical channel is focused.
In a step 1208 following step 1206, a determination of the magnification in the central optical channel or the inner image region is carried out. This can be done, for example, through a measurement of the image size of a test object, i.e. of the reference object. Since the optical properties of the multi-aperture optics as well as the distances between the reference object and the image regions are known, this can be done on the basis of optical laws. A decision 1212 following step 1208 verifies whether the determined magnification corresponds with the chosen design of the test pattern. If the decision 1212 is answered with "yes", the process passes to a step 1214, in which a fine alignment of the multi-aperture optics with respect to the image sensor takes place, for example by one or more of the processes 800, 900, 1000 or 1100.
If the decision 1212 produces the result "no", an adaptation of the test pattern is carried out in a step 1216, and the process then changes to step 1214. It can thus be determined whether the test pattern is suitable for the respective image sensor and/or the multi-aperture optics. An adaptation of the test pattern may include, for example, a change of one or more positions or shapes of the pattern, so that the test pattern can be projected into the image regions.
In other words, the active alignment process of the multi-aperture optics to the image sensor is based on the evaluation of the relative and absolute positions of the images of the object structures produced by the individual optical channels in the image matrix.
For the practical implementation, the optics module is first roughly aligned to the image sensor and a focused image is set in the central optical channel. In the next step, the magnification m in the central optical channel is determined through measuring the image size B of a test object (object size in the image: B = number of pixels along the edge of the imaged object × pixel pitch) according to the well-known formula m = B/G.
Here, G is the object size, i.e. the known extent of the test object in the object plane. This is determined according to the object distance s with the parameters of the multi-aperture lens (for example, the size of the field of view of an optical channel) known from the optical design. From the magnification, the focal length f of the central optical channel actually resulting in the manufacturing process is calculated as:
f = m·s / (1 − m).
In this form of the equation, the object distance s must be used with a negative sign.
The real focal length f of the central channel can, however, also be determined beforehand by other methods (such as an autocollimation procedure, optical probing or non-contact profile measurements, etc.) or be already known. If the real focal length deviates from the focal length targeted by the optical design, a scaling of the geometric distribution of the central viewing angles within the object plane occurs upon focusing of the multi-aperture lens. Thus, the target placement of the object structures for the active alignment must be adapted in this case (see Fig. 7). The determination of the new points of intersection of the central viewing angles of the optical channels with the object plane can be done by changing the focal length to the real value in the optical design (such as in raytracing simulation software).
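The magnification and focal-length determination described above can be written out numerically. The thin-lens form m = f/(f + s), i.e. f = m·s/(1 − m) with the object distance s taken negative, is an assumed reconstruction of the formula referenced in the text, and all numeric values below are illustrative only.

```python
# Magnification and real focal length of the central channel, following
# the relations above. The thin-lens relation m = f / (f + s), hence
# f = m*s / (1 - m) with s negative, is an assumed reconstruction.

def magnification(n_pixels, pixel_pitch, object_size):
    # m = B / G, with B = number of pixels along the object edge * pixel
    # pitch, and G = known extent of the test object in the object plane.
    # Returns the magnitude of the magnification.
    return (n_pixels * pixel_pitch) / object_size

def focal_length(m, s):
    # Signed magnification m (negative for a real, inverted image) and
    # object distance s with a negative sign, as required by this form
    # of the equation.
    return m * s / (1.0 - m)
```

For example, a channel with true focal length 3 mm and an object at s = −300 mm yields m = 3/(3 − 300) ≈ −0.0101; inserting this m into `focal_length` recovers f = 3 mm.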
In other words, Fig. 12 shows a summary of the course of the preparation of the fine alignment process. By means of the process 1200, the reference object is arranged so that the reference object is imaged by the multi-aperture optics in the optical channels onto one image region per channel.
Fig. 13 shows a schematic flow chart of a process 1300 by which positioning inaccuracies along the six degrees of freedom can advantageously be reduced or compensated. In a first step, the process 1200 for the coarse alignment of the multi-aperture optics with respect to the image sensor is run. Following the process 1200, the process 800 is carried out, i.e. a centering by means of a translation in the x/y plane. Following the process 800, a wedge error compensation about the roll axis is done by executing the process section 910. The process section 950 to compensate for the wedge error about the pitch axis runs following the process section 910. The process sections 910 and 950 can be performed in a different order and form together the process 900. Following the process 900, the process 1000 to compensate the z rotation (or yaw torsion) is run. Following the process 1000, the process 1100 to correct the spacing error is carried out. Following the process 1100, a fixation 1302 of the lens can take place. In other words, the multi-aperture lens can be fixed in the aligned position following the entire fine-tuning process, for example through a bonding of the housing and the circuit board.
Alternatively, the process 1300 can run the individual partial processes in a modified order. Alternatively or additionally, only one or more of the processes 800, 900, 1000, 1100 or 1200 can be carried out.
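The overall order of the process 1300 can be expressed as a small sequencing sketch. The numeric step identifiers come from the description above; the callables in `hooks` are hypothetical stand-ins for the actual alignment routines of the station, not part of the original disclosure.

```python
# Sequence of the partial processes of process 1300 as described above:
# 1200 coarse alignment, 800 x/y centering, 910/950 wedge error about the
# roll/pitch axis, 1000 yaw torsion, 1100 spacing error, 1302 fixation.

def run_fine_alignment(hooks, swap_wedge_steps=False):
    order = [1200, 800, 910, 950, 1000, 1100, 1302]
    if swap_wedge_steps:  # sections 910 and 950 may run in either order
        i, j = order.index(910), order.index(950)
        order[i], order[j] = order[j], order[i]
    for step in order:
        hooks[step]()     # hypothetical callback per partial process
    return order
```

A driver would register one callback per partial process; recording the calls confirms the order stated in the text.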
In other words, at the beginning of the installation process, the image sensor, already contacted on a printed circuit board and readable (see the example illustration in Fig. 3), and the previously assembled multi-aperture lens, where appropriate integrated in a light-tight housing, are present separately. The image sensor is positioned for the process of active alignment so that the connecting line between the center of the image field (= geometric center of the pixel matrix) and the center of the object plane (= test pattern plane) is perpendicular to the image plane and thus corresponds to the normal of the image sensor. This is advantageously fulfilled, at least in good approximation, by the mounting of the image sensor or its printed circuit board in the device. For the active alignment procedure, the following prerequisites can be placed on the mounting device. The mounting device advantageously includes a device for holding the image sensor on its board, oriented to the test pattern, including a readout interface; a device for holding the multi-aperture lens (e.g. a gripper: mechanical, pneumatic, via vacuum, etc.); a device that can change the relative position of the lens to the image sensor in six degrees of freedom (translation in x, y and z directions and rotation about the x, y and z axes), with a joint center of rotation (pivot point) for the three rotational degrees of freedom adjusted near the center of the multi-aperture lens; a test pattern or a screen for a pattern projection at an appropriate distance (= object distance) from the multi-aperture lens, sufficiently homogeneously illuminated; an image readout and image processing device with an interface to the controller/motors for changing the relative position of the lens to the image sensor (e.g. a PC with evaluation and control software); and an algorithm for image segmentation, object recognition and position determination of the structures imaged from the test pattern through the multi-aperture optics onto the image sensor.
Fig. 14 shows a schematic diagram to illustrate the relations between the global coordinate system Σ and the local coordinate systems Σ' using the example of the image region 22a. As is described, for example, for Figs. 3a and 3b, the origin of the global coordinate system Σ is the point of intersection of the roll axis, the pitch axis and the yaw axis, where the common intersection point can also be a common rotation point (pivot point) of the movement in the six degrees of freedom which the positioning device initiates with respect to the multi-aperture optics. Opposite the image region 22e, the optical channel 14e of the multi-aperture optics is arranged, wherein the optical channel 14e has the optical center 37e.
The image regions 22a-c each have a local coordinate system Σ' with an x axis and a y axis, whose common intersection point is arranged in the geometric center of the respective image region 22a-c. The local coordinate systems Σ' can be, for example, Cartesian coordinate systems in which the axes x and y intersect at a right angle in the center. A position of the marker 38, which is projected into the image region 22a, can be specified both with local coordinates x'ij, y'ij and with the global coordinates x, y. The indices i, j can be, for example, indices which specify a numbering of the image regions 22a-d along the roll axis or the pitch axis.
In other words, Fig. 14 shows a sketch describing the coordinates in the image plane of the multi-aperture camera module in a top view. The global coordinate system Σ in the image plane has its origin in the geometric center of the image field, while the local coordinate systems Σ' have their origins in the geometric centers of the image fields of the individual optical channels. Depicted is a case in which the image circles of four adjacent optical channels (dotted circles with center mark) are not optimally aligned to the image fields assigned to the channels (squares) on the image sensor. The cross shown in the top left optical channel represents the image of an object structure, positioned at a predetermined point in the object plane, as it is generated by the associated optical channel.
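The relation between a marker's local coordinates (x'ij, y'ij) in image region (i, j) and the global coordinates (x, y) is a pure translation by the center of that image region. A minimal sketch follows; the uniform center-to-center spacing `region_pitch` of the image regions is an assumed parameter, not taken from the original description.

```python
# Conversion between a marker position in local coordinates (x'_ij, y'_ij)
# of image region (i, j) and global coordinates (x, y) in the image plane.

def local_to_global(x_loc, y_loc, i, j, region_pitch):
    # The local origin sits at the geometric center of image region (i, j);
    # the global origin is the geometric center of the whole image field.
    # region_pitch: assumed uniform center-to-center spacing of regions.
    return (x_loc + i * region_pitch, y_loc + j * region_pitch)

def global_to_local(x, y, i, j, region_pitch):
    # Inverse translation back into the local coordinate system of (i, j).
    return (x - i * region_pitch, y - j * region_pitch)
```

Round-tripping a point through both functions returns the original local coordinates, which is the property the position determination in the outer image regions relies on.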
Fig. 15 shows a schematic diagram of a setup comprising the multi-aperture optics 12 and the image sensor 16 with a 2D arrangement of optical channels, whose object plane 44 is sampled extensively by the multi-aperture lens. The points Pij mark the intercepts of the respective central viewing direction of each optical channel (i, j), in the error-free case, with the object plane.
The object plane is shown, for example, such that it is sampled in i direction with seven optical channels and in j direction with five optical channels, that is, imax = 3, −imax = −3, jmax = 2, −jmax = −2. At the places P3,2, P3,−2, P−3,2 and P−3,−2 the markers 38 can be arranged. A marker 38 can also be arranged at the place P0,0. Alternatively, the markers can be arranged at other locations in the object plane 44 or on the reference object, wherein a maximum distance between the markers is beneficial, as described.
In other words, a two-dimensional arrangement of a multi-aperture lens consists of an array of optical channels with (2·imax + 1) channels in the x and (2·jmax + 1) channels in the y dimension. As can be seen in Fig. 15 and the subsequent Fig. 16, each optical channel of the multi-aperture lens has a different viewing angle in object space (as is also known from WO 2011/045324 A2), i.e. the various optical channels image different areas of the object plane. That is, the points of intersection of the axes of the central viewing directions of the individual optical channels (= respective optical axis) with the object plane result in a predetermined distribution (known from the optical design; see the following Fig. 16). For example, a grid with equidistant spacing results in the case of a desired distortion-free imaging.
At several (such as three or five) selected positions of these points of intersection with the object plane (for example at the points P0,0, P−imax,jmax, P−imax,−jmax, Pimax,−jmax, Pimax,jmax), special object structures (such as crosses, circles, squares, etc.) are placed in the test pattern plane. The selection of the centers of the object structures advantageously includes the center of the object plane (e.g. P0,0), at least a few advantageously mirror-symmetrically positioned points or areas with respect to the roll axis (such as P−imax,jmax with P−imax,−jmax or Pimax,−jmax with Pimax,jmax), or at least a few advantageously mirror-symmetrically positioned points or areas with respect to the pitch axis (for example P−imax,jmax with Pimax,jmax or P−imax,−jmax with Pimax,−jmax).
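For the distortion-free case mentioned above, the grid of intersection points Pij can be computed from the object distance and the viewing-angle increment between adjacent channels. The equidistant-angle assumption and both numeric parameters below are illustrative, not values from the original disclosure.

```python
import math

def intersection_grid(i_max, j_max, object_distance, angle_step_deg):
    # Points P_ij where the central viewing direction of channel (i, j)
    # meets the object plane, for a (2*i_max+1) x (2*j_max+1) channel
    # array. Assumes the viewing angles grow in equal steps per channel
    # (distortion-free design), giving a near-equidistant grid via tan().
    step = math.radians(angle_step_deg)
    return {
        (i, j): (object_distance * math.tan(i * step),
                 object_distance * math.tan(j * step))
        for i in range(-i_max, i_max + 1)
        for j in range(-j_max, j_max + 1)
    }
```

For the 7 × 5 example of Fig. 15 (imax = 3, jmax = 2), the grid contains 35 points, with P0,0 at the origin and the marker positions at the four corners mirror-symmetric about both axes.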
The accuracy of the following individual steps of the active alignment increases in direct proportion to the distance of the two selected points in the object plane.
The maximum possible accuracy is thus achieved with a greatest possible distance between the associated alignment points. An angular deviation between the positions of the Pij can be specified with an angle, for example with the angle θ0,1 for the deviation between the positions P0,0 and P0,1. Alternatively, an angle θ0,2 indicates, by way of example, a deviation or a viewing angle difference of the optical channels between the positions P0,0 and P0,2.
Fig. 16 shows a schematic side-sectional view comprising the multi-aperture optics 12 and the image sensor 16 to clarify the interrelations of Fig. 15. The angles ξj, with j = −2, ..., 2, each describe the angle between the central viewing direction of a channel and a normal 46 which is perpendicular to the object plane covered in object space.
The angles αij describe angles relative to the surface normal of the image sensor 16 at the image regions 22a-e.
In other words, Fig. 16 shows a simplified sectional view of a multi-aperture imaging system. In this embodiment, the multi-aperture lens (stack structure with microlenses on the front and rear side; grey) is connected perpendicularly with a plate for preventing optical crosstalk (black; chip side) and, in the process of active alignment, is fixed to the digital image sensor (brown) which is located on the printed circuit board (green, below) integrated in a housing. The central viewing direction of each optical channel (i, j) in object space is identified by the angle θij. The central viewing direction of each optical channel is determined by the optical design and, via the optical properties of the respectively associated lens (focal length, refractive index of the material, etc.), causes the angle of incidence αij in the center of the respective micro image.
The preceding description of the process flow of the active alignment has been presented, without restriction of generality, on the basis of Fig. 15 for an example of a multi-aperture lens with 7 × 5 optical channels and cross-shaped object structures at the points of intersection of the central viewing directions of the optical channels in the four corners of the array (P−3,2, P−3,−2, P3,−2, P3,2) as well as the central optical channel (P0,0). Figs. 17a and 17b show the target position after successfully performed active alignment of the multi-aperture lens with respect to the image sensor.
Fig. 17a shows a schematic side-sectional view of a multi-aperture optics 12 which is aligned, i.e. adjusted, to the image sensor 16. Fig. 17b shows a schematic top view of the situation of Fig. 17a. The markers 38a-e are aligned with respect to the respective image regions 22a-e with regard to the six degrees of freedom. A deviation of the places at which the markers 38a-d are projected onto the image regions 22a-e from the respective local coordinate centers is minimal. In other words, Figs. 17a and 17b show the target position upon successful active alignment. The grid of the image circles (dotted circles) is aligned to the grid of image fields of the image sensor (squares), i.e. for each optical channel the center of the associated image circle is located in the geometric center of the corresponding micro image field. The images of the selected object structures lie symmetrically in the geometric centers of the corresponding micro image fields. Left: side view; right: top view.
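The success criterion of Figs. 17a/17b, namely that each imaged structure lies at the origin of its local coordinate system, can be phrased as a simple residual check. The tolerance value and the (x', y') input format below are assumptions for illustration.

```python
# Target-position check corresponding to Figs. 17a/17b: every detected
# marker must sit at (or sufficiently near) the geometric center of its
# micro image field for the alignment to count as successful.

def is_aligned(marker_positions, tol):
    # marker_positions: local coordinates (x', y') of the detected
    # markers, one pair per evaluated image region; tol: assumed
    # acceptance radius around the local coordinate origin.
    return all((x * x + y * y) ** 0.5 <= tol for x, y in marker_positions)
```

Such a predicate would be evaluated after the fine-tuning processes and before the fixation step, gating the hardening of the adhesive.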
The previously described embodiments provide increased precision over the application of established procedures and machinery for aligning imaging optics, in particular for multi-aperture optics for small end devices. Embodiments enable the automation of the respective fine-tuning process to achieve rapid cycle times in the production process. In addition, an increased yield for the camera modules built in this way, and thus lower test and scrap costs, can be achieved, because a rapid alignment with high quality is possible.
This means that embodiments can be designed specifically for the active alignment of the architecture of multi-aperture lenses with a segmented field of view and, as a result, allow the preceding advantages. Due to their ultra-flat design and low-cost manufacturing and assembly technology, multi-aperture imaging systems are potentially predestined for use in products of consumer electronics (such as laptops, game consoles, or toys), and in particular for use in portable devices such as mobile phones, tablets, PDAs (PDA = personal digital assistant) and the like. Other application areas are, for example, sensor technology, such as camera-like sensors or imaging sensors in production engineering. Use in automotive technology is also possible, for optical safety sensors in automotive interiors or driving assistance systems, such as reversing cameras or lane detection. Embodiments can as well be used in the area of security and surveillance, for inconspicuous environment cameras with a large field of view in or on buildings, museums, or objects. In addition, embodiments can be used in robotics, for example as optical sensors for navigation or for optical control of grippers and/or component pick-up devices. Another field of application of the previously described embodiments can be found in the field of medical technology, such as use in imaging diagnostic techniques, for example endoscopy. An application of the previously described embodiments is, however, not limited to these applications.
Although previously described embodiments relate to multi-aperture optics and/or image sensors that have a low number of optical channels of about 5 × 7, embodiments can also be applied to other multi-aperture optics and/or image sensors which have, for example, more than 5, more than 50, or more than 500 optical channels.
Although the previously described embodiments were described in such a way that a calculating device performs a comparison of the positions at which the patterns are imaged in the image regions with a local or global center of an image region, a reference point with regard to which the displacement or torsion is determined can also be any other point.
Although the previously described embodiments show a two-dimensional arrangement of the image regions 22a-e, it is also conceivable that the image regions 22a-e are arranged along a one-dimensional line structure. This means that one of the two indices i or j is one-dimensional, and the position determination can be based on three reference regions or image regions.
Although some aspects have been described relating to a device, it is understood that these aspects also represent a description of the corresponding method, so that a block or a component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects that were described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.
In general, embodiments of the present invention may be implemented as a computer program product with a program code, the program code being effective to perform one of the methods when the computer program product runs on a computer. The program code can be stored, for example, on a machine-readable medium.
Other embodiments include the computer program for performing one of the methods described herein, wherein the computer program is stored on a machine-readable carrier.
In other words, an embodiment of the inventive method is therefore a computer program that has a program code for performing one of the methods described herein when the computer program runs on a computer. A further embodiment of the inventive method is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for performing one of the methods described herein is recorded.
Another embodiment includes a processing facility, such as a computer or a programmable logic device, that is configured or adapted to perform one of the methods described herein.
Another embodiment includes a computer on which the computer program for performing one of the methods described herein is installed.
In some embodiments, a programmable logic component (for example a field-programmable gate array, an FPGA) can be used to carry out some or all of the functionality of the methods described herein. In some embodiments, a field-programmable gate array can cooperate with a microprocessor to perform one of the methods described herein. Generally, in some embodiments, the methods are performed by any hardware device. This may be universally usable hardware, like a computer processor (CPU), or hardware specific to the method, such as, for example, an ASIC.
The above embodiments represent only an illustration of the principles of the present invention. It is understood that modifications and variations of the arrangements and details described herein will be apparent to other persons skilled in the art. Therefore, it is intended that the invention be limited only by the scope of protection of the following claims and not by the specific details which were presented herein on the basis of the description and explanation of the embodiments.
Patent claims
1. Device (10) for the relative positioning of a multi-aperture optics (12) with multiple optical channels (14a-f) relative to an image sensor (16), with the following features:
a reference object (18), which is arranged so that the reference object (18) is imaged by the multi-aperture optics (12) in the optical channels (14a-f) onto an image region (22a-e) per channel (14a-f);
a positioning device (24), which can be controlled in order to change a relative position between the multi-aperture optics (12) and the image sensor (16);
a calculating device (26), which is configured to determine actual positions of the reference object (18) in images of the reference object in at least three image regions (22a-e) and to control the positioning device (24) based on a comparison of the actual positions with target positions.
2. Device according to claim 1, in which the calculating device (26) is configured to control the positioning device (24) based on a comparison of the actual position of one image region (22a-e) relative to actual positions in different image regions (22a-e).
3. Device according to claim 1 or 2, in which the calculating device (26) is configured to control the positioning device (24) based on a comparison of the actual position of an image region (22a-e) with a target position regarding the image region (22a-e).
4. Device in accordance with one of the preceding claims, in which the calculating device (26) is configured to control a fixation device (28), where the fixation device is configured to harden an adhesive (32) which is arranged between the multi-aperture optics (12) and the image sensor (16) or between the multi-aperture optics (12) and a printed circuit board (36) on which the image sensor (16) is arranged.
5. Device in accordance with one of the preceding claims, where the reference object (18) has at least three reference regions (33a-c) which exhibit an imageable marker (35a-c), so that each reference marker (35a-c) is imaged onto one of at least three image regions (22a-c), where the calculating device (26) is configured to determine the actual positions based on a position of the reference markers (38a-e) in the image regions.
6. Device in accordance with one of the preceding claims, in which at least four outer image regions (22a-d) and an inner image region (22e) are arranged along a roll axis and a pitch axis, where the outer image regions (22a-d) are arranged opposite each other in two pairs parallel to the roll axis and in two pairs parallel to the pitch axis, where the roll axis and the pitch axis are perpendicular to each other and perpendicular to a yaw axis which is arranged parallel to a surface normal of the image sensor, and where the inner image region (22e) is arranged at an intersection point (O) of the roll axis, the pitch axis and the yaw axis, and where the calculating device (26) is configured, based on the comparison of the actual positions with the target positions, to determine a pattern deviation of a pattern (38) in the inner image region (22e) and in the four outer image regions (22a-d).
7. Device according to claim 6, where the calculating device (26) is configured to determine a measure of a distance (G) of pattern spacings based on the actual position for the inner image region (22e), where the calculating device (26) is configured to control the positioning device (24) so that it moves the multi-aperture optics (12) along the yaw axis based on the pattern deviation, so that the distance (G) reaches a distance target value (Gsoll).
8. Device according to claim 6 or 7, in which the calculating device (26) is configured to determine a measure of a first lateral deviation (ΔR) for the inner image region (22e) based on the pattern deviation along the roll axis, to determine a measure of a second lateral deviation (ΔR) for the inner image region (22e) based on the pattern deviation along the pitch axis, and to control the positioning device (24) so that the first lateral difference (ΔR) reaches a first lateral difference target value (0) and the second lateral difference reaches a second lateral difference target value (0).
9. Device in accordance with one of claims 6-8, in which the calculating device (26) is configured to determine a measure of wedge error differences (ΔtN) of pattern spacings of the actual positions for each of the four outer image regions (22a-d) and to control the positioning device (24) so that the multi-aperture optics is tilted about the roll axis or the pitch axis, so that the wedge error differences (ΔtN) reach a roll target value (0) or a pitch target value (0).
10. Device in accordance with one of claims 6-9, in which the calculating device (26) is configured to control the positioning device (24) so that the roll target value (0) and the pitch target value (0) are achieved sequentially one after the other (910, 950).
11. Device in accordance with one of claims 6-10, in which the calculating device (26) is configured to determine a measure of a rotational difference (δ) of the pattern deviation for each of the outer image regions (22a-d) along a first local (x) and a second local (y) lateral direction, and to control the positioning device (24) so that it rotates the multi-aperture optics (12) about the yaw axis, so that the rotational difference reaches a rotation target value (0).
12. Device in accordance with one of claims 6-11, in which the calculating device (26) is configured to determine a measure of a distance difference (ΔG) of the pattern deviation for each of the outer image regions (22a-d) along a local direction (x) parallel to the roll axis and along a local direction (y) parallel to the pitch axis, and to control the positioning device (24) so that it moves the multi-aperture optics (12) along the yaw axis, so that the distance differences (ΔG) reach a target value (0).
13. Device in accordance with one of the preceding claims, in which the calculating device is configured to:
determine a measure of a distance (G) of pattern spacings of the actual position for an inner image region (22e) based on the pattern deviation, where the calculating device (26) is configured to control the positioning device (24) so that it moves the multi-aperture optics (12) along the yaw axis, so that the distance (G) reaches a distance target value (Gsoll);
determine a measure of a first lateral deviation (ΔR) of the actual position for the inner image region (22e) based on the pattern deviation along a roll axis, determine a measure of a second lateral deviation for the inner image region based on the pattern deviation along a pitch axis, and control the positioning device (24) so that the first lateral difference (ΔR) reaches a first lateral difference target value (0) and the second lateral difference reaches a second lateral difference target value (0);
determine a measure of wedge error differences (ΔtN) of pattern spacings of the actual positions for four outer image regions (22a-d), which are arranged opposite each other in two pairs parallel to the roll axis and in two pairs parallel to the pitch axis, and control the positioning device (24) so that the multi-aperture optics is tilted about the roll axis or the pitch axis, so that the wedge error differences reach a roll target value or a pitch target value (0);
determine a measure of a rotational difference (δ) of the pattern deviation for the four outer image regions (22a-d) along a first local (x) and a second local (y) lateral direction, and control the positioning device (24) so that it rotates the multi-aperture optics (12) about the yaw axis, so that the rotational differences (δ) reach a rotation target value (0); and
determine a measure of a distance difference (ΔG) of the pattern deviation for each of the outer image regions (22a-d) along a local direction (x) parallel to the roll axis and along a local direction (y) parallel to the pitch axis, and control the positioning device (24) so that it moves the multi-aperture optics (12) along the yaw axis, so that the distance differences (ΔG) reach a target value (0).
14. Device according to claim 13, in which the calculating device (26) is configured to control the positioning device (24) in each case so that the distance (G) reaches a distance target value (Gsoll), the first lateral difference reaches a first lateral difference target value (0) and the second lateral difference reaches a second lateral difference target value (0), before the calculating device (26) controls the positioning device (24) so that the wedge error differences (ΔtN) reach the roll target value (0) or the pitch target value (0), the rotational differences reach the rotation target value (0), or the distance differences (ΔG) reach the target value (0).
15. Method for the relative positioning of a multi-aperture optics (12) with multiple optical channels (14a-f) relative to an image sensor (16), with the following steps:
arranging (12a) a reference object (18) so that the reference object (18) is imaged by the multi-aperture optics (12) in the optical channels (14a-f) onto an image region (22a-e) per channel (14a-f);
providing a positioning device (24), which can be controlled in order to change a relative position between the multi-aperture optics (12) and the image sensor (16);
determining actual positions (914, 954; 814, 854; 1006; 1106) of the reference object (18) in images of the reference object in at least three image regions (22a-d);
comparing the actual positions (824, 864; 926, 966; 1016; 1116) with target positions; and
controlling the positioning device (822, 862; 922, 962; 1014; 1114) based on the comparison.
| 17 | 201637043616-FER.pdf | 2020-02-24 |
| 18 | 201637043616-FER.pdf | 2020-02-24 |
| 18 | 201637043616-Information under section 8(2) (MANDATORY) [06-11-2019(online)].pdf | 2019-11-06 |
| 19 | 201637043616-Information under section 8(2) (MANDATORY) [13-05-2019(online)].pdf | 2019-05-13 |
| 19 | 201637043616-Information under section 8(2) [01-05-2020(online)].pdf | 2020-05-01 |
| 20 | 201637043616-Information under section 8(2) (MANDATORY) [14-12-2018(online)].pdf | 2018-12-14 |
| 20 | 201637043616-Verified English translation [04-06-2020(online)].pdf | 2020-06-04 |
| 21 | 201637043616-Information under section 8(2) (MANDATORY) [19-11-2018(online)].pdf | 2018-11-19 |
| 21 | 201637043616-Information under section 8(2) [10-07-2020(online)].pdf | 2020-07-10 |
| 22 | 201637043616-FORM 4(ii) [13-08-2020(online)].pdf | 2020-08-13 |
| 22 | 201637043616-Information under section 8(2) (MANDATORY) [04-07-2018(online)].pdf | 2018-07-04 |
| 23 | 201637043616-Information under section 8(2) (MANDATORY) [12-05-2018(online)].pdf | 2018-05-12 |
| 23 | 201637043616-Information under section 8(2) [14-10-2020(online)].pdf | 2020-10-14 |
| 24 | 201637043616-Information under section 8(2) (MANDATORY) [26-04-2018(online)].pdf | 2018-04-26 |
| 24 | 201637043616-FORM 3 [14-10-2020(online)].pdf | 2020-10-14 |
| 25 | 201637043616-Information under section 8(2) (MANDATORY) [24-11-2017(online)].pdf | 2017-11-24 |
| 25 | 201637043616-OTHERS [18-11-2020(online)].pdf | 2020-11-18 |
| 26 | 201637043616-FER_SER_REPLY [18-11-2020(online)].pdf | 2020-11-18 |
| 26 | Other Patent Document [05-05-2017(online)].pdf | 2017-05-05 |
| 27 | 201637043616-CLAIMS [18-11-2020(online)].pdf | 2020-11-18 |
| 27 | Other Patent Document [03-05-2017(online)].pdf | 2017-05-03 |
| 28 | 201637043616-ABSTRACT [18-11-2020(online)].pdf | 2020-11-18 |
| 28 | Form 18 [03-01-2017(online)].pdf | 2017-01-03 |
| 29 | 201637043616-Information under section 8(2) [08-01-2021(online)].pdf | 2021-01-08 |
| 29 | Description(Complete) [21-12-2016(online)].pdf | 2016-12-21 |
| 30 | 201637043616-Information under section 8(2) [22-04-2021(online)].pdf | 2021-04-22 |
| 30 | Description(Complete) [21-12-2016(online)].pdf_57.pdf | 2016-12-21 |
| 31 | Drawing [21-12-2016(online)].pdf | 2016-12-21 |
| 31 | 201637043616-FORM 3 [22-04-2021(online)].pdf | 2021-04-22 |
| 32 | Form 20 [21-12-2016(online)].pdf | 2016-12-21 |
| 32 | 201637043616-Information under section 8(2) [21-06-2021(online)].pdf | 2021-06-21 |
| 33 | Form 3 [21-12-2016(online)].pdf | 2016-12-21 |
| 33 | 201637043616-US(14)-HearingNotice-(HearingDate-13-02-2023).pdf | 2023-01-13 |
| 34 | Form 5 [21-12-2016(online)].pdf | 2016-12-21 |
| 34 | 201637043616-Correspondence to notify the Controller [14-01-2023(online)].pdf | 2023-01-14 |
| 1 | 2020-02-2117-13-15_21-02-2020.pdf |