
Device For Imaging Partial Fields Of View, Multi-Aperture Imaging Device And Method For Providing The Same

Abstract: The invention relates to a device comprising an image sensor and an array of optical channels, wherein each optical channel comprises an optic for imaging a partial field of view of a total field of view onto an image sensor area of the image sensor. A first optical channel of the array is configured to image a first partial field of view of the total field of view. A second optical channel of the array is configured to image a second partial field of view of the total field of view. The device comprises a calculation unit configured to obtain image information of the first and second partial fields of view based on the imaged partial fields of view, to obtain image information of the total field of view, and to combine the image information of the partial fields of view with the image information of the total field of view in order to produce combined image information of the total field of view.


Patent Information

Application #
Filing Date
11 October 2019
Publication Number
49/2019
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
lsdavar@vsnl.com
Parent Application

Applicants

FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V
Hansastrasse 27 c 80686 München

Inventors

1. DUPARRÉ, Jacques
Buchaer Str. 12C 07745 Jena
2. OBERDÖRSTER, Alexander
Karl-Liebknecht-Straße 6 07749 Jena
3. WIPPERMANN, Frank
Berliner Str. 57 98617 Meiningen
4. BRÜCKNER, Andreas
Dornburger Str. 77 07743 Jena

Specification

Device for imaging partial fields of view, multi-aperture imaging device and method for providing the same

Description

The present invention relates to an apparatus for multichannel capture of a total field of view, to a supplementary device for supplementing an existing camera, to a multi-aperture imaging apparatus, and to methods for providing the device and the multi-aperture imaging apparatus described herein. The present invention further relates to a symmetrical channel arrangement and to different fields of view.

Conventional cameras have one imaging channel that images the entire object field. Other cameras include multiple imaging channels in order to image the total field of view through several partial fields of view. For correct stitching of the images into a total field of view containing objects at different distances from the camera, it may be necessary to compute a depth map of the recorded total field of view. If stereoscopic acquisition is used for this purpose, it may be necessary to synthetically generate the perspective of an (artificial, central) reference camera. This can lead to occlusion problems, as some objects may be obscured along one line of sight.

It would therefore be desirable to have a concept for providing high-quality images that does not have the disadvantages mentioned above. The object of the present invention is therefore to provide high-quality images of the total field of view and at the same time to allow influence to be exerted on the image recording with little preprocessing effort. This object is solved by the subject matter of the independent patent claims.
A finding of the present invention is the recognition that the above object can be achieved in that image information of a total field of view, such as its resolution, can be increased by combining it with image information of partial fields of view of the same total field of view; in this case the image information of the total field of view is already present as coarse information and can be used, and the use of this overall image information allows the occurrence of occlusion artifacts to be avoided.

According to an embodiment, an apparatus includes an image sensor and an array of optical channels. Each optical channel comprises an optical system for imaging a partial field of view of a total field of view onto an image sensor region of the image sensor. A first optical channel of the array is configured to image a first partial field of view of the total field of view, and a second optical channel of the array is configured to image a second partial field of view of the total field of view. The apparatus includes a calculation unit configured to obtain image information of the first and second partial fields of view based on the imaged partial fields of view. The calculation unit is further configured to obtain image information of the total field of view, for example from a further device, and to combine the image information of the partial fields of view with the image information of the total field of view to produce combined image information of the total field of view. By combining the image information of the partial fields of view and the total field of view, high-quality combined image information is obtained, since a high amount of image information is present. Furthermore, the image information of the total field of view allows influence to be exerted with little preprocessing effort, since it can be displayed to a user without partial images having to be provided first.
According to a further embodiment, a supplementary device comprises such a device and is configured to be coupled to a camera in order to obtain from it the image information of the total field of view. This makes it possible to supplement existing, possibly mono, cameras by the additional imaging of the partial fields of view, so that high-quality combined image information of the total field of view is obtained. At the same time, the image of the camera can be used to exert influence on the image processing, since the information regarding the total field of view is already at least roughly present.

According to a further embodiment, a multi-aperture imaging device comprises an image sensor and an array of optical channels, each optical channel comprising optics for imaging at least one partial field of view of a total field of view onto an image sensor region of the image sensor.
A first optical channel of the array is configured to image a first partial field of view of the total field of view, a second optical channel of the array is configured to image a second partial field of view of the total field of view, and a third optical channel is configured to completely image the total field of view. This makes it possible to obtain both image information relating to the total field of view and, additionally, image information regarding the partial fields of view of the same total field of view, so that image areas of the partial fields of view are sampled multiple times, enabling, for example, a stereoscopic depth map and thus high-quality image generation. At the same time, in addition to the information concerning the partial fields of view, the information regarding the total field of view is also present, which allows influence to be exerted by the user without prior image processing. Further embodiments relate to a method for providing a device for multichannel capture of a total field of view and to a method for providing a multi-aperture imaging device. The mentioned exemplary embodiments make it possible to avoid or reduce occlusions, since the main viewing direction of the image of the total field of view and of the combined image information of the total field of view is unchanged and is merely supplemented by the images of the partial fields of view. Further advantageous embodiments are the subject of the dependent claims. Preferred embodiments of the present invention will be explained below with reference to the accompanying drawings.
The figures show:

FIG. 1 a schematic perspective view of a multi-aperture imaging device according to an embodiment;
FIGS. 2a-c schematic representations of arrangements of partial fields of view in a total field of view, according to an embodiment;
FIG. 3 a schematic perspective view of a multi-aperture imaging device having a calculation unit, according to an embodiment;
FIG. 4 a schematic representation of image sensor areas as they may be arranged, for example, in the multi-aperture imaging device according to FIG. 1 or FIG. 3, according to an embodiment;
FIG. 5 a schematic representation of a possible embodiment of the calculation unit, according to an embodiment;
FIG. 6 a schematic plan view of the multi-aperture imaging device of FIG. 3 according to an embodiment which is designed to create a depth map;
FIG. 7 a schematic perspective view of a multi-aperture imaging device according to another embodiment, which comprises a display device;
FIG. 8 a schematic perspective view of a multi-aperture imaging device according to an embodiment having an optical image stabilizer and an electronic image stabilizer;
FIG. 9 a schematic perspective view of a multi-aperture imaging device according to another embodiment, which comprises a focusing device;
FIG. 10 a schematic perspective view of a multi-aperture imaging device according to another embodiment, in which the image sensor areas are arranged on at least two mutually different chips and are oriented relative to one another;
FIG. 11 a schematic perspective view of a multi-aperture imaging device according to a further embodiment, in which optics have different focal lengths;
FIG. 12 a schematic perspective view of a device according to another embodiment;
FIG. 13 a schematic perspective view of a supplementary device according to an exemplary embodiment;
FIG. 14 a schematic flow diagram of a method for providing a device according to an embodiment; and
FIG. 15 a schematic flow diagram of a method for providing a multi-aperture imaging device according to an embodiment.
Before exemplary embodiments of the present invention are explained in detail below with reference to the drawings, it is noted that identical, functionally equal or equally acting elements, objects and/or structures are provided with the same reference numerals in the different figures, so that the description of these elements given in the different embodiments is interchangeable or mutually applicable.

FIG. 1 shows a schematic perspective view of a multi-aperture imaging device 10 according to one embodiment. The multi-aperture imaging apparatus 10 includes an image sensor 12 having a plurality of image sensor areas 24a-c. The image sensor 12 may be configured such that the image sensor regions 24a-c are part of a common chip, but may alternatively comprise multiple components, that is, the image sensor regions 24a-c may be disposed on different chips. Alternatively or additionally, the image sensor regions 24a and 24c may have a different size of the sensor surface than the image sensor region 24b, and/or a different number and/or size of pixels. The multi-aperture imaging apparatus 10 further includes an array 14 of optical channels 16a-c. Each of the optical channels 16a-c comprises an optical system 64a-c for imaging at least one partial field of view of a total field of view or object area onto an image sensor area 24a-c of the image sensor 12. Each of the optics 64a-c is assigned to one of the image sensor areas 24a-c and designed to influence a beam path 26a-c, such as by bundling or scattering, so that the respective partial field of view or total field of view is imaged onto the image sensor area 24a-c. The optics 64a-c may be disposed on a common carrier to form the array 14, but may otherwise be mechanically interconnected or not in mechanical contact.
Two of the optical channels 16a-c are formed to respectively image a partial field of view onto the associated image sensor area 24a-c. Imaging a partial field of view means that the total field of view is imaged incompletely. Another one of the optical channels 16a-c is designed to fully image the total field of view. For example, the multi-aperture imaging device 10 is configured such that the optical channel 16b is formed to completely capture the total field of view, while the optical channels 16a and 16c are designed to capture at most incompletely overlapping or disjointly arranged partial fields of view of the total field of view. This means that the arrangement of the optics 64a and 64c for capturing the first and second partial fields of view in the array 14 relative to the optic 64b for capturing the total field of view may be symmetrical, and/or that the arrangement of the image sensor areas 24a and 24c for imaging the first and second partial fields of view relative to the location of the image sensor area 24b for imaging the total field of view may be symmetrical. Although any other associations between fields of view, optics and image sensor areas are possible, the symmetrical arrangement in particular offers the advantage that the additional capture of the partial fields of view yields a symmetrical disparity with respect to the central field of view, i.e. the capture of the total field of view. The multi-aperture imaging device 10 may include an optional beam deflector 18, which in turn comprises beam deflecting regions 46a-c, the beam deflector 18 being configured to deflect a beam path 26a-c with each of the beam deflecting regions 46a-c. The beam deflector 18 may include a mirror surface having the beam deflecting regions 46a-c. Alternatively, at least two of the beam deflecting regions 46a-c may be inclined towards each other and form a plurality of mirror surfaces.
Alternatively or additionally, the beam deflection device 18 may have one or a plurality of facets. The use of the beam deflector 18 may be advantageous when the field of view to be captured lies in a direction different from the viewing direction between the image sensor 12 and the array 14 of the multi-aperture imaging device 10. Alternatively, in the absence of the beam deflecting device 18, the total field of view may be captured along the line of sight of the multi-aperture imaging device 10, that is, along the direction between the image sensor 12 and the array 14 and beyond. An arrangement of the beam deflection device 18, however, may allow the viewing direction of the multi-aperture imaging device 10 to be changed by translational and/or rotational movement of the beam deflection device 18 without having to change the orientation of the image sensor 12 and/or the array 14 in space.
FIG. 2a shows a schematic illustration of an arrangement of partial fields of view 72a and 72b in a total field of view 70 that can be captured, for example, by the multi-aperture imaging device 10. For example, the total field of view 70 may be imaged with the optical channel 16b onto the image sensor area 24b. The optical channel 16a may be configured to capture the partial field of view 72a and image it onto the image sensor area 24a. The optical channel 16c may be configured to capture the partial field of view 72b and to image it onto the image sensor area 24c. That is, a group of optical channels may be formed to capture exactly two partial fields of view 72a and 72b. Although shown with different extents for better distinguishability, the partial fields of view 72a and 72b may have an equal or comparable extent along at least one image direction 28 or 32, such as along the image direction 32. The extent of the partial fields of view 72a and 72b may be identical to the extent of the total field of view 70 along the image direction 32. This means that the partial fields of view 72a and 72b can completely capture the total field of view 70 along the image direction 32, capture the total field of view only partly along another image direction 28 arranged perpendicularly thereto, and be offset relative to one another, so that in combination a complete coverage of the total field of view 70 results along the second direction as well.
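The FIG. 2a/2b geometry just described can be sketched numerically. The rectangle coordinates below are invented for illustration; only the qualitative layout (full extent along image direction 32, offset with a small overlap along image direction 28) follows the text.

```python
# Sketch of the partial-field geometry of FIGs. 2a-b: two partial fields of
# view 72a/72b spanning the full extent along image direction 32 (y) and
# offset with a small overlap 73 along image direction 28 (x). Coordinates
# are normalized and invented for illustration.

def overlap(a, b):
    """Overlap rectangle of (x0, y0, x1, y1) fields, or None if disjoint."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

total_70 = (0.0, 0.0, 1.0, 1.0)
fov_72a = (0.0, 0.0, 0.55, 1.0)   # left partial field
fov_72b = (0.45, 0.0, 1.0, 1.0)   # right partial field, offset along x

region_73 = overlap(fov_72a, fov_72b)
covers_total = (fov_72a[0] <= total_70[0] and fov_72b[2] >= total_70[2]
                and fov_72a[2] >= fov_72b[0])  # no gap between the fields
print(region_73, covers_total)  # (0.45, 0.0, 0.55, 1.0) True
```

With a 10% overlap strip, the two partial fields jointly cover the total field of view, matching the "at most incompletely overlapping" arrangement described above.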
Here, the partial fields of view 72a and 72b may be disjoint with respect to each other or overlap at most incompletely in an overlap area 73 that may extend completely along the image direction 32 in the total field of view 70. A group of optical channels comprising the optical channels 16a and 16c may be configured to jointly image the total field of view 70 completely. The image direction 28 may be, for example, a horizontal of an image to be provided. Simplified, the image directions 28 and 32 represent two arbitrary image directions that differ from one another in space. FIG. 2b shows a schematic representation of an arrangement of the partial fields of view 72a and 72b which are arranged offset from one another along another image direction, the image direction 32, and overlap one another. The partial fields of view 72a and 72b can each capture the total field of view 70 incompletely along the image direction 28 and incompletely along the image direction 32. The overlap area 73 is, for example, arranged completely in the total field of view 70 along the image direction 28. FIG. 2c shows a schematic illustration of four partial fields of view 72a to 72d which incompletely capture the total field of view 70 in both directions 28 and 32. Two adjacent partial fields of view 72a and 72b overlap in an overlap area 73b. Two overlapping partial fields of view 72b and 72c overlap in an overlap area 73c. Similarly, partial fields of view 72c and 72d overlap in an overlap area 73d, and partial field of view 72d overlaps with partial field of view 72a in an overlap area 73a. All four partial fields of view 72a to 72d may overlap in an overlap area 73e of the total field of view 70. For capturing the total field of view 70 and the partial fields of view 72a-d, a multi-aperture imaging device may be formed similar to that described in connection with FIG.
1, wherein the array 14 may comprise, for example, five optics: four for capturing the partial fields of view 72a-d and one for capturing the total field of view 70. In the overlap areas 73a to 73e, a large amount of image information is accordingly available. For example, the overlap area 73b is captured via the total field of view 70, the partial field of view 72a and the partial field of view 72b. An image format of the total field of view may correspond to a redundancy-free combination of the imaged partial fields of view, for example the partial fields of view 72a-d in FIG. 2c, the overlapping areas 73a-e being counted only once in each case. In connection with FIGS. 2a and 2b, this applies to the redundancy-free combination of the partial fields of view 72a and 72b. An overlap in the overlap areas 73 and/or 73a-e may, for example, comprise at most 50%, at most 35% or at most 20% of the respective partial images. FIG. 3 shows a schematic perspective illustration of a multi-aperture imaging device 30 according to a further exemplary embodiment, which extends the multi-aperture imaging device 10 by a calculation unit 33. The calculation unit 33 is configured to obtain image information from the image sensor 12, that is, image information relating to the partial fields of view imaged on the image sensor regions 24a and 24c, such as the partial fields of view 72a and 72b, as well as image information of the total field of view, such as the total field of view 70, which can be imaged on the image sensor area 24b. The calculation unit 33 is designed to combine the image information of the partial fields of view and the image information of the total field of view. The combination of the image information may be carried out, for example, such that a degree of sampling of the total field of view is lower than a degree of sampling of the partial fields of view.
A degree of sampling can be understood to mean a local resolution of the partial or total field of view, that is to say a quantity indicating which surface in the object area is imaged onto which surface or pixel size of the image sensor. In embodiments described herein, the term resolution means the extent of the partial or total field of view that is imaged onto a corresponding image sensor surface. A comparatively larger resolution thus means that a constant surface area of a field of view is imaged with the same pixel size onto a larger image sensor area, and/or that a comparatively smaller object surface area is imaged with the same pixel size onto a constant image sensor area. By combining the image information, a degree of sampling and/or a resolution of the combined image information 61 may be increased relative to the capture of the total field of view.
FIG. 4 shows a schematic representation of image sensor regions 24a-c as they may be arranged in the multi-aperture imaging device 10 or 30. For example, the partial field of view 72a is imaged onto the image sensor area 24a, the partial field of view 72b onto the image sensor area 24c, and the total field of view 70 onto the image sensor area 24b. The arrangement of the partial fields of view 72a and 72b in space may correspond, for example, to the configuration according to FIG. 2b. The image sensor areas 24a, 24b and 24c may have, along the image direction 32, an equal physical dimension b, or one equal within a tolerance range of 20%, 10% or 5%, which may correspond to a corresponding number b of pixels. Along the image direction 28, the image sensor areas 24a and 24c may have a physical extent a that may correspond to a corresponding number a of pixels. The extent or number of pixels a may be greater along the image direction 28 than the extent or number of pixels c of the image sensor region 24b. Since the partial fields of view 72a and 72b are, along the image direction 28, the same size as the total field of view 70, a scan with a higher resolution or a higher degree of sampling of the total field of view takes place along the image direction 28; that is, a smaller area in the object area is imaged onto a pixel of constant size, so that the resulting combinational resolution or degree of sampling is increased.
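The pixel arithmetic of the FIG. 4 layout can be made concrete with a short sketch; all pixel counts below are invented for illustration, only the relations a > c and the stacked b rows come from the text.

```python
# Illustrative pixel arithmetic for the sensor layout of FIG. 4 (all pixel
# counts are assumptions): the partial-field sensors 24a/24c each have
# a x b pixels, the full-field sensor 24b has c x b pixels, with a > c.
a, b, c = 1600, 1200, 800   # hypothetical pixel counts
overlap_rows = 100          # rows shared in the overlap region 73 (assumed)

# Along image direction 28 each partial field spans the full field width,
# so the combined image samples that direction with a instead of c pixels.
gain_dir28 = a / c

# Along image direction 32 the two partial fields stack minus the overlap,
# giving 2*b - overlap_rows effective rows instead of b.
gain_dir32 = (2 * b - overlap_rows) / b
print(gain_dir28, round(gain_dir32, 2))  # 2.0 1.92
```

With these assumed numbers the combined image samples the total field of view roughly twice as densely as the central channel in both directions, which is the resolution increase argued for above.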
A super-resolution effect may additionally be implemented, for instance when the pixels of the images of the partial fields of view have a sub-pixel offset from each other. Along the image direction 32, for example, a number of 2 × b pixels is used to image the total field of view 70 via the partial fields of view 72a and 72b, the overlap region 73 being taken into account here. Due to the disjoint or only partial overlap of the partial fields of view 72a and 72b, an increased resolution also results along the image direction 32 compared with the capture of the total field of view 70 in the image sensor region 24b. Thus, the resolution of the combined image information 61 of the total field of view, obtained by combining the images in the image sensor areas 24a to 24c, can be increased compared with the resolution obtained in the image sensor area 24b. An aspect ratio of the image in the image sensor area 24b may have a value of 3:4. This makes it possible to obtain the combined image with an equal aspect ratio. A resolution in the image sensor areas 24a and/or 24c may be greater along the respective image direction than the resolution in the image sensor area 24b by at least 30%, at least 50% or at least 100%, within a tolerance range of 20% or 10%, the extent of the overlap area being taken into account here. The image sensor regions 24a-c may be arranged along a line extension direction 35, which may, for example, be arranged parallel to the image direction 28 and/or along which the optics 64a-c of the multi-aperture imaging device 10 or 30 may be arranged. Along a direction z perpendicular thereto, which may be, for example, a thickness direction of the multi-aperture imaging device, the image sensor regions 24a-c may have an equal extent within the tolerance range; that is, the resolution increase in the capture of the total field of view may be obtained while avoiding an additional thickness of the multi-aperture imaging device.
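The super-resolution effect mentioned above can be illustrated with a minimal 1-D toy model, not the device's actual processing: two samplings of the same signal, offset by half a pixel, interleave into one sampling at twice the rate.

```python
import numpy as np

# Toy 1-D illustration of the super-resolution effect: two low-rate
# samplings of the same "scene", offset against each other by half a
# coarse pixel, are interleaved into one sampling at twice the rate.
fine = np.arange(16, dtype=float)   # the "scene", finely sampled
view_a = fine[0::2]                 # one channel samples even positions
view_b = fine[1::2]                 # the other, shifted by half a pixel

combined = np.empty_like(fine)
combined[0::2] = view_a
combined[1::2] = view_b
print(np.array_equal(combined, fine))  # True: doubled sampling recovered
```

In the device this corresponds to exploiting a sub-pixel offset between the images of the partial fields of view; real processing would additionally need registration and deblurring, which the sketch omits.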
In other words, a linear symmetrical arrangement of at least three camera channels, i.e. optical channels, can be implemented, one of the optical channels, preferably the central optical channel, covering the entire field of view and the (two) outer channels each covering only one part of the field of view, such as top/bottom or left/right, so that together they cover the entire field of view and may also have a slight overlap in the center of the field of view. This means that high-resolution partial images are obtained left/right or top/bottom, while in the center a low-resolution image covering the entire relevant field of view is captured. The resolution in the central image may be reduced as far as the correspondingly shorter focal length for the same image height, i.e. without considering the aspect ratio, and the same pixel size dictates or allows. In other words, the heights of the image sensor areas 24a, 24b and 24c are the same. Without overlap, the image height in the image sensor area 24b is therefore about half as large as the combined image height of 24a and 24c. Therefore, in order to image the same field of view, the focal length (or magnification) of the optics for the image sensor region 24b (optical channel 16b) may be half as long as that for 24a and 24c. For the same pixel size, this means half the resolution (of the field of view) in 24b compared to 24a and 24c combined. Corresponding image widths simply follow from the desired aspect ratio of the images.
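The factor-of-two focal-length relation above can be checked with a line of small-angle arithmetic; the sensor height and focal length below are invented numbers, only the 2:1 ratio is taken from the text.

```python
# Hedged arithmetic for the focal-length relation (numeric values are
# assumptions; only the factor of two comes from the description): all
# three sensors share height h, and without overlap the two partial images
# stack to an effective image height of 2*h. To fit the same field of view
# onto the single height h, the central channel needs half the focal
# length, which at equal pixel size means half the resolution.
h = 3.0           # sensor height in mm (assumed)
f_partial = 8.0   # focal length of the outer channels 16a/16c in mm (assumed)

# Same field angle, half the image height -> half the focal length:
f_full = f_partial * h / (2 * h)
print(f_full, f_full / f_partial)  # 4.0 0.5
```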
The middle camera channel is the reference camera for depth map generation, if a depth map is created by the calculation unit. This arrangement, including the symmetry with respect to the central channel, allows a high quality of the obtained combined overall image with respect to occlusions in the depth map. The middle image is therefore usefully also the reference for the calculation of the higher-resolution combined image. Into the low-resolution reference, the at least two higher-resolution images are inserted in blocks.
They serve as material that can be used where accuracy of fit is ensured, meaning that matching features are found in the partial fields of view and in the total field of view. The insertion can be done in very small blocks, so that problems with parallax can be avoided even with fine objects at large depths. The matching blocks are searched for, for example, by correspondence search, which may mean that a disparity map, i.e. a depth map, is generated. If, however, a high-resolution block is not found with sufficient certainty for a low-resolution block, this has no catastrophic effects: the low-resolution source image is simply retained there. In other words, holes in the depth map only result in blurred areas in the overall image rather than clearly visible artifacts. Put differently, the center image sensor with its shorter focal length can capture a lower-resolution image covering the entire field of view (FOV) and inherently, i.e. natively, having the desired aspect ratio. This camera may also be referred to as the reference camera, for example because the resulting combined image has its perspective. In addition, there is an assembly of higher-resolution partial images, which may partially overlap and, taken together, have the same aspect ratio as the reference camera.
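The block-wise insertion with a fall-back to the low-resolution reference can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; block size, search radius and threshold are assumptions, and the high-resolution image is assumed to be registered to the reference grid.

```python
import numpy as np

# Sketch of block-wise insertion: for each block of the low-resolution
# reference, search a high-resolution partial image for a matching block
# via a sum-of-squared-differences (SSD) correspondence search. If the
# match is confident, paste the high-resolution block; otherwise keep the
# reference block, so a "hole" only leaves that block blurred instead of
# producing a visible artifact.

def fuse_blocks(reference, highres, block=8, search=4, max_ssd=50.0):
    out = reference.astype(float).copy()
    h, w = reference.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref_blk = reference[y:y + block, x:x + block].astype(float)
            best, best_blk = np.inf, None
            # correspondence search in a small window around the block
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = highres[yy:yy + block, xx:xx + block].astype(float)
                        ssd = float(np.sum((cand - ref_blk) ** 2))
                        if ssd < best:
                            best, best_blk = ssd, cand
            if best <= max_ssd:  # confident match: insert high-res block
                out[y:y + block, x:x + block] = best_blk
    return out

# Toy check: if the "high-resolution" image equals the reference, every
# block matches perfectly and is inserted unchanged.
img = (np.arange(256, dtype=float).reshape(16, 16)) % 97
fused = fuse_blocks(img, img)
print(np.array_equal(fused, img))  # True
```

The best offset (dy, dx) found per block is exactly the disparity mentioned in the text, so the same search can populate a disparity map as a by-product.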
The combination of the images of the partial fields of view with the reference camera allows very accurate stitching in the overlap area, since a lower-resolution but inherently correct image is available there. The height of all three image sensors is, according to an advantageous embodiment, equal or close to equal in order to make optimum use of the available height. All cameras can be redirected via a common mirror (beam deflector). If necessary, a depth map can be calculated as follows: in the overlap area 73, from the two high-resolution partial images together with the low-resolution overall image; in the remaining areas, in each case by combining one of the high-resolution partial images with the corresponding section of the low-resolution overall image. FIG. 5 shows a schematic representation of a possible embodiment of the calculation unit 33. The calculation unit 33 can be designed to divide the image information of the total field of view 70 and the image information of the partial fields of view 72a and 72b into image blocks 63a of the partial field of view 72a, image blocks 63b of the partial field of view 72b and image blocks 63c of the total field of view 70. An image block may have a certain number of pixels along both image directions 28 and 32. The blocks may, for example, have a size along the image directions 28 and 32 of at least 2 and at most 1000 pixels, of at least 10 and at most 500 pixels, or of at least 20 and at most 100 pixels.
The calculation unit may be configured to assign, block by block, image information contained in an image block of the total field of view to matching image information of an image block of the first or second partial field of view 72a or 72b, and to increase a resolution of the image information of the total field of view in the combined image information by combining the first and second image blocks. Each of the first and second image blocks may be a matching image block of images of different partial fields of view in an overlap region thereof. Alternatively or additionally, the first or second block may be a block of the overall image and the other block a block of a partial field of view. The calculation unit 33 is designed, for example, to identify the object represented by x in the block 63a 3 as coinciding with the object x in the block 63c 1 of the total field of view 70. Based on the higher resolution of the partial field of view 72a compared to the total field of view 70, the calculation unit may combine the image information of both blocks 63a 3 and 63c 1 to obtain a resulting resolution in the block that is higher than it was originally in the total field of view. The resulting combined resolution may be equal to or even higher than the resolution of the detection of the partial field of view 72a. The object represented by # in a block 63c 2 of the total field of view 70 is identified by the calculation unit, for example, in a block 63a 2 of the partial field of view 72a and in a block 63b 1 of the partial field of view 72b, so that image information from both images of the partial fields of view 72a and 72b can be used to improve the image quality.
An object represented by * in a block 63c 3 is identified by the calculation unit, for example, in a block 63b 2 of the partial field of view 72b, so that the image information of the block 63b 2 can be used by the calculation unit, for example, to enhance the image information in the block 63c 3. In a case where no block of the partial fields of view 72a and 72b can be assigned to a block of the total field of view, as shown for the block 63c 4, for example, the calculation unit may be configured to output that block of the combined total image as the block 63c 4 is arranged in the overall picture. That is, image information can be displayed even if there is no local increase in resolution. This results in at most slight changes in the overall image; at the location of the block 63c 4 there is, for example, a locally reduced resolution. The calculation unit 33 may be configured to stitch the image information of the partial fields of view 72a and 72b based on the image information of the total field of view 70. This means that the overall image of the total field of view can be used to at least support, or even execute, an alignment of the partial images of the partial fields of view 72a and 72b. Alternatively or additionally, the information from the overall image of the total field of view can be used to support or even execute the arrangement of the objects of the scene from the partial images and/or in a partial image within the overall image. The total field of view shows a high number of, or even all, objects which are also shown in the partial fields of view 72a and 72b. In other words, the low-resolution image always provides a basis for the stitching of the high-resolution images, that is, an orientation basis, since the objects are already present in the overall image.
Stitching can mean, in addition to simply joining two global sub-image areas, that objects are rearranged into the scene or background of a stitched image depending on their spacing (with respect to their lateral position in the image), which may be required or desired depending on the distance of the objects in the scene. The concepts described herein greatly simplify the stitching process, even though a depth map may be necessary for accurate stitching. Occlusion problems due to a missing camera in the center position can be avoided, since at least three optical channels allow at least three viewing directions towards the total field of view. An occlusion in one viewing direction can thus be reduced or prevented by one or two different viewing angles. FIG. 6 shows a schematic plan view of the multi-aperture imaging device 30 according to one exemplary embodiment. The calculation unit 33 may be configured to create a depth map 81. The depth map 81 may refer to the image information of the total field of view 70. The calculation unit 33 is, for example, configured to exploit disparities 83a between the images of the partial field of view 72a and the total field of view 70 and disparities 83b between the images of the partial field of view 72b and the total field of view 70 to form the depth map. That is, the physical spacing of the optics 64a, 64b and 64c and of the image sensor areas 24a, 24b and 24c provides different viewing angles and perspectives, respectively, that are used by the calculation unit 33 to create the depth map 81. The calculation unit 33 may be configured to create the depth map 81 using the image information of the partial fields of view 72a and 72b in the overlap area 73 where the partial fields of view 72a and 72b overlap. This allows the use of a greater disparity compared to the individual disparities 83a and 83b, simplifies the combination of the individual disparities, and allows the use of the high-resolution (partial) images.
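The advantage of the larger baseline in the overlap area 73 follows from the standard pinhole relation between depth and disparity. The sketch below uses purely hypothetical numbers (focal length and baselines are not taken from the specification) to show that doubling the baseline doubles the disparity for the same object depth, i.e. gives finer depth quantization.

```python
# Pinhole-camera relation behind the depth map: depth = f * b / disparity.
# The larger baseline between the two outer channels (usable in overlap
# area 73) yields a larger disparity for the same depth than the shorter
# baseline between an outer channel and the center channel.
# All numbers are illustrative, not taken from the specification.

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    return focal_px * baseline_mm / disparity_px

f = 2000.0       # focal length in pixel units (hypothetical)
b_short = 10.0   # baseline outer channel <-> center channel (mm)
b_long = 20.0    # baseline between the two outer channels (mm)

z = 500.0                    # object depth in mm
d_short = f * b_short / z    # disparity for the short baseline
d_long = f * b_long / z      # twice the disparity for the long baseline

assert d_long == 2 * d_short
assert depth_from_disparity(d_long, f, b_long) == z
```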
That is, in one embodiment, the calculation unit 33 may be configured to create the depth map 81 in the overlap area 73 without the information of the total field of view 70 imaged on the image sensor area 24b. Alternatively, use of the information of the total field of view 70 in the overlap area 73 is possible and advantageous, for example, for a high information density. The images in the image sensor regions 24a and 24c can, according to an advantageous development, be made without the use of (e.g. RGB) Bayer color filter arrangements as in the image sensor area 24b, or at least using uniform color filters, so that the imaged first and second partial fields of view provide color-uniform luminance information from the multi-aperture imaging device. For example, a color-uniform infrared filter, ultraviolet filter, red filter, blue filter or the like, or even no filter, may be disposed, while a multicolor filter such as a Bayer array, as in the image sensor area 24b, is not arranged. In other words, since the outer channels only contribute detail to increase the quality of the image of the total field of view, it may be advantageous for the outer channels 16a and 16c to have no color filters.
Although the outer channels then contribute only luminance information, i.e. higher general sharpness/detail and no better color information, the advantage obtained lies in the higher sensitivity and thus lower noise, which in turn ultimately allows better resolution or sharpness: the image is less smoothed since, for example, no Bayer color filter pattern lies above the pixels, and the resolution in the pure luminance channels is inherently higher (ideally almost twice as high), since no de-Bayering is necessary anymore. Effectively, a color pixel can be about twice as large as a black-and-white pixel; physically possibly not, because black-and-white pixels can be used not only for resolution but also for color discrimination by superposition of black-and-white pixels with the typical RGBG filter pattern. FIG. 7 shows a schematic perspective view of a multi-aperture imaging apparatus 71 according to a further embodiment comprising a display device 85. The multi-aperture imaging device 71 is configured to reproduce the representation of the total field of view 70 imaged on the image sensor region 24b with the display device 85. For this purpose, for example, the calculation unit 33 may be formed to forward the corresponding signal from the image sensor 12 to the display device 85. Alternatively, the display device 85 may also be coupled directly to the image sensor 12 and receive the corresponding signal from the image sensor 12. The display device 85 is configured to receive and output the image information of the total field of view having at most the resolution provided by the image sensor region 24b. Preferably, the resolution of the total field of view depicted on the sensor area 24b is passed on unchanged to the display device 85. This allows a display of the currently captured image or video, such as a preview for a user, so that the user can influence the recording. The higher-resolution images provided by the combined image information 61 may be provided to the display device 85 at a different time, provided to another display device, stored or transmitted.
It is also possible to obtain the combined image information 61 intermittently, i.e. only when needed, and otherwise to use the possibly lower-resolution image of the total field of view 70 if it is sufficient for the current use, such as viewing in the display device 85 without zooming in on details, without the need for a depth map. This allows influencing the image acquisition without computationally intensive and time-consuming combination of the image signals, which has an advantageous effect on the delay in the display device 85 and on the energy requirement for the calculations. For example, the multi-aperture imaging device 71 may also be another multi-aperture imaging device described herein, such as the multi-aperture imaging device 10, 30 or 60, in a mobile phone, a smartphone, a tablet or a monitor. The multi-aperture imaging device 71 can provide a real-time preview on the display 85, so the two outer camera channels do not always have to be activated; power can be saved and/or no additional computational effort for linking the fields of view is required, resulting in reduced processor usage and reduced energy consumption, which also allows a longer battery life. Alternatively or additionally, raw data can be stored for the time being and a high-resolution image generated only upon transmission to another computing unit such as a PC and/or when viewed on the display device with zoom into the detail. Here it is possible, if necessary, to create the combined image only for relevant image regions or, at least in regions, not to create the combined image for irrelevant image regions. Relevant areas may, for example, be image areas for which an enlarged representation (zoom) is desired. Thus, both for single images and for a video, the image of the image sensor area 24b is directly usable and may have sufficient resolution for a video.
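The intermittent, region-of-interest-only combination can be sketched as a small lazy-evaluation pattern. This is an illustrative structure only: `LazyCombinedImage`, `upscale2` and the cache are hypothetical names standing in for the actual fusion, not the claimed implementation.

```python
import numpy as np

def upscale2(img):
    """Stand-in for the expensive block-wise fusion: 2x nearest upscaling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

class LazyCombinedImage:
    """Low-resolution preview by default; fuse high-res only on demand."""

    def __init__(self, overview, combine_fn):
        self.overview = overview      # low-res total field of view (24b)
        self.combine_fn = combine_fn  # costly combination of image signals
        self._cache = {}

    def preview(self):
        # Real-time preview: no fusion cost, no depth map needed.
        return self.overview

    def zoom(self, y0, y1, x0, x1):
        # High-resolution combination only for the requested (relevant) region.
        key = (y0, y1, x0, x1)
        if key not in self._cache:
            self._cache[key] = self.combine_fn(self.overview[y0:y1, x0:x1])
        return self._cache[key]
```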
It is also conceivable that the middle camera is provided with the same resolution as common video formats, i.e. about 1080p or 4K, so that otherwise usual resampling (sample rate conversion), binning (combining adjacent pixels) or skipping (skipping pixels) can be avoided; the resolution can also be high enough that high-resolution still images can be generated. FIG. 8 shows a schematic perspective view of a multi-aperture imaging apparatus 80 having an optical image stabilizer 22 and an electronic image stabilizer 41. The image stabilization aspects described below can be implemented without restrictions with the functionalities of the calculation unit 33, individually or in combination with one another. The optical image stabilizer 22 comprises, for example, actuators 36a, 36b and 42, wherein the actuators 36a and 36b are designed to achieve the optical image stabilization of the images of the partial fields of view in the image sensor areas 24a to 24c by a displacement of the array 14 along the line extension direction 35. Furthermore, the optical image stabilizer 22 is, for example, designed to obtain an optical image stabilization along the image axis 32 by a rotational movement 38 of the beam deflection device 18. For example, the optics 64a and 64c of the array 14 have effective focal lengths f 1 and f 3 that differ from one another by at most 10%, at most 5% or at most 3%, in order to capture the partial fields of view in approximately the same way. The optic 64b may have a focal length f 2 which differs therefrom by at least 10%. The channel-global rotational movement 38 leads, in conjunction with the different focal lengths f 2 and f 1, or within the focal length differences between f 1 and f 3, to different displacements 69 1 to 69 3 of the images in the image sensor areas 24a-c.
That is, the optical image stabilizer 22 achieves different effects in the images by the channel-global rotational movement 38, so that at least one, several or all of the images deviate from a theoretical error-free state. The optical image stabilizer 22 may be configured to globally minimize the aberrations of all images, which, however, may result in errors in each of the images. Alternatively, the optical image stabilizer 22 may be configured to select a reference image in one of the image sensor areas 24a-c and to control the actuator 42 so that the image in the reference image or reference channel is as accurate as possible, which may also be termed error-free. That means the other images deviate from this reference image. In other words, one channel is corrected with the mechanically realized optical image stabilizer, which acts on all channels but does not keep all channels stable; these other channels are additionally corrected with the electronic image stabilizer. The optical image stabilizer may be configured to provide the relative movements channel-individually for the optical channels and/or individually for groups of optical channels, such as the group of optical channels 16a and 16c for detecting the partial fields of view and the group comprising the optical channel 16b for detecting the total field of view. The electronic image stabilizer 41 may be configured to perform channel-specific electronic image stabilization in each channel according to a predetermined functional relationship, which depends on the relative movements between the image sensor 12, the array 14 and the beam deflector 18. The electronic image stabilizer 41 may be configured to stabilize each image individually. The electronic image stabilizer 41 can use global values for this, such as the camera movement or the like, in order to increase the optical quality of the images.
It is particularly advantageous if the electronic image stabilizer 41 is designed to perform electronic image correction on the basis of a reference image of the optical image stabilizer 22: aberration = f(f i, relative movement), that is, the aberration, global or relative to the reference channel, is representable as a function of the focal length (or the focal length differences) and of the relative movement performed to change the line of sight or for optical image stabilization. The electronic image stabilizer 41 can relate the amount of relative movement between the image sensor 12, the array 14 and the beam deflector 18 to the focal lengths f 1 to f 3, or to the focal length differences relative to the reference channel, to obtain reliable information about the electronic image stabilization to be performed, and to establish and/or exploit the functional relationship. The required data on the optical properties and/or the functional relationship can be obtained during a calibration. Aligning images with each other to determine a shift of one image relative to another may also be accomplished by determining a matching feature in the images of the partial fields of view, such as edge trajectories, object sizes or the like. This can be identified, for example, by the electronic image stabilizer 41, which may further be configured to provide electronic image stabilization based on a comparison of the motions of the feature in the first and second images. The channel-individual electronic image stabilization can thus be done by a channel-specific image analysis of the movement of image details. As an alternative or in addition to a comparison between different images, a comparison of the feature within the same image can also take place, in particular between two images or frames spaced apart in time.
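The functional relationship aberration = f(f i, relative movement) can be illustrated with a small-angle sketch. All values are hypothetical: assuming the image shift caused by a deflector rotation scales roughly with the channel focal length, and assuming the optical stabilizer exactly cancels the shift of the reference channel, the residual shift of another channel follows from the focal-length difference and is removed electronically.

```python
import math

def image_shift(f_px, theta_rad):
    """Approximate image shift caused by rotating the deflector by theta.

    For a focal length in pixel units, the shift is roughly f * tan(theta),
    i.e. ~ f * theta for small angles.
    """
    return f_px * math.tan(theta_rad)

f_ref = 1000.0   # reference channel focal length, pixel units (hypothetical)
f_i = 2000.0     # outer channel with roughly twice the focal length
theta = 0.002    # deflector rotation in radians (hypothetical shake)

s_ref = image_shift(f_ref, theta)   # cancelled by the optical stabilizer
s_i = image_shift(f_i, theta)       # actual shift in channel i
residual = s_i - s_ref              # left over after optical stabilization

# Electronic stabilization shifts the image of channel i back by `residual`.
corrected = s_i - s_ref - residual
assert abs(corrected) < 1e-12
```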
The electronic image stabilizer 41 may be configured to identify a matching feature in the corresponding partial field of view at a first time and at a second time, and to provide electronic image stabilization based on a comparison of the motions of the feature in the two images. The comparison may, for example, indicate a shift by which the feature has been displaced by a relative movement and by which the image is to be shifted back in order to at least partially correct the aberration. The optical image stabilizer can be used to stabilize an image of the imaged partial field of view of a reference channel, such as the image in the image sensor area 24a. This means that the reference channel can be fully optically stabilized. According to embodiments, a plurality of optical image stabilizers can be arranged which provide optical image stabilization for at least groups of optical channels, such as a group of optical channels of a first focal length, for example the optical channel for the total field of view, and a group of optical channels of a second focal length, for example for imaging the partial fields of view. Alternatively, a channel-specific optical image stabilization can be provided. The electronic image stabilizer 41 is formed, for example, to perform image stabilization on a channel-by-channel basis for optical channels other than the reference channel, which image onto the image sensor areas 24b and 24c. The multi-aperture imaging device may be configured to optically stabilize the reference channel only. That is, in one embodiment, sufficiently good image stabilization in the reference channel can be achieved by utilizing only the mechanically obtained optical image stabilization.
In addition, electronic image stabilization is carried out for the other channels in order to partially or completely compensate for the previously described effect of insufficient optical image stabilization as a result of focal length differences, wherein the electronic stabilization takes place individually in each channel. According to another embodiment, it is further possible for each channel of the multi-aperture imaging device to have individual electronic image stabilization. The electronic image stabilization performed individually for each channel of the multi-aperture imaging device can take place in such a way that a defined functional relationship between the image shifts to be implemented in the individual channels is utilized. For example, the displacement along the direction 32 in one channel is 1.1, 1.007, 1.3, 2 or 5 times the displacement along the direction 32 in another image. Depending on the image sensor, this relationship may be linear or may correspond to an angular function which maps a rotation angle of the beam deflection device to an extent of the electronic image stabilization along the image direction. An identical relationship can be obtained with the same or different numerical values for the direction 28. For all embodiments, the realized relative movements can be detected by corresponding additional sensors, such as gyroscopes, etc., or can be derived from the recorded image data of one, several or all channels. This data or information can be used for the optical and/or electronic image stabilizer; that is, the multi-aperture imaging device is designed, for example, to receive a sensor signal from a sensor, to evaluate the sensor signal with respect to information that correlates with a relative movement between the multi-aperture imaging device and the object, and to control the optical and/or electronic image stabilizer using this information.
The optical image stabilizer may be configured to provide optical image stabilization along the image axes 28 and 32 by movement of different components, such as the array 14 for stabilization along the direction 28 and the rotation 38 of the beam deflector 18 for stabilization along the direction 32. In both cases, differences between the optics 64a-c have an effect. The foregoing explanations regarding electronic image stabilization may be implemented for both relative movements. In particular, considering the directions 28 and 32 separately allows different deviations between the optics 64a-c along the directions 28 and 32 to be taken into account. Embodiments described herein may utilize a common image axis 28 and/or 32 for the sub-images in the image sensor regions 24a-c. Alternatively, the directions may also differ and be converted into each other. FIG. 9 shows a schematic perspective view of a multi-aperture imaging device 90 according to a further embodiment, which comprises a focusing device 87. The focusing device 87 may include one or more actuators 89a, 89b and/or 89c, which are formed to adjust a distance between the array 14 and the image sensor 12 and/or between the beam deflecting device 18 and the array 14 and/or between the beam deflector 18 and the image sensor 12, in order to adjust the focusing of the images on the image sensor areas 24a, 24b and/or 24c. Although the optics 64a, 64b and 64c are illustrated as being disposed on a common carrier so as to be movable together, at least the optic 64b, the image sensor region 24b and/or the beam deflecting region 46b may be moved individually in order to set a focusing of the optical channel 16b different from the focusing in other channels. That is, the focusing device 87 may be configured to set a relative movement for the first and second optical channels 16a and 16c and a relative movement for the optical channel 16b differently from each other.
The focusing device 87 can be combined with the optical image stabilizer 22; that is, a movement that is provided by actuators both in the optical image stabilizer 22 and in the focusing device 87 can be provided by additionally arranged actuators or by a common actuator which provides motion between components for both focusing and optical image stabilization. In other words, the use of separate actuators for autofocus (AF) and, optionally, optical image stabilization (OIS) is advantageous. Due to the possibly unequal construction of the adjacent channels in terms of resolution and focal length, a channel-specific actuator can allow a channel-individual adjustment, so that the benefits of autofocusing and/or image stabilization are obtained in all channels. Thus, for example, the function of the autofocus at different focal lengths requires different image distances in order to perform the focusing with high quality. Alternative designs are possible for the optical channel configured to capture the total field of view. FIG. 10 shows a schematic perspective illustration of a multi-aperture imaging device 100 according to a further exemplary embodiment, in which the image sensor regions 24a to 24c are arranged on at least two chips which are different from one another and oriented relative to one another, i.e. inclined. The image sensor area 24b, in combination with the optic 64b, may have a first viewing direction, possibly directly towards the total field of view 70. The image sensor areas 24a and 24c, in combination with their associated optics 64a and 64c, can have a different viewing direction, for example perpendicular thereto along an x-direction; the beam paths 26a and 26c are deflected by the beam deflecting device 18 towards the partial fields of view 72a and 72b. This is an alternative form of construction to the previously described multi-aperture imaging devices.
The use of the beam deflector 18 may result in a certain mirror size or baffle size which may be greater for the channel 16b than for the adjacent partial-field detection channels, since the channel 16b has to capture the larger total field of view compared to the partial fields of view 72a and 72b. This can lead to an increase along a thickness direction of the device, for example a z-direction, which is undesirable in some embodiments. The use of the beam deflector 18 can therefore be redesigned so that only the beam paths 26a and 26c are deflected, while the beam path 26b is directed directly, i.e. without deflection, towards the total field of view 70. In other words, the middle camera channel without a deflecting mirror is thus installed in the classical orientation directly out of the plane of the device, for example a telephone, in the middle between the two deflected camera channels of higher resolution. Due to the lower resolution, for example a value of 0.77 (1/1.3), 0.66 (1/1.5) or 0.5 (1/2), which corresponds to a higher resolution of the additional channels of at least 30%, at least 50% or at least 100%, and the correspondingly shorter focal length, the middle camera channel in such an upright configuration has approximately the same height along the z-direction as the two outer camera channels lying down. Although this solution may prevent a switching of the viewing direction of the central channel 16b, this can be compensated by a possible further arrangement of an additional camera channel. An autofocus function and/or an optical image stabilization can be provided by an individual arrangement of actuators.
In other words, a large field of view "1" can be imaged "upright" with a short focal length and/or lower magnification, and a smaller partial field of view "2" can be imaged with a longer focal length and/or greater magnification "lying down and with a folded beam path". "1" is already short but allows a large field of view, which, however, can make the mirror large, while "2" … FIG. 11 shows a schematic perspective view of a multi-aperture imaging apparatus 10 according to another embodiment, wherein a distance d

Documents

Application Documents

# Name Date
1 201937041116.pdf 2019-10-11
2 201937041116-STATEMENT OF UNDERTAKING (FORM 3) [11-10-2019(online)].pdf 2019-10-11
3 201937041116-FORM 1 [11-10-2019(online)].pdf 2019-10-11
4 201937041116-FIGURE OF ABSTRACT [11-10-2019(online)].pdf 2019-10-11
5 201937041116-DRAWINGS [11-10-2019(online)].pdf 2019-10-11
6 201937041116-DECLARATION OF INVENTORSHIP (FORM 5) [11-10-2019(online)].pdf 2019-10-11
7 201937041116-COMPLETE SPECIFICATION [11-10-2019(online)].pdf 2019-10-11
8 201937041116-FORM 18 [25-10-2019(online)].pdf 2019-10-25
9 201937041116-MARKED COPIES OF AMENDEMENTS [06-11-2019(online)].pdf 2019-11-06
10 201937041116-FORM 13 [06-11-2019(online)].pdf 2019-11-06
11 201937041116-Annexure [06-11-2019(online)].pdf 2019-11-06
12 201937041116-AMMENDED DOCUMENTS [06-11-2019(online)].pdf 2019-11-06
13 201937041116-FORM-26 [25-11-2019(online)].pdf 2019-11-25
14 201937041116-Information under section 8(2) (MANDATORY) [27-11-2019(online)].pdf 2019-11-27
15 201937041116-Proof of Right [10-02-2020(online)].pdf 2020-02-10
16 201937041116-Information under section 8(2) [25-02-2020(online)].pdf 2020-02-25
17 201937041116-Information under section 8(2) [13-08-2020(online)].pdf 2020-08-13
18 201937041116-Information under section 8(2) [03-10-2020(online)].pdf 2020-10-03
19 201937041116-Information under section 8(2) [22-12-2020(online)].pdf 2020-12-22
20 201937041116-Information under section 8(2) [02-02-2021(online)].pdf 2021-02-02
21 201937041116-Information under section 8(2) [17-02-2021(online)].pdf 2021-02-17
22 201937041116-Verified English translation [20-05-2021(online)].pdf 2021-05-20
23 201937041116-Information under section 8(2) [20-05-2021(online)].pdf 2021-05-20
24 201937041116-Information under section 8(2) [02-07-2021(online)].pdf 2021-07-02
25 201937041116-FER.pdf 2021-10-18
26 201937041116-AbandonedLetter.pdf 2024-06-28

Search Strategy

1 2021-02-1112-37-07E_11-02-2021.pdf