
System And Method For Acquiring Visible And Near Infrared Images By Means Of A Single Matrix Sensor

Abstract: System for the simultaneous acquisition of colour and near-infrared images, comprising a single matrix sensor comprising a first, a second and a third type of pixel, sensitive to respective visible colours, and a fourth, panchromatic, type of pixel, all these pixels also being sensitive in the near infrared; and a signal-processing circuit configured to: reconstruct a first set of monochromatic images from the signals generated by the pixels of the first, second and third types respectively; reconstruct a panchromatic image from the signals generated by the pixels of the fourth type; reconstruct a second set of monochromatic images from the signals generated by the pixels of the first, second and third types and from said panchromatic image; reconstruct a colour image from the images of the first set and the panchromatic image; and reconstruct at least one near-infrared image from the images of the second set and the panchromatic image. A visible/near-infrared bi-spectral camera comprising such an acquisition system, and a method implemented by means of such a camera, are also provided.


Patent Information

Application #
Filing Date
01 June 2018
Publication Number
33/2018
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application
Patent Number
Legal Status
Grant Date
2023-09-19
Renewal Date

Applicants

THALES
TOUR CARPE DIEM Place des Corolles Esplanade Nord 92400 Courbevoie

Inventors

1. HORAK, Raphaël
Thales Optronique S.a. 2 avenue Gay Lussac 78995 ELANCOURT Cedex
2. COURCOL, Yves
Thales Optronique S.a. 2 avenue Gay Lussac 78995 ELANCOURT Cedex
3. PERRUCHOT, Ludovic
Thales Optronique S.a. 2 avenue Gay Lussac 78995 ELANCOURT Cedex

Specification

The invention relates to a system for acquiring images in the visible and in the near infrared, to a visible/near-infrared bi-spectral camera comprising such a system, and to a method for the simultaneous acquisition of colour and near-infrared images using such a camera.

The "near infrared" ("PIR", or "NIR" from the English "Near InfraRed") corresponds to the spectral band 700-1100 nm, while visible light extends between 350 and 700 nm. The near infrared is sometimes considered to start at 800 nm, the intermediate band 700-800 nm being eliminated by means of an optical filter.

The invention can be applied both in the fields of defence and security (e.g. for night vision) and in consumer electronics.

Conventionally, images in visible light (hereinafter "visible images" for brevity), usually in colour, and images in the near infrared are acquired independently by two different matrix sensors. To reduce bulk, these two sensors can be associated with a single image-forming optical system through a dichroic beam-splitting plate, so as to form a bi-spectral camera.

Such a configuration has a number of disadvantages. First, the use of two independent sensors and a beam splitter increases the cost, bulk, power consumption and weight of a bi-spectral camera, which is especially problematic in embedded applications such as airborne ones. In addition, the optical system has to be specially adapted for this application, which precludes the use of off-the-shelf optics, increasing costs still further.

It has also been proposed, mainly in academic work, to use a single matrix sensor for acquiring both visible and near-infrared images. Indeed, the silicon sensors widely used in digital cameras have a sensitivity which extends from the visible into the near infrared; for this reason, cameras intended to operate only in the visible are equipped with an optical filter to prevent contamination of the image by the infrared component.

The documents:

- D. Kiku et al., "Simultaneously Capturing of RGB and Additional Band Images using Hybrid Color Filter Array", Proc. of SPIE-IS&T Electronic Imaging, SPIE Vol. 9023 (2014); and

- US 8,619,143

describe matrix sensors having pixels of four different types: pixels sensitive to blue, green and red light, as in conventional "RGB" sensors, but also "grey" pixels, responsive only to NIR radiation. Conventionally, these pixel types are obtained by depositing absorbing filters, forming a colour filter array, on elementary silicon sensors which are, by themselves, "panchromatic", i.e. sensitive over the whole visible and near-infrared band. Generally, the pigments used to make these filters are transparent in the near infrared; the images acquired by the "red", "green" and "blue" pixels are thus affected by the infrared component of the incident light (the optical filter used in conventional cameras obviously being absent) and digital processing is necessary to retrieve realistic colours.

The documents:

- Z. Sadeghipoor et al., "Designing Color Filter Arrays for the Joint Capture of Visible and Near-Infrared Images", 16th IEEE Conference on Image Processing (2009); and

- Z. Sadeghipoor et al., "Correlation-Based Joint Acquisition and Demosaicing of Visible and Near-Infrared Images", 18th IEEE Conference on Image Processing (2011)

describe sensors having more complex colour filter arrays, intended to optimize the reconstruction of the visible and infrared images.

Conversely, the document

Z. Sadeghipoor et al., "A Novel Compressive Sensing Approach to Simultaneously Acquire Color and Near Infrared Images on a Single Sensor", Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada (2013)

discloses a sensor with a colour filter array that is close to, but slightly different from, the "Bayer" matrix most commonly used in colour cameras. A conventional Bayer pattern would not allow the visible and infrared components to be separated.

These approaches use sensors in which all the pixels are equipped with a spectral filter. However, in order to achieve high-sensitivity cameras, it is advantageous to use sensors also including panchromatic pixels, without filters. In some cases, so-called "sparse" sensors, having a high percentage of panchromatic pixels, are even used to capture most of the incident radiation. These sensors exploit the fact that the chrominance of an image can be sub-sampled relative to its luminance without an observer perceiving a significant degradation of its quality.

The document :

D. Hertel et al., "A low-cost VIS-NIR true color night vision video system based on a wide dynamic range CMOS imager", IEEE Intelligent Vehicles Symposium, 2009, pages 273-278,

discloses the use of a sensor comprising both coloured pixels and panchromatic pixels for the simultaneous acquisition of visible and near-infrared images. The method for constructing the NIR images is not explained, and no example of such an image is shown; only "full-band" monochromatic images, usable in low-light conditions, are shown. Furthermore, this article concerns only the case of an "RGBM" sensor, which contains only 25% panchromatic pixels, which greatly limits the achievable gain in sensitivity.

The invention aims to overcome the above drawbacks of the prior art. More specifically, it aims to provide a system for the simultaneous acquisition of visible and NIR images with high sensitivity, making it possible to obtain high-quality images. Preferably, the invention allows the use of commercial off-the-shelf ("COTS") matrix sensors and optical systems.

According to the invention, high sensitivity is achieved by the use of sensors comprising both coloured pixels (and possibly "grey" pixels sensitive to the NIR) and a rather high number of panchromatic pixels (preferably more than a quarter of the pixels), and preferably sparse sensors; high image quality is achieved through the implementation of an innovative digital processing. Specifically, this processing involves the reconstruction of a panchromatic image and of two "intermediate" visible colour images. These two intermediate visible images are obtained by means of two different treatments: the first, which may be called "intra-channel", uses only the signals from the coloured pixels, while the other, which may be called "cross-channel", also exploits the panchromatic image. The NIR image is obtained from the "cross-channel" colour image and the panchromatic image, while the colour image is obtained by combining the "intra-channel" intermediate image with the panchromatic image.

An object of the invention is an image acquisition system comprising: a matrix sensor comprising a two-dimensional arrangement of pixels, each pixel being adapted to generate an electrical signal representative of the light intensity at a point of an optical image of a scene; and a signal processing circuit configured to process the electrical signals generated by said pixels so as to generate digital images of said scene; wherein said matrix sensor comprises a two-dimensional arrangement of: so-called coloured pixels, of at least a first type, sensitive to visible light in a first spectral range, a second type, sensitive to visible light in a second spectral range different from the first, and a third type, sensitive to visible light in a third spectral range different from the first and the second, the combination of the spectral ranges of the different types of coloured pixels reconstituting the entire visible spectrum; and so-called panchromatic pixels, sensitive to the entire visible spectrum, at least the panchromatic pixels also being sensitive to the near infrared; characterized in that said signal processing circuit is configured to: reconstruct a first set of monochromatic images from the electrical signals generated by the coloured pixels; reconstruct a panchromatic image from the electrical signals generated by the panchromatic pixels; reconstruct a second set of monochromatic images from the electrical signals generated by the coloured pixels and from said panchromatic image; reconstruct a colour image by applying a first colorimetric matrix to at least the monochromatic images of the first set and said panchromatic image; reconstruct at least one image in the near infrared by applying a second colorimetric matrix to at least the monochromatic images of the second set and said panchromatic image; and output said colour image and said at least one image in the near infrared.

Preferably, the images of the first set are obtained only from the electrical signals generated by the coloured pixels, without contribution from the electrical signals generated by the panchromatic pixels. In addition, the first colorimetric matrix is preferably such that the colour image output is substantially free of contributions from an infrared component of the optical image, while the second set of images will generally include such contributions.

According to particular embodiments of the invention:
- Said coloured pixels may include only pixels of said first, second and third types, which are also sensitive to the near infrared. More specifically, pixels of one of the first, second and third types may be sensitive to green light, those of another of these types to blue light, and those of the remaining type to red light.

- Said matrix sensor may be of the sparse type, more than a quarter, and preferably at least half, of its pixels being panchromatic.

- Said signal processing circuit may be configured to reconstruct the monochromatic images of said first set by applying a method comprising the steps of: determining the light intensity associated with each pixel of said first type and reconstructing a first monochromatic image of said first set by interpolating said light intensities; determining the light intensity associated with each coloured pixel of the other types and subtracting from it a value representative of the intensity associated with a corresponding pixel of said first monochromatic image; reconstructing new monochromatic images by interpolation of the light-intensity values of the respective coloured pixels of said other types, from which said values representative of the intensity associated with a corresponding pixel of said first monochromatic image were subtracted; and then combining these reconstructed new images with said first monochromatic image to obtain the respective final monochromatic images of said first set.

- Said signal processing circuit may be configured to reconstruct said panchromatic image by interpolation of the electrical signals generated by the panchromatic pixels.

- Said signal processing circuit may be configured to reconstruct the monochromatic images of said second set by calculating the luminance level of each pixel of each said image by applying a locally defined linear function to the luminance of the corresponding pixel of the panchromatic image.

- Alternatively, said signal processing circuit may be configured to reconstruct the monochromatic images of said second set by calculating the luminance level of each pixel of each said image by means of a non-linear function of the luminance levels of a plurality of pixels of the panchromatic image in a neighbourhood of the panchromatic-image pixel corresponding to said pixel of said image of the second set, and/or of the light intensity of a plurality of coloured pixels.

- Said matrix sensor may consist of a periodic repetition of blocks containing pseudo-random distributions of pixels of different types, said signal processing circuit being configured to: extract regular patterns of pixels of the same type from said matrix sensor; and reconstruct said first and second sets of monochromatic images by processing said regular patterns of pixels of the same type in parallel.

- Said signal processing circuit may also be configured to reconstruct a monochromatic low-light-level image by applying a third colorimetric matrix to at least the monochromatic images of the second set and said panchromatic image.

- Said matrix sensor may also comprise a two-dimensional arrangement of pixels sensitive only to the near infrared, said signal processing circuit being configured to reconstruct said image in the near infrared also from the electrical signals generated by these pixels.

- The system may also include an actuator for generating a periodic relative displacement between the matrix sensor and the optical image, the signal processing circuit being adapted to reconstruct said first and second sets of monochromatic images, and said panchromatic image, from the electrical signals generated by the pixels of the matrix sensor in a plurality of discrete relative positions of the matrix sensor and the optical image.

- Said signal processing circuit may be implemented by means of a programmable logic circuit.

Another object of the invention is a visible/near-infrared bi-spectral camera comprising such an image acquisition system and an optical system adapted to form an optical image of a scene on the matrix sensor of the image acquisition system, without filtering out the near infrared.

Yet another object of the invention is a method for the simultaneous acquisition of colour and near-infrared images by use of such a bi-spectral camera.

Other features, details and advantages of the invention will become apparent on reading the description given with reference to the accompanying drawings, given by way of example, which represent, respectively:

Figure 1A, a block diagram of a camera according to an embodiment of the invention;

Figure 1B, graphs illustrating the spectral response of the pixels of the matrix sensor of the camera of Figure 1A;

Figure 2, a block diagram of the processing implemented by the processing circuit of the camera of Figure 1A, according to one embodiment of the invention;

Figures 3A to 6C, diagrams illustrating various steps of the processing of Figure 2;

Figures 7A and 7B, two diagrams showing two variants of a processing implemented in a particular embodiment of the invention;

Figure 8, a matrix sensor of an image acquisition system according to another variant of the invention;

Figures 9A and 9B, images illustrating the technical results of the invention.

Figure 1A shows the highly simplified block diagram of a visible/near-infrared bi-spectral camera CBS according to one embodiment of the invention. The camera CBS comprises an optical system SO, generally based on lenses, which forms an optical image IO of an observed scene. The image IO is formed on the surface of a matrix sensor CM comprising a two-dimensional arrangement of pixels; each pixel produces an electrical signal representative of the light intensity of the corresponding point (actually a small region) of the optical image, weighted by its spectral sensitivity curve. These electrical signals, generally after having been converted to digital format, are processed by a processing circuit CTS which provides at its output digital images: a first, colour image IVIS (consisting of three monochromatic images of different colours) and a second, monochromatic, near-infrared image. Optionally, the processing circuit CTS may also output a visible monochromatic image IBNL, useful in particular when the observed scene has a low brightness level. The matrix sensor CM and the processing circuit CTS constitute what is referred to hereinafter as an image acquisition system. The circuit CTS may be a suitably programmed microprocessor, preferably one specialized for digital signal processing (DSP, "Digital Signal Processor"), or a dedicated digital circuit, produced for example from a programmable logic circuit such as an FPGA; it may also be an application-specific integrated circuit (ASIC, "Application Specific Integrated Circuit").

The matrix sensor can be of the CCD or CMOS type; in the latter case it may incorporate an analogue-to-digital converter so as to provide digital signals directly at its output. In any case, it comprises at least four different types of pixels: three first types sensitive to spectral bands corresponding to colours which, when mixed, restore the white of the visible spectral band (typically red, green and blue), and a fourth, "panchromatic" type. In a preferred embodiment, all these types of pixels also have a non-zero sensitivity in the near infrared, which is the case with silicon sensors. This sensitivity in the near infrared is generally considered a nuisance and removed using an optical filter, but it is exploited by the invention. Advantageously, all the pixels have the same structure and differ only by a filter coating, generally polymer-based, on their surface (absent in the case of the panchromatic pixels). Figure 1B shows the sensitivity curves of the red (R_PB), green (V_PB), blue (B_PB) and panchromatic (M_PB) pixels. Note that the sensitivity in the near infrared is much the same for the four types of pixels. The index "PB" means "full band", to indicate that the infrared component of the incident radiation has not been filtered.

As will be explained in detail below with reference to Figure 8, the sensor may also include pixels of a fifth type, sensitive only to the near infrared.

Figure 1B also identifies the visible (VIS, 350-700 nm) and near-infrared (PIR, 800-1100 nm) spectral regions. The intermediate band (700-800 nm) can be filtered out, but this is not advantageous in the case of the invention; more usefully, it can be treated as near infrared.

Advantageously, the matrix sensor CM can be "sparse", meaning that the panchromatic pixels are at least as numerous as, and preferably more numerous than, those of each of the three colours. Advantageously, at least half of the pixels are panchromatic. This improves the sensitivity of the sensor because the panchromatic pixels, having no filter, receive more light than the coloured pixels.

The arrangement of the pixels may be pseudo-random, but is preferably regular (that is to say, periodic along the two spatial dimensions) in order to facilitate image-processing operations. It may in particular be a periodic repetition of a random pattern, that is to say, a periodic repetition of blocks within which the pixels are distributed pseudo-randomly.

For example, the left-hand side of Figure 3A shows the diagram of a matrix sensor in which every other pixel is of the panchromatic type, one pixel in four is green, one in eight is red and one in eight is blue. The fact that the green pixels are more numerous than the red or blue ones in this example reflects the fact that the human eye has its peak sensitivity for that colour, but this proportion between green, blue and red pixels is not compulsory in the invention (any proportion between these different types of pixels may be used).

It is also possible to use a sensor obtained by the regular repetition of a block of MxN pixels containing a pseudo-random distribution of coloured and panchromatic pixels (but with a controlled proportion of these different types of pixels).

For example, Figure 3B shows a sensor pattern, called "1/16" ("1/N" meaning that one pixel in N is blue or red), consisting of repeated 4x4 pixel blocks in which are randomly distributed 12 panchromatic pixels, one red pixel, one blue pixel and two green pixels. Several random-distribution pattern configurations are possible, whether of the type of Figure 3A (sparse type "1/8") or of Figure 3B (sparse type "1/16"). Some patterns may be selected preferentially for the quality of the output images they give with the treatments that will be described later.
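The pixel proportions described above can be illustrated with a short sketch that tiles an elementary block over the sensor. The exact block layout below is an assumption chosen only to satisfy the "1/8" ratios (one pixel in two panchromatic, one in four green, one in eight red and blue); the patent does not prescribe this particular arrangement.

```python
import numpy as np

# Illustrative 4x4 block: "P" panchromatic (checkerboard), "G" green,
# "R" red, "B" blue. Per block of 16 pixels: 8 P, 4 G, 2 R, 2 B.
BLOCK = np.array([
    ["P", "G", "P", "R"],
    ["B", "P", "G", "P"],
    ["P", "G", "P", "R"],
    ["B", "P", "G", "P"],
])

def mosaic_mask(rows, cols):
    """Tile the elementary block over a rows x cols sensor."""
    reps = (-(-rows // 4), -(-cols // 4))   # ceiling division
    return np.tile(BLOCK, reps)[:rows, :cols]

mask = mosaic_mask(8, 8)
counts = {c: int((mask == c).sum()) for c in "PGRB"}
# 64 pixels: half panchromatic, a quarter green, an eighth red and blue
```

The same tiling mechanism applies to any MxN block, including the pseudo-random "1/16" blocks of Figure 3B.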

It is also possible to use more than three types of coloured pixels, having different sensitivity bands, so as to obtain a plurality of monochromatic visible images (and, where appropriate, near-infrared images) corresponding to these bands. Hyperspectral images can thus be obtained.

Moreover, it is not essential for all the coloured pixels to be sensitive to the near infrared: it may be sufficient that the panchromatic pixels are.

Figure 2 schematically illustrates the processing implemented by the processing circuit CTS. The different steps of this processing will be detailed later with reference to Figures 3A to 6C.

The circuit CTS receives as input a set of digital signals representing the light-intensity values detected by the individual pixels of the matrix sensor CM. In the figure, this set of signals is referred to as the "full-band sparse image". The first processing operation consists in extracting from this set the signals corresponding to the pixels of the different types. In the case of a regular array of MxN pixel blocks (M>1 and/or N>1), one speaks of the extraction of the "patterns" R_PB (full-band red), V_PB (full-band green), B_PB (full-band blue) and M_PB (full-band panchromatic). These patterns correspond to sub-sampled images, images "with holes"; it is therefore necessary to carry out a reconstruction of complete, fully sampled images.
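The pattern-extraction step can be sketched as follows. Because the sensor is a periodic repetition of an elementary block, all pixels of one type occupy a regular sub-lattice whose values can be pulled out directly (and the different patterns then processed in parallel). The 2x2 block and the function name are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def extract_pattern(raw, block_mask, ptype):
    """Return, in raster order, the values of all pixels of type `ptype`
    in a sensor built by tiling `block_mask` over the full frame."""
    bh, bw = block_mask.shape
    rows, cols = raw.shape
    full_mask = np.tile(block_mask, (rows // bh, cols // bw))
    return raw[full_mask == ptype]

# Toy 4x4 frame with a 2x2 block: panchromatic on the diagonal.
raw = np.arange(16).reshape(4, 4)
block = np.array([["P", "G"], ["B", "P"]])
m_pattern = extract_pattern(raw, block, "P")   # full-band panchromatic pattern
```

Each extracted pattern is a sub-sampled image "with holes", which the subsequent steps interpolate back to full resolution.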

The reconstruction of a full-band panchromatic image IM_PB is the simplest operation, especially when the panchromatic pixels are the most numerous. Such is the case in Figure 3A, wherein every other pixel is panchromatic. The light intensity corresponding to the "missing" pixels (that is to say the blue, green or red ones, which therefore cannot be used directly to reconstruct the panchromatic image) can be calculated, e.g. by a simple bilinear interpolation. The method which has proved most effective is the so-called "median" method: each "missing" pixel is assigned a light-intensity value which is the median of the light-intensity values measured by the panchromatic pixels constituting its nearest neighbours.
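The "median" reconstruction described above can be sketched as follows for a checkerboard of panchromatic pixels, where each missing pixel takes the median of its four nearest panchromatic neighbours. Edge handling by replication is an assumption made for simplicity; it is not specified by the text.

```python
import numpy as np

def reconstruct_panchromatic(raw, pan_mask):
    """raw: 2-D full-band mosaic; pan_mask: True where the pixel is
    panchromatic. Missing pixels get the median of their panchromatic
    4-neighbours (edges replicated)."""
    padded = np.pad(raw.astype(float), 1, mode="edge")
    pmask = np.pad(pan_mask, 1, mode="edge")
    out = raw.astype(float).copy()
    rows, cols = raw.shape
    for i in range(rows):
        for j in range(cols):
            if pan_mask[i, j]:
                continue                  # measured directly, keep it
            # up, down, left, right in padded coordinates
            neigh = [(i, j + 1), (i + 2, j + 1), (i + 1, j), (i + 1, j + 2)]
            vals = [padded[r, c] for r, c in neigh if pmask[r, c]]
            out[i, j] = np.median(vals) if vals else out[i, j]
    return out

# Demo: checkerboard of panchromatic pixels with luminance 5; the coloured
# pixels (set to 0 here) are filled in from their panchromatic neighbours.
idx = np.indices((6, 6)).sum(axis=0)
pan_mask = (idx % 2 == 0)
raw = np.where(pan_mask, 5.0, 0.0)
ipan = reconstruct_panchromatic(raw, pan_mask)
```

On a uniform scene the reconstruction is exact; on real images the median limits the influence of outlier neighbours compared with a plain bilinear average.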

The reconstruction of the full-band colour images (red, green, blue) is performed twice, using two different methods. The first method is called "intra-channel" because it uses only the coloured pixels to reconstruct the colour images; the second method is called "cross-channel" because it also uses the information from the panchromatic pixels. Examples of such methods are described below, with reference to Figures 4A-4D (intra-channel) and 5A-5B (cross-channel). Conventionally, only intra-channel methods are used.

The full-band colour images, whether obtained by an intra-channel or a cross-channel method, cannot be used directly because they are "contaminated" by the NIR component, which is not filtered by the optical system. This NIR component can be eliminated by combining the full-band images IR_PB, IV_PB, IB_PB obtained by the intra-channel method with the full-band panchromatic image by means of a first colorimetric matrix (reference MCol1 in Figure 6A). A visible colour image IVIS is thus obtained, composed of three monochromatic sub-images IR, IV, IB: red, green and blue, respectively.

The full-band images IR*_PB, IV*_PB, IB*_PB obtained by the cross-channel method are likewise combined with the full-band panchromatic image by means of a second colorimetric matrix (reference MCol2 in Figure 6B). The elements of this matrix, serving as the coefficients of the combination, are selected so as to obtain an image INIR in the near infrared (or, more precisely, a monochromatic image, usually in black and white, representative of the luminance of the image IO in the spectral range of the near infrared).

Optionally, the combination of the full-band images IR*_PB, IV*_PB, IB*_PB obtained by the cross-channel method with the full-band panchromatic image by means of a third colorimetric matrix (reference MCol3 in Figure 6C) provides a monochromatic image IBNL representative of the luminance of the image IO over the whole visible spectral range, but without pollution by the infrared components. As it exploits the signals from the panchromatic pixels, which are more numerous and have no filter, this image may be brighter than the colour image IVIS and is therefore particularly suitable for low-light conditions. It is important to note that the image IBNL contains no contribution from the near infrared, which is in effect filtered out digitally. By contrast, the aforementioned article by D. Hertel et al. describes obtaining a "low light level" image that combines images in the visible and in the near infrared; such an image is visually different from a purely visible image such as IBNL. In some cases, one might be interested only in the near-infrared image INIR, and optionally in the low-brightness visible monochromatic image IBNL; in these cases, it would not be necessary to implement the intra-channel reconstruction method.
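The colorimetric-matrix step can be illustrated with a minimal numerical sketch: the full-band planes [R, G, B, M] are combined linearly, and a suitable row of coefficients cancels the visible part and isolates the NIR component. The toy spectral model and the coefficient values below are illustrative assumptions, not the patent's calibrated MCol1/MCol2/MCol3 matrices.

```python
import numpy as np

def apply_colorimetric(matrix, planes):
    """matrix: (k, 4); planes: (4, H, W) stack [R_PB, G_PB, B_PB, M_PB].
    Returns k output planes, one per matrix row."""
    return np.tensordot(matrix, planes, axes=1)     # shape (k, H, W)

# Toy model: each full-band colour plane = colour + NIR, and the
# panchromatic plane M_PB = R + G + B + NIR (idealized sensitivities).
nir, r, g, b = 2.0, 1.0, 3.0, 5.0
planes = np.full((4, 2, 2), 0.0)
planes[0], planes[1], planes[2] = r + nir, g + nir, b + nir
planes[3] = r + g + b + nir
# Under this model, (R_PB + G_PB + B_PB - M_PB) / 2 recovers the NIR:
mcol2 = np.array([[0.5, 0.5, 0.5, -0.5]])
inir = apply_colorimetric(mcol2, planes)[0]
```

In practice the matrix coefficients depend on the actual spectral sensitivity curves of Figure 1B and are obtained by calibration; the sketch only shows how a linear combination can digitally "filter out" one spectral component.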

An advantageous "intra-channel" image-reconstruction method will now be described with reference to Figures 4A-4D. This description is given only as an example, for many other methods known in the literature (diffusion approaches, wavelets, constant hue, etc.) may be suitable for the implementation of the invention.

In the matrix sensor of Figure 3A, the blue pixels are grouped into regular sub-units (or sub-patterns). The pattern MPB formed by these pixels can be decomposed into two sub-units SMPB1, SMPB2 which are mutually identical but spatially shifted; each of these sub-units has four blue pixels at the corners of a square of 5x5 pixels, as shown in Figure 4A. This decomposition into regular sub-units is applicable to the different types of pixels regardless of their pseudo-random arrangement within a block of MxN pixels. Especially when the processing circuit CTS is realized by means of a dedicated digital circuit, it is advantageous to decompose the patterns into sub-units that can be processed in parallel.

Figure 4B illustrates the case where the pattern of green pixels is divided into four sub-units SMPV1, SMPV2, SMPV3 and SMPV4. Complete green monochromatic images IV_PB1, IV_PB2, IV_PB3, IV_PB4 are obtained by bilinear interpolation of these sub-units. A "full-band" green monochromatic image IV_PB is then obtained by averaging them.

The reconstruction of the full-band red and blue images is a little more complex. It is based on a method similar to the "colour constancy" method described in US 4,642,678.

First, the full-band green image IV_PB is subtracted from the sub-units of red and blue pixels. More specifically, this means that from the signal of each red or blue pixel is subtracted a value representative of the intensity of the corresponding pixel of the full-band green image IV_PB. The pattern of red pixels is split into two sub-units SMPR1, SMPR2; after the subtraction, modified sub-units SMPR1', SMPR2' are obtained. Likewise, the pattern of blue pixels is split into two sub-units SMPB1, SMPB2; after the subtraction, modified sub-units SMPB1', SMPB2' are obtained. This is illustrated in Figure 4C.

Then, as illustrated in Figure 4D, images IR1'_PB, IR2'_PB are obtained by bilinear interpolation of the red sub-patterns and averaged to provide a modified red image IR'_PB. Similarly, images IB1'_PB, IB2'_PB are obtained by bilinear interpolation of the blue sub-patterns and averaged to provide a modified blue image IB'_PB.

The full-band red image IR_PB and the full-band blue image IB_PB are obtained by adding the full-band green image IV_PB to the modified red and blue images IR'_PB, IB'_PB.

The advantage of proceeding in this way, by subtracting the reconstructed green image from the patterns of red and blue pixels and adding it back at the end of the processing, is that the modified sub-units have a low intensity dynamic, which reduces interpolation errors. The problem is less acute for green, which is sampled more finely.
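The "colour constancy" idea behind the intra-channel steps above can be sketched in one dimension: the finely sampled green component is interpolated first, the sparser red samples are expressed as differences (R - G) whose dynamic is small, the differences are interpolated, and the green image is added back. The sampling positions and the linear test scene are illustrative assumptions.

```python
import numpy as np

def interp_at(xs, ys, grid):
    """Linear interpolation of samples (xs, ys) at integer grid positions."""
    return np.interp(grid, xs, ys)

grid = np.arange(9)
g_xs = grid[::2]                     # green sampled every 2 pixels
r_xs = grid[::4]                     # red sampled every 4 pixels
scene_g = 10.0 + 2.0 * grid          # underlying green ramp
scene_r = 12.0 + 2.0 * grid          # red follows green with a small offset

g_full = interp_at(g_xs, scene_g[g_xs], grid)   # full green image
diff = scene_r[r_xs] - g_full[r_xs]             # R - G at red sampling sites
r_full = interp_at(r_xs, diff, grid) + g_full   # add green back at the end
```

Because the interpolated quantity (R - G) is nearly constant wherever the hue varies slowly, its interpolation error is much smaller than that of interpolating the sparse R samples directly.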

The cross-channel reconstruction is performed differently. It explicitly uses the panchromatic pixels, unlike the intra-channel reconstruction, which exploits only the red, green and blue pixels. For example, it can be carried out by means of an algorithm that may be called the "monochrome law" algorithm, illustrated with the aid of Figures 5A and 5B. The idea behind this algorithm is that the colour components (green, red and blue) generally exhibit spatial variations which approximately "follow" those of the panchromatic component. One can therefore use the panchromatic component reconstructed by bilinear interpolation, taking advantage of its denser spatial sampling, to calculate the luminance level of the missing pixels of the colour components. More particularly, the luminance level of each coloured pixel can be determined by applying a linear or affine function to the luminance of the corresponding pixel of the panchromatic image. The linear or affine function in question is determined locally and depends on the already-known luminance levels of the nearby coloured pixels (measured directly or already calculated).

Figure 5A refers to the case of a sparse sensor in which only every fourth row contains blue pixels; in these rows, one pixel in four is blue. A "full" panchromatic image, defined for all the pixels of the sensor, is also available, obtained in the manner described above with reference to Figure 3A. Figure 5B shows a segment of a row of the matrix sensor comprising two blue pixels separated by three pixels for which the blue component is not defined; superimposed on this row portion is a portion of the panchromatic image, defined over all the pixels. Let C1 and C5 denote the known luminances of the two blue (more generally, coloured) pixels at the ends of the row segment, C2, C3 and C4 the luminances, to be calculated, of the blue component at the three intermediate pixels, and M1 to M5 the known luminances of the panchromatic image at the same pixels.

The first step of the method is to reconstruct the rows of the blue component of the image using the panchromatic image; only the rows containing blue pixels, one row in four, are reconstructed in this way. At the end of this step, one has complete blue rows, separated by rows in which the blue component is not defined. Looking at the columns, one sees that in each column one pixel in four is blue. The blue columns can then be reconstructed by interpolation aided by the knowledge of the panchromatic image, as was done for the rows. The procedure is the same for the green and red components.

The application of the monochrome law to reconstruct a coloured component of the image proceeds as follows.

One considers the pixels M1 to M5 of the reconstructed panchromatic image lying between two pixels C1 and C5 of the colour pattern considered, the extreme pixels M1 and M5 being co-located with these two coloured pixels. It is then determined whether the corresponding portion of the panchromatic image can be considered uniform. To do so, the total luminance variation of the panchromatic image between M1 and M5 is compared with a threshold Th. If |M5 - M1| exceeds Th, the colour pixels C2 to C4 can be reconstructed directly by applying a "monochrome law", that is to say the affine function expressing Ci as a function of Mi (i = 1 to 5) and such that the calculated values of C1 and C5 coincide with the measured values:

Ci = m · Mi + p

with

m = (C5 - C1) / (M5 - M1)

and

p = C1 - m · M1

As before, if the luminance must take an integer value, the calculated result is rounded.

Reconstruction by direct application of the monochrome law can lead to an excessive dynamic range of the reconstructed coloured component, or to its saturation. In such cases it may be appropriate to fall back on a step-by-step reconstruction. For example, an excessive-dynamic condition can be recognised by comparison with a threshold Th1, generally different from Th.

Saturation can be recognised if min(Ci) < 0 or if max(Ci) exceeds the maximum permissible value (65535 in the case of a luminance expressed as an integer coded on 16 bits).

Of course, the configuration of Figures 5A and 5B, which involves reconstructing the colour components over segments of 5 pixels, is given solely by way of example, and generalising the method to other configurations poses no significant difficulty.
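As an illustration, the per-segment reconstruction described above can be sketched as follows, assuming Python with NumPy. The threshold value, the fallback to plain linear interpolation in the uniform or out-of-range cases, and the function name are assumptions for the sketch, not details taken from the specification.

```python
import numpy as np

def monochrome_law_segment(c1, c5, m, th=8.0, max_val=65535):
    """Reconstruct the missing colour luminances C2..C4 of a 5-pixel
    segment from the known end pixels C1, C5 and the co-located
    panchromatic luminances M1..M5 (sequence m of length 5).

    Returns an array [C1..C5]. Illustrative sketch only."""
    m = np.asarray(m, dtype=float)
    if abs(m[4] - m[0]) > th:
        # Non-uniform zone: apply the monochrome law Ci = slope*Mi + p,
        # chosen so the computed C1 and C5 match the measured values.
        slope = (c5 - c1) / (m[4] - m[0])
        p = c1 - slope * m[0]
        c = slope * m + p
        # Guard against saturation / excessive dynamic range.
        if c.min() >= 0 and c.max() <= max_val:
            return np.rint(c)
    # Uniform zone (or out-of-range result): plain linear interpolation
    # between the two known coloured pixels.
    return np.rint(np.linspace(c1, c5, 5))
```

The same routine is applied first along the rows containing coloured pixels, then along the columns, as described above.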

Variations of this method are possible. For example, a different approach for determining the luminance of the coloured pixels, using both the nearby coloured pixels (located at a distance that depends on the pattern of the coloured pixels in question) and the neighbouring reconstructed panchromatic pixels, is to use non-linear functions approximating the distribution of the coloured pixels, for example a polynomial approximation (and more generally an approximation by a non-linear spatial surface function) of the neighbouring panchromatic pixels. The advantage of these non-linear functions, whether along a single axis or over two axes, is that, unlike the monochrome law, they take the distribution of the coloured pixels into account on a larger scale than the coloured pixels nearest to the pixel one wants to reconstruct. In the same vein, one can also use more general value-diffusion functions that exploit the local gradients and sudden jumps appearing in the luminance values of the panchromatic pixels. Whatever the method used, the principle remains the same: use the panchromatic pixels, which are more numerous than the coloured pixels, and their law of variation to reconstruct the coloured pixels.

While the monochrome-law method implies a single-axis approach in two successive passes, the use of surface functions or diffusion equations allows the coloured pixels to be reconstructed in a single pass.

At this stage of the processing, a full-band panchromatic image IM_PB and two sets of three full-band monochromatic images, (IR_PB, IV_PB, IB_PB) and (IR*_PB, IV*_PB, IB*_PB), are available. As mentioned above, none of these images is directly usable. However, a visible-light colour image I_VIS can be obtained by combining the full-band images of the first set (IR_PB, IV_PB, IB_PB) and the full-band panchromatic image IM_PB through a 3x4 colorimetry matrix MCol1. More specifically, the red component IR of the visible image I_VIS is given by a linear combination of IR_PB, IV_PB, IB_PB and IM_PB with coefficients a11, a12, a13 and a14. Similarly, the green component IV is given by a linear combination of IR_PB, IV_PB, IB_PB and IM_PB with coefficients a21, a22, a23 and a24, and the blue component IB by a linear combination of IR_PB, IV_PB, IB_PB and IM_PB with coefficients a31, a32, a33 and a34. This is illustrated by Figure 6A.
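This plane-by-plane linear combination can be sketched as follows, assuming Python with NumPy; the matrix contents in the usage example are hypothetical placeholders, not the patent's calibrated coefficients.

```python
import numpy as np

def apply_colorimetry(planes, matrix):
    """Combine full-band image planes through a colorimetry matrix.

    planes: array of shape (K, H, W), e.g. the stack
            (IR_PB, IV_PB, IB_PB, IM_PB), so K = 4.
    matrix: (P, K) colorimetry matrix: 3x4 (MCol1) for the colour
            image, 1x4 (MCol2) for the near-infrared image.
    Returns an array of shape (P, H, W): each output plane is the
    linear combination of the input planes with one row of coefficients.
    """
    return np.einsum('pk,khw->phw', matrix, planes)
```

The same helper covers MCol1, MCol2 and MCol3: only the number of rows of the matrix changes.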

The visible image I_VIS can then be improved by a conventional white-balance operation, to take account of the difference between the illumination of the scene and the illumination used to determine the coefficients of the colorimetry matrix.

Similarly, a near-infrared image I_PIR can be obtained by combining the full-band images of the second set (IR*_PB, IV*_PB, IB*_PB) and the full-band panchromatic image IM_PB through a second, 1x4 colorimetry matrix MCol2. In other words, the near-infrared image I_PIR is given by a linear combination of IR*_PB, IV*_PB, IB*_PB and IM_PB with coefficients a41, a42, a43 and a44. This is illustrated by Figure 6B.

If there are several types of pixels having different spectral sensitivities in the near infrared, it is possible to obtain several different near-infrared images, corresponding to N_PIR different spectral sub-bands (with N_PIR > 1). In this case, the second colorimetry matrix MCol2 becomes a matrix of dimensions N_PIR x (N_PIR + 3), N_PIR being the number of near-infrared images desired. The case discussed above is the special case where N_PIR = 1.

The near-infrared image I_PIR can then be improved by a conventional spatial-filtering operation. This can be, for example, an edge-enhancement operation, with or without adaptive noise filtering (among the possible edge-enhancement techniques is convolution of the image with a high-pass filter).
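A minimal sketch of such an edge enhancement, assuming Python with NumPy: the specification only mentions convolution with a high-pass filter, so the Laplacian kernel, the edge padding and the gain below are assumptions chosen for illustration.

```python
import numpy as np

def enhance_edges(img, gain=1.0):
    """Edge enhancement by adding a high-pass (Laplacian) component,
    computed by direct 3x3 convolution with edge-replicated borders."""
    k = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=float)  # high-pass kernel (sums to 0)
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode='edge')
    hp = np.zeros((h, w), dtype=float)
    for i in range(3):           # accumulate the convolution term by term
        for j in range(3):
            hp += k[i, j] * pad[i:i + h, j:j + w]
    return img + gain * hp       # original image plus amplified contours
```

On a uniform area the high-pass component is zero, so flat regions are left untouched while contours are raised.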

In the particular case where N_PIR = 1 and where there are three types of colour pixels, the matrices MCol1 and MCol2 are in fact sub-matrices of a same colorimetry matrix "A" of dimensions 4x4, which is not used as such.

The size of the colorimetry matrices must be changed if the matrix sensor has more than three different types of coloured pixels. For example, just as there can be N_PIR pixel types with different spectral sensitivities in the infrared, there can be N_VIS (with N_VIS ≥ 3) pixel types sensitive to different sub-bands in the visible, plus unfiltered panchromatic pixels. These N_VIS pixel types can also have different spectral sensitivities in the near infrared, allowing the acquisition of N_PIR near-infrared images. Assuming there are no pixels sensitive only in the infrared, the colorimetry matrix MCol1 is then of dimension 3 x (N_VIS + 1). In addition, a low-light visible monochromatic image I_BNL can be obtained by combining the full-band images of the second set (IR*_PB, IV*_PB, IB*_PB) and the full-band panchromatic image IM_PB through a third, 1x4 colorimetry matrix MCol3. In other words, the image I_BNL is given by a linear combination of IR*_PB, IV*_PB, IB*_PB and IM_PB with coefficients â41, â42, â43 and â44, which form the last row of another 4x4 colorimetry matrix "Â", which is not used as such. This is illustrated by Figure 6C.

The image I_BNL can in turn be improved by a conventional spatial-filtering operation.

The colorimetry matrices A and Â may be obtained by a calibration method. This consists, for example, in using a test chart on which various paints reflective both in the visible and in the NIR have been deposited, illuminating the device with a controlled illumination, and comparing the theoretical luminance values that these paints should have in the visible and in the NIR with the measured ones; the coefficients of the 4x4 colorimetry matrix are then fitted by a least-squares method. The colorimetry matrix can also be improved by giving a special weighting to the colours one wishes to emphasise, or by adding measurements of natural objects in the scene. The proposed method (NIR operation in addition to colour, use of a 4x4 matrix, use of different paints reflective both in the visible and in the NIR) differs from conventional methods confined to colour, which exploit a 3x3 matrix and a conventional chart such as the "X-Rite checker" or the Macbeth chart.
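The least-squares fit of the 4x4 matrix can be sketched as follows, assuming Python with NumPy. The arrangement of the data (one row per calibration paint) is an assumption; the specification does not give the exact formulation of the fit.

```python
import numpy as np

def calibrate_colorimetry(measured, theoretical):
    """Fit a 4x4 colorimetry matrix A by least squares.

    measured:    (N, 4) luminances of the N calibration paints as seen
                 through the four reconstructed planes
                 (IR_PB, IV_PB, IB_PB, IM_PB).
    theoretical: (N, 4) expected (R, G, B, NIR) values of the paints.
    Returns A of shape (4, 4) minimising ||measured @ A.T - theoretical||.
    """
    # lstsq solves measured @ X = theoretical for X = A.T
    A_t, *_ = np.linalg.lstsq(measured, theoretical, rcond=None)
    return A_t.T
```

With more paints than unknowns (N > 4) the system is overdetermined and the fit averages out measurement noise, which is the point of the calibration chart.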

Figures 7A and 7B illustrate a possible improvement of the invention, implementing a micro-scanning of the sensor. Micro-scanning consists of a periodic motion (oscillation) of the matrix sensor in the image plane, or of the image relative to the sensor. This displacement may be achieved by a piezoelectric or DC-motor type actuator acting on the matrix sensor or on at least one optical imaging element. Several (usually two) image acquisitions are performed during this movement, which improves the spatial sampling and thus facilitates the reconstruction of the images. Of course, this requires a higher acquisition rate than if micro-scanning were not used.

In the example of Figure 7A, the matrix sensor CM has a particular structure: every other column consists of panchromatic pixels PM, one column in four of alternating green pixels PV and blue pixels PB, and one column in four of alternating red pixels PR and green pixels PV. The micro-scan is effected by means of an oscillation of amplitude equal to the width of one pixel, in a direction perpendicular to that of the columns. It can be seen that the space occupied by a "panchromatic" column when the sensor is at one end of its travel is occupied by a "coloured" column when it is at the opposite end, and vice versa. The image acquisition rate is higher than that used without micro-scanning (for example, twice the frequency used without micro-scanning), so that information can be added to the images that would be acquired without micro-scanning.

Taking the example of a doubled acquisition frequency, from two images acquired at the two extreme opposite positions of the sensor (left-hand side of Figure 7A) it is possible to reconstruct (right-hand part of the figure):

a colour image formed by the repetition of a four-pixel pattern, two green pixels arranged on a diagonal, one blue and one red (the so-called "Bayer" pattern), the reconstructed pixels having a shape elongated in the direction of displacement with an aspect ratio of 2; and

a "full" panchromatic image, usable directly without any need for interpolation;

these two reconstructed images being obtained at a rate two times lower than the acquisition frequency.

Indeed, the micro-scanning completes the information of the panchromatic and coloured pixels, and the treatments presented in the context of the present invention can be applied directly to the patterns produced by the detector before micro-scanning and to the additional patterns obtained after micro-scanning; it is therefore not necessary to use a specific pattern such as that of Figure 7A, which does however allow simplified algorithms to be applied. Furthermore, the micro-scanning may be performed along two axes and/or have an amplitude greater than the width of one pixel.
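Merging the two acquisitions of a Figure 7A-style scan can be sketched as follows, assuming Python with NumPy. The column layout (even columns panchromatic in the first frame, roles swapped in the second, both frames already registered in scene coordinates) is an illustrative assumption, not the exact sensor of the patent.

```python
import numpy as np

def merge_microscan(frame_a, frame_b):
    """Merge two acquisitions taken one pixel-width apart.

    frame_a, frame_b: (H, W) raw frames registered in scene
    coordinates. In frame_a the even columns carry panchromatic
    samples and the odd columns colour samples; the one-pixel shift
    swaps these roles in frame_b.
    Returns (panchromatic, colour): two full (H, W) images.
    """
    h, w = frame_a.shape
    even = np.arange(w) % 2 == 0           # column parity mask
    pan = np.where(even, frame_a, frame_b)  # panchromatic everywhere
    col = np.where(even, frame_b, frame_a)  # colour mosaic everywhere
    return pan, col
```

The resulting panchromatic image needs no interpolation, and the colour mosaic can be fed to the reconstruction treatments described above.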

For example, Figure 7B illustrates the application of micro-scanning to the sensor CM of Figure 3A. In this sparse configuration, of type 1/8, the micro-scanning involves the recovery of all the panchromatic pixels, adding two blue sub-patterns, two red sub-patterns and four green sub-patterns superimposed on locations that were previously panchromatic, which in total makes it possible to apply the treatments to twice as many sub-patterns as initially. In more complex patterns (1/N with N ≥ 16), one can generalise to a micro-scanning over M positions along the two dimensions of the matrix simultaneously, and gain a factor M on the number of sub-patterns.

So far, only the case of a matrix sensor comprising exactly four types of pixels, red, green, blue and panchromatic, has been considered, but this is not an essential limitation. It is possible to use three or more types of coloured pixels having sensitivity curves different from those shown in Figure 1B. In addition, it is possible to use a fifth pixel type, sensitive only to radiation in the near infrared. For example, Figure 8 shows a matrix sensor in which one pixel in four is panchromatic (reference PM), one pixel in four is sensitive only to the near infrared (PI), one pixel in four is green (PV), one pixel in eight is red (PR) and one pixel in eight is blue (PB).

The signals from the pixels of the fifth type may be used in different ways. For example, it is possible to reconstruct, by the "intra-channel" method of Figures 4A to 4D, a near-infrared image, designated for example I_PIR^D (the superscript "D" meaning that it is an image acquired "directly"), which can be averaged with the image I_PIR obtained by applying the colorimetry matrix MCol2 to the images IR*_PB, IV*_PB, IB*_PB and IM_PB. It is also possible to use a MCol2 colorimetry matrix of dimensions 1x5 and to obtain the near-infrared image I_PIR as a linear combination of IR*_PB, IV*_PB, IB*_PB, IM_PB and I_PIR^D with matrix coefficients a41, a42, a43, a44 and a45. The image I_PIR^D can also be used to calibrate the colorimetry matrix MCol1, which then contains one more column and becomes of size 3 x (N_VIS + 2): each reproduced red, green and blue component is then expressed as a function of the N_VIS reconstructed full-band planes, the reconstructed panchromatic plane and the NIR plane reconstructed from the pixels PI.

Figure 9A is a composite image of a scene observed in visible light. The left-hand part of the image, I_VIS, was obtained according to the invention, using the "sparse" matrix sensor of Figure 3. The right-hand part was obtained by a conventional method, using a non-sparse sensor. The quality of the two images is comparable.

Figure 9B is a composite image of the same scene, observed in the near infrared. The left-hand part of the image, I_PIR, was obtained according to the invention, using the "sparse" matrix sensor of Figure 3. The right-hand part was obtained using a NIR camera with a non-sparse sensor. The images are of comparable quality, but the image I_PIR obtained according to the invention is brighter thanks to the use of a sparse sensor.

WE CLAIM

1. An image acquisition system (IAS) comprising:

a matrix sensor (CM) comprising a two-dimensional arrangement of pixels, each pixel being adapted to generate an electrical signal representative of the light intensity at a point on an optical image (IO) of a scene (SC); and

a signal processing circuit (CTS) configured to process the electrical signals generated by said pixels in order to generate digital images (I_VIS, I_PIR) of said scene;

wherein said matrix sensor comprises a two-dimensional arrangement:

- of pixels, said to be coloured, of at least a first type (PV), sensitive to visible light in a first spectral range; a second type (PB), sensitive to visible light in a second spectral range different from the first; and a third type (PR), sensitive to visible light in a third spectral range different from the first and the second, a combination of the spectral ranges of the different types of coloured pixels covering the entire visible spectrum;

- and of pixels, said to be panchromatic (PM), sensitive to the entire visible spectrum,

at least the panchromatic pixels also being sensitive to the near infrared;

characterized in that said signal processing circuit is configured to:

reconstruct a first set of monochromatic images (IV_PB, IB_PB, IR_PB) from the electrical signals generated by the coloured pixels;

reconstruct a panchromatic image (IM_PB) from the electrical signals generated by the panchromatic pixels;

reconstruct a second set of monochromatic images (IV*_PB, IB*_PB, IR*_PB) from the electrical signals generated by the coloured pixels and from said panchromatic image;

reconstruct a colour image (I_VIS) by applying a first colorimetry matrix (MCol1) to the monochromatic images of the first set and to said panchromatic image;

reconstruct at least one near-infrared image (I_PIR) by applying a second colorimetry matrix (MCol2) to at least the monochromatic images of the second set and to said panchromatic image; and

output said colour image and said at least one near-infrared image.

2. An image acquisition system according to claim 1, wherein said coloured pixels comprise only pixels of said first, second and third types, which are also sensitive to the near infrared.

3. An image acquisition system according to claim 1 or 2, wherein the pixels of one of the first, second and third types are sensitive to green light, those of another of said types are sensitive to blue light, and those of the remaining type are sensitive to red light.

4. An image acquisition system according to one of the preceding claims, wherein said matrix sensor is of the sparse type, more than a quarter, and preferably at least half, of its pixels being panchromatic.

5. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is configured to reconstruct the monochromatic images of said first set by applying a method comprising the steps of:

- determining the light intensity associated with each pixel of said first type and reconstructing a first monochromatic image (IV_PB) of said first set by interpolating said light intensities;

- determining the light intensity associated with each coloured pixel of the other types, and subtracting from it a value representative of the intensity associated with a corresponding pixel of said first monochromatic image;

- reconstructing new monochromatic images by interpolating the intensity values of the respective coloured pixels of said other types, from which said values representative of the intensity associated with a corresponding pixel of said first monochromatic image were subtracted, and then combining these new reconstructed images with said first monochromatic image (IV_PB) to obtain the respective final monochromatic images (IB_PB, IR_PB) of said first set.

6. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is configured to reconstruct said panchromatic image by interpolation of the electrical signals generated by the panchromatic pixels.

7. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is configured to reconstruct the monochromatic images of said second set by calculating the luminance level of each pixel of each said image by applying a linear function, defined locally, to the luminance of the corresponding pixel of the panchromatic image.

8. An image acquisition system according to one of claims 1 to 6, wherein said signal processing circuit is configured to reconstruct the monochromatic images of said second set by calculating the luminance level of each pixel of each said image by means of a non-linear function of the luminance levels of a plurality of pixels of the panchromatic image in a neighbourhood of the panchromatic-image pixel corresponding to said pixel of said image of the second set, and/or of the light intensities of a plurality of coloured pixels.

9. An image acquisition system according to one of the preceding claims, wherein said matrix sensor consists of a periodic repetition of blocks containing pseudo-random distributions of pixels of the different types, and wherein said signal processing circuit is configured to:

extract regular patterns of pixels of the same type from said matrix sensor; and

reconstruct said first and second sets of monochromatic images by processing said regular patterns of same-type pixels in parallel.

10. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is further configured to reconstruct a low-light-level monochromatic image by applying a third colorimetry matrix (MCol3) to at least the monochromatic images of the second set and to said panchromatic image.

11. An image acquisition system according to one of the preceding claims, wherein said matrix sensor (CM) further comprises a two-dimensional arrangement of pixels (PI) sensitive only to the near infrared, and wherein said signal processing circuit is configured to reconstruct said near-infrared image (I_PIR) also from the electrical signals generated by these pixels.

12. An image acquisition system according to one of the preceding claims, further comprising an actuator for generating a periodic relative movement between the matrix sensor and the optical image, wherein the signal processing circuit is adapted to reconstruct said first and second sets of monochromatic images and said panchromatic image from electrical signals generated by the pixels of the matrix sensor at a plurality of discrete relative positions of the matrix sensor and of the optical image.

13. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is formed from a programmable logic circuit.

14. A visible - near-infrared bi-spectral camera (CBS) comprising:

an image acquisition system (IAS) according to any preceding claim; and

an optical system (SO) adapted to form an optical image (IO) of a scene (SC) on the matrix sensor (CM) of such an image acquisition system, without filtering out the near infrared.

15. A method for the simultaneous acquisition of colour images and near-infrared images using a bi-spectral camera according to claim 14.

Documents

Application Documents

# Name Date
1 201817020623-IntimationOfGrant19-09-2023.pdf 2023-09-19
1 201817020623-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [01-06-2018(online)].pdf 2018-06-01
2 201817020623-PatentCertificate19-09-2023.pdf 2023-09-19
2 201817020623-STATEMENT OF UNDERTAKING (FORM 3) [01-06-2018(online)].pdf 2018-06-01
3 201817020623-PRIORITY DOCUMENTS [01-06-2018(online)].pdf 2018-06-01
3 201817020623-FORM 3 [01-03-2023(online)].pdf 2023-03-01
4 201817020623-FORM 3 [20-05-2022(online)].pdf 2022-05-20
4 201817020623-FORM 1 [01-06-2018(online)].pdf 2018-06-01
5 201817020623-FER.pdf 2021-10-18
5 201817020623-DRAWINGS [01-06-2018(online)].pdf 2018-06-01
6 201817020623-DECLARATION OF INVENTORSHIP (FORM 5) [01-06-2018(online)].pdf 2018-06-01
6 201817020623-ABSTRACT [27-08-2021(online)].pdf 2021-08-27
7 201817020623-COMPLETE SPECIFICATION [01-06-2018(online)].pdf 2018-06-01
7 201817020623-CLAIMS [27-08-2021(online)].pdf 2021-08-27
8 abstract.jpg 2018-07-16
8 201817020623-DRAWING [27-08-2021(online)].pdf 2021-08-27
9 201817020623-FER_SER_REPLY [27-08-2021(online)].pdf 2021-08-27
9 201817020623.pdf 2018-08-01
10 201817020623-OTHERS [27-08-2021(online)].pdf 2021-08-27
10 201817020623-Verified English translation (MANDATORY) [27-08-2018(online)].pdf 2018-08-27
11 201817020623-FORM 3 [13-08-2021(online)].pdf 2021-08-13
11 201817020623-Proof of Right (MANDATORY) [27-08-2018(online)].pdf 2018-08-27
12 201817020623-FORM-26 [27-08-2018(online)].pdf 2018-08-27
12 201817020623-Information under section 8(2) [13-08-2021(online)].pdf 2021-08-13
13 201817020623-FORM 3 [05-10-2020(online)].pdf 2020-10-05
13 201817020623-FORM 3 [27-08-2018(online)].pdf 2018-08-27
14 201817020623-Certified Copy of Priority Document (MANDATORY) [27-08-2018(online)].pdf 2018-08-27
14 201817020623-FORM 3 [16-01-2020(online)].pdf 2020-01-16
15 201817020623-FORM 18 [04-12-2019(online)].pdf 2019-12-04
15 201817020623-Power of Attorney-300818.pdf 2018-08-31
16 201817020623-FORM 3 [01-05-2019(online)].pdf 2019-05-01
16 201817020623-OTHERS-300818.pdf 2018-08-31
17 201817020623-OTHERS-300818-.pdf 2018-08-31
17 201817020623-Correspondence-300818.pdf 2018-08-31

Search Strategy

1 _SearchStrategy-201817020623E_10-02-2021.pdf

ERegister / Renewals

3rd: 23 Nov 2023

From 07/12/2018 - To 07/12/2019

4th: 23 Nov 2023

From 07/12/2019 - To 07/12/2020

5th: 23 Nov 2023

From 07/12/2020 - To 07/12/2021

6th: 23 Nov 2023

From 07/12/2021 - To 07/12/2022

7th: 23 Nov 2023

From 07/12/2022 - To 07/12/2023

8th: 23 Nov 2023

From 07/12/2023 - To 07/12/2024

9th: 21 Nov 2024

From 07/12/2024 - To 07/12/2025

10th: 19 Nov 2025

From 07/12/2025 - To 07/12/2026