Method For Assisting In The Detection Of Elements, And Associated Device And Platform

Abstract: The present invention relates to a method for assisting in the detection of elements in an environment, the method comprising, at each point in time, the steps of: - acquiring a first image of the environment having a first resolution and a first field of view, - detecting elements captured in the first image, - classifying each detected element to obtain an initial classification of the element, - acquiring at least a second image of at least one of the detected elements having a second resolution and a second field of view, the second resolution being greater than the first resolution, the second field of view being more restricted than the first field of view, and - re-assessing, for each detected element captured in the second image, the classification of the element according to the second image to obtain a re-assessed classification of the element.

Patent Information

Application #
202217010243
Filing Date
25 February 2022
Publication Number
16/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

THALES
Tour Carpe Diem Place des Corolles Esplanade Nord 92400 COURBEVOIE

Inventors

1. LE MEUR, Alain
Thales LAS France SAS - Elancourt 2, avenue Gay-Lussac 78990 ELANCOURT
2. BECHE, Arnaud
Thales LAS France SAS - Elancourt 2, avenue Gay-Lussac 78990 ELANCOURT
3. HENAFF, Gilles
Thales LAS France SAS - Elancourt 2, avenue Gay-Lussac 78990 ELANCOURT
4. SEGUINEAU DE PREVAL, Benoît
Thales LAS France SAS - Elancourt 2, avenue Gay-Lussac 78990 ELANCOURT

Specification

CLAIMS

1. Method for assisting in the detection of fixed and mobile elements (E) in an environment, the method comprising at each instant the steps of:

- acquisition of a first image (IM1), the first image (IM1) being a panoramic image of the environment, the first image (IM1) having a first resolution and imaging the environment according to a first field of view,

- detection, where applicable, of fixed and mobile elements (E) imaged on the first image (IM1) by a classifier trained according to a database of images of elements (E) and by a motion detector using first images acquired at previous instants,

- classification, by the classifier, of each detected element (E) to obtain an initial classification of the element (E),

- acquisition of at least one second image (IM2), the second image (IM2) imaging at least one of the detected elements (E), the second image (IM2) having a second resolution and imaging the environment according to a second field of view, the second resolution being greater than the first resolution, the second field of view being more restricted than the first field of view, and

- re-evaluation, by the classifier, for each detected element (E) imaged on the second image (IM2), of the classification of the element (E) according to the second image (IM2) to obtain a re-evaluated classification of the element (E).

2. Method according to claim 1, in which each classification is associated with a probability representative of a level of confidence in the classification.

3. Method according to claim 2, in which, for each detected element (E), when the reassessed classification is different from the initial classification and the classification probability associated with the reassessed classification is greater than or equal to a predetermined threshold, the method comprises a step of recording the image of the detected element (E) taken from the first image (IM1) and/or the second image (IM2), as well as the reassessed classification of the element (E), for a subsequent update of the database by adding the image(s) of the detected element (E) and training the classifier with the updated database.

4. Method according to claim 2 or 3, in which, for each detected element (E), when the classification probability associated with the last classification of the element (E) is strictly lower than a predetermined threshold, the method comprises a verification step, by an operator or by an additional classification tool, of the last classification of the element (E), the verification step comprising, if necessary, the correction of the last classification of the element (E).

5. Method according to claim 4, in which, for each detected element (E), the method further comprises a step of updating the database by adding the image of the detected element (E) taken from the first image (IM1) and/or the second image (IM2), as well as the verified classification of the element (E), and advantageously training the classifier with the updated database.

6. Method according to claim 5, in which each image comprises a signature, the updating step comprising verifying the signature of the image and disregarding the image when the signature of the image does not conform to a predetermined signature.

7. Method according to any one of claims 1 to 6, in which each image comprises pixels, the method comprising a step of displaying at least one image from among the first image (IM1) and the second image(s) (IM2), the pixels detected by the classifier as corresponding to the detected element(s) (E) on the displayed image being highlighted on the displayed image.

8. Method according to any one of claims 1 to 7, in which the detected elements (E) are chosen from the list consisting of: a human, an animal, a weapon system, a land vehicle, a maritime vehicle and an aerial vehicle.

9. Device (12) for assisting in the detection of fixed and mobile elements (E) in an environment, the device (12) comprising:

- an image acquisition system (14) configured to implement the steps of acquisition of a first image (IM1) and acquisition of at least a second image (IM2) of the method according to any one of claims 1 to 8, and

- a computer (16) interacting with a classifier trained according to a database of images of elements (E) and a motion detector, the computer (16) being configured to implement the detection, classification and re-evaluation steps of the method according to any one of claims 1 to 8.

10. Platform (10), in particular a mobile platform such as a vehicle, comprising a device (12) according to claim 9.

DESCRIPTION

TITLE: Method for assisting in the detection of elements, and associated device and platform

The present invention relates to a method for assisting in the detection of fixed and mobile elements in an environment. The present invention also relates to an associated detection aid device, as well as a platform comprising such a device.

In the military field, combat vehicle crews are exposed to many threats. Such threats come in particular from dismounted combatants, land or air vehicles and land or air drones.

In order to identify such threats, certain reconnaissance missions consist of getting as close as possible to the enemy in order to detect its disposition, in particular the type of combat equipment used, and to determine precisely the strength of the enemy and the units involved. The challenge is to see without being seen, and to transmit as much tactical information as possible to the command post.

Nevertheless, the threats are more or less visible depending on the level of camouflage that the environment, which can be urban, rural, mountainous or forested, can provide.

In addition, armored vehicles offer the crew a field of view that can be very limited.

In addition, the workload, as well as the level of fatigue of the personnel, is likely to lead to a loss of vigilance with respect to the environment outside the vehicle.

All of this contributes to the crews being exposed to threats that they have not necessarily seen or anticipated.

There is therefore a need for a detection aid method which allows better detection of the elements of an environment, and in particular of threats in a military context.

To this end, the subject of the invention is a method for assisting in the detection of fixed and mobile elements in an environment, the method comprising at each instant the steps of:

- acquisition of a first image, the first image being a panoramic image of the environment, the first image having a first resolution and imaging the environment according to a first field of view,

- detection, where appropriate, of fixed and mobile elements imaged on the first image by a classifier trained according to a database of element images and by a motion detector using first images acquired at previous instants,

- classification, by the classifier, of each element detected to obtain an initial classification of the element,

- acquisition of at least a second image, the second image imaging at least one of the detected elements, the second image having a second resolution and imaging the environment according to a second field of view, the second resolution being greater than the first resolution, the second field of view being more restricted than the first field of view, and

- re-evaluation, by the classifier, for each detected element imaged on the or a second image, of the classification of the element according to the second image to obtain a re-evaluated classification of the element.
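By way of illustration, the per-instant loop formed by these steps can be sketched as follows; all names (acquire_panoramic, acquire_zoomed, the classifier and motion-detector objects) are hypothetical placeholders, not interfaces defined by the invention.

```python
# Hypothetical sketch of one iteration of the method; none of these
# names are defined by the patent.
from dataclasses import dataclass

@dataclass
class Detection:
    bbox: tuple        # (x, y, w, h) in first-image pixels
    label: str         # class assigned by the classifier
    confidence: float  # probability associated with the classification

def detection_step(classifier, motion_detector, acquire_panoramic, acquire_zoomed):
    im1 = acquire_panoramic()                  # first image IM1 (wide, low resolution)
    detections = classifier.detect(im1)        # fixed and mobile elements
    detections += motion_detector.detect(im1)  # mobile elements, from previous first images
    for det in detections:
        # Initial classification on the panoramic image.
        det.label, det.confidence = classifier.classify(im1, det.bbox)
        # Second image IM2: higher resolution, narrower field of view.
        im2 = acquire_zoomed(det.bbox)
        # Re-evaluated classification on the zoomed image.
        det.label, det.confidence = classifier.classify(im2, None)
    return detections
```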

According to other advantageous aspects of the invention, the detection aid method comprises one or more of the following characteristics, taken in isolation or in all technically possible combinations:

- each classification is associated with a probability representative of a level of confidence in the classification,

- for each detected element, when the reassessed classification is different from the initial classification and the classification probability associated with the reassessed classification is greater than or equal to a predetermined threshold, the method comprises a step of recording the image of the detected element taken from the first image and/or the second image, as well as the re-evaluated classification of the element, for a subsequent update of the database by adding the image(s) of the detected element and training the classifier with the updated database,

- for each element detected, when the probability of classification associated with the last classification of the element is strictly lower than a predetermined threshold, the method comprises a verification step, by an operator or by an additional classification tool, of the last classification of the element, the verification step including, if necessary, the correction of the last classification of the element,

- for each detected element, the method further comprises a step of updating the database by adding the image of the detected element from the first image and/or the second image, as well as the verified classification of the element, and advantageously training the classifier with the updated database,

- each image includes a signature, the updating step comprising verifying the signature of the image and disregarding the image when the signature of the image does not conform to a predetermined signature,

- each image comprises pixels, the method comprising a step of displaying at least one image from among the first image and the second image(s), the pixels detected by the classifier as corresponding to the element(s) detected on the displayed image being highlighted on the displayed image,

- the elements detected are chosen from the list consisting of: a human, an animal, a weapon system, a land vehicle, a sea vehicle and an air vehicle.
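A minimal sketch of the recording and verification rules listed above, assuming an illustrative threshold value and hypothetical record_for_training and ask_operator callables:

```python
# Illustrative decision rules; THRESHOLD, record_for_training and
# ask_operator are assumptions, and `det` reuses the Detection fields
# from the earlier sketch.

THRESHOLD = 0.8  # "predetermined threshold" (illustrative value)

def apply_rules(det, initial_label, crops, record_for_training, ask_operator):
    # Re-evaluated class differs from the initial one with sufficient
    # confidence: record the element's image(s) and classification for
    # a later database update and classifier retraining.
    if det.label != initial_label and det.confidence >= THRESHOLD:
        record_for_training(crops, det.label)
    # Last classification below the threshold: have an operator (or an
    # additional classification tool) verify and, if needed, correct it.
    elif det.confidence < THRESHOLD:
        det.label = ask_operator(crops, suggested=det.label)
        record_for_training(crops, det.label)
```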

The invention also relates to a device for assisting in the detection of fixed and mobile elements in an environment, the device comprising:

- an image acquisition system configured to implement the steps of acquisition of a first image and acquisition of at least a second image of the method as described above, and

- a computer interacting with a classifier trained according to a database of images of elements and a motion detector, the computer being configured to implement the detection, classification and re-evaluation steps of the method as described above.

The invention also relates to a platform, in particular a mobile platform such as a vehicle, comprising a device as described above.

The invention also relates to a method for assisting in the detection of elements in an environment, the method comprising the steps of:

- simultaneous acquisition of a first image and a second image imaging the same portion of the environment, the first image being an image in a first spectral band, the second image being an image in a second spectral band, the second spectral band being different from the first spectral band,

- detection, if necessary, of elements imaged in the first image by a first classifier trained according to a first database of images of elements in the first spectral band,

- detection, if necessary, of elements imaged in the second image by a second classifier trained according to a second database of images of elements in the second spectral band,

- classification, for each image, of the elements detected by the corresponding classifier,

- comparison of the classification of the detected elements obtained for the first image and for the second image, and

- when the classification of at least one of the detected elements is different for the first image and the second image or when an element has been detected only for one of the two images, storing the first and the second image and corresponding classifications for subsequent updating of at least one of the databases, and subsequent training of the corresponding classifier with the updated database.
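A hedged sketch of the comparison and storage steps; the two classifier objects, the box-matching rule and the store callable are assumptions (the text does not specify how detections are associated across bands):

```python
# Hypothetical comparison step; classifier_a / classifier_b / store are
# placeholders, and detections are matched by identical bounding box,
# which assumes the two images are registered pixel-for-pixel.

def compare_bands(im_band1, im_band2, classifier_a, classifier_b, store):
    dets1 = {d.bbox: d for d in classifier_a.detect_and_classify(im_band1)}
    dets2 = {d.bbox: d for d in classifier_b.detect_and_classify(im_band2)}
    for bbox in set(dets1) | set(dets2):
        d1, d2 = dets1.get(bbox), dets2.get(bbox)
        # Disagreement between bands, or detection in only one band:
        # store both images and the classifications for later updating.
        if d1 is None or d2 is None or d1.label != d2.label:
            store(im_band1, im_band2, d1, d2)
```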

According to other advantageous aspects of the invention, the detection aid method comprises one or more of the following characteristics, taken in isolation or in all technically possible combinations:

- one of the first spectral band and of the second spectral band is between 380 nanometers and 780 nanometers and the other of the first spectral band and of the second spectral band is between 780 nanometers and 3 micrometers or between 3 micrometers and 5 micrometers or between 8 micrometers and 12 micrometers,

- the method comprises a step of updating at least one of the databases according to the image or images and the corresponding classifications stored,

- each classification is associated with a probability representative of a level of confidence in the classification; for each detected element, when the probability associated with the classification obtained for the first image is greater than or equal to a predetermined threshold and the probability associated with the classification obtained for the second image is strictly below the predetermined threshold, the updating step comprises updating the second database by adding the image of the detected element taken from the second image, as well as the classification obtained for the element imaged on the first image, and training the second classifier with the second updated database,

- each classification is associated with a probability representative of a level of confidence in the classification; for each detected element, when the probability associated with the classification obtained for each of the first and second images is less than a predetermined threshold, the updating step comprises the verification, by an operator or by an additional classification tool, of the classification(s) of the detected element and, if necessary, the correction of the classification(s), the updating step comprising updating at least one database by adding the image of the detected element taken from the image acquired in the spectral band of that database, as well as the verified classification of the element, and training the corresponding classifier with the updated database,

- the first and the second image are panoramic images of the environment having the same first resolution and the same first field of view, the method further comprising the steps of:

- acquisition of at least a third image for at least one of the elements detected on the first or the second image, the third image being an image in the first or the second spectral band, the third image having a second resolution and a second field of view, the second resolution being greater than the first resolution, the second field of view being more restricted than the first field of view,

- re-evaluation, by the corresponding classifier, for each detected element imaged on the or a third image, of the classification of the element according to the third image to obtain a re-evaluated classification of the element,

each classification being associated with a probability representative of a level of confidence in the classification,

for each detected element, when the probability associated with the reassessed classification is greater than or equal to a predetermined threshold and the probability associated with the classification obtained from the first and/or the second image is strictly less than the predetermined threshold, the storing step comprises storing said first and/or second image, the third image and the corresponding classifications for later updating of the database in the same spectral band as the third image, and training the corresponding classifier with the updated database.

- each image comprises a signature, the updating step comprising the verification of the signature of each acquired image and the disregarding of the image when the signature of the image does not conform to a predetermined signature,

- the elements detected are chosen from the list consisting of: a human, an animal, a weapon system, a land vehicle, a sea vehicle and an air vehicle.
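The text leaves the signature scheme open; one plausible realisation is an HMAC over the image bytes, sketched below (the key and function names are illustrative):

```python
# The key, scheme and function names are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"acquisition-chain-key"  # hypothetical key shared with the sensors

def signature_is_valid(image_bytes: bytes, signature: bytes) -> bool:
    expected = hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def update_database(database, image_bytes, signature, classification):
    # Disregard the image when its signature does not conform.
    if signature_is_valid(image_bytes, signature):
        database.add(image_bytes, classification)
```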

The invention also relates to a device for assisting in the detection of elements in an environment, the device comprising:

- an image acquisition system configured to implement the acquisition step of the method as described above, and

- a computer interacting with a first classifier trained according to a first database of images of elements and with a second classifier trained according to a second database of images of elements, the computer being configured to implement the steps of detection, classification, comparison and storage of the method as described above.

The invention also relates to a platform, in particular a mobile platform such as a vehicle, comprising a device as described above.

Other characteristics and advantages of the invention will appear on reading the following description of embodiments of the invention, given by way of example only, and with reference to the drawings which are:

- [Fig 1] figure 1, a schematic representation of a platform comprising a device for assisting in the detection of elements,

- [Fig 2] figure 2, a flowchart of an example of implementation of a method for assisting in the detection of elements, and

- [Fig 3] figure 3, a flowchart of another example of implementation of a method for assisting in the detection of elements.

A platform 10 is represented in FIG. 1. In this example, the platform 10 is a land vehicle, in particular an all-terrain type vehicle. Such a vehicle is, for example, controlled by an operator inside the vehicle. As a variant, such a vehicle is, for example, remote-controlled from another vehicle.

Advantageously, the platform 10 is a vehicle of the military type, such as an assault tank. Such a military vehicle is in particular adapted to include a plurality of weapons and to protect the operator(s) installed inside the vehicle.

Alternatively, the platform 10 is any other mobile platform, such as an air vehicle (plane, helicopter, drone or satellite) or a maritime vehicle (naval vessel).

Still as a variant, the platform 10 is a fixed platform, such as a turret or control tower.

The platform 10 includes a device 12 for assisting in the detection of elements E in an environment. The device 12 is suitable for helping an operator to detect elements E in the environment.

Preferably, the elements E are chosen from the list consisting of: a human, an animal, a weapon system, a land vehicle, a sea vehicle and an air vehicle.

More precisely, for elements E of the human type, a distinction is made, for example, between an unarmed human, a human armed with a light weapon and a human armed with a heavy weapon.

For elements E of the land vehicle type, a distinction is made, for example, between an unarmed civilian vehicle (car, truck, motorcycle), an armed civilian vehicle (all-terrain vehicle with turret) and a military vehicle (tank, logistics truck, troop transport vehicle, reconnaissance vehicle), or even a military vehicle of a specific type (Leclerc tank, Challenger tank, T72 tank).

For elements E of the air vehicle type, a distinction is made, for example, between a flying element of the airplane type, a flying element of the helicopter type, a flying element of the drone type and a flying element of the armed drone type. In addition, a distinction is also made between a flying element of the bird (animal) type and an air vehicle.

For elements E of the maritime vehicle type, a distinction is made, for example, between an unarmed civilian ship, an armed civilian ship, a military ship of a specific type and a submarine.
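These distinctions can be summarised as a two-level taxonomy; the sketch below simply restates them as a Python mapping (the exact labels are illustrative):

```python
# Illustrative two-level taxonomy; labels are not prescribed by the text.
CLASSES = {
    "human": ["unarmed", "armed, light weapon", "armed, heavy weapon"],
    "land vehicle": ["civilian unarmed", "civilian armed", "military",
                     "military, specific type (Leclerc, Challenger, T72)"],
    "air vehicle": ["airplane", "helicopter", "drone", "armed drone"],
    "maritime vehicle": ["civilian unarmed", "civilian armed",
                         "military, specific type", "submarine"],
    "animal": ["bird"],
}
```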

The elements E to be detected are both fixed (for example: stationary vehicle) and mobile (for example: human or moving vehicle).

In a military context, an element E indicates the presence of a potential threat to the operators of the platform 10, which the device 12 makes it possible to classify.

In the example illustrated by figure 1, two elements E are represented: a first element E1 of the unarmed human type and a second element E2 of the human type armed with a light weapon. In this example, the environment is of the forest type.

The detection aid device 12 comprises an image acquisition system 14, a computer 16 and a display device 18.

The image acquisition system 14 is capable of capturing images of part of the environment of the platform 10.

The image acquisition system 14 is suitable for capturing a set of images at a low rate, so as to obtain a series of still images as with a photographic camera, or at a higher rate, so as to acquire enough images to form a video stream.

For example, the image acquisition system 14 is capable of supplying a video stream, for example in HD-SDI video format. HD-SDI (High Definition Serial Digital Interface) is a protocol for transporting or broadcasting different digital video formats. The HD-SDI protocol is defined by the ANSI/SMPTE 292M standard and is particularly suitable for real-time image processing.
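As an illustration, once the HD-SDI stream is exposed to the computer as a standard capture device (for example through a frame-grabber driver, an assumption not made by the text), frames can be pulled with OpenCV:

```python
# Assumes the stream appears as capture device 0; this is not specified
# by the text.
import cv2

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()   # one image of the video stream
    if not ok:
        break
    # ... hand `frame` to the detection pipeline ...
cap.release()
```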

As a variant, the image acquisition system 14 is capable of supplying a video stream in another standard, for example, a video stream in CoaxPress format or a video stream in compressed Ethernet format, for example in the H264 or H265 standard.

Advantageously, the image acquisition system 14 is suitable for taking color images for daytime vision and/or infrared images for night vision and/or images allowing camouflage to be detected by day and by night.

In a first embodiment, the image acquisition system 14 comprises at least two entities 14A and 14B illustrated in FIG. 1:

- A first entity 14A comprising at least one panoramic-type sensor capable of acquiring panoramic images of the environment. The images acquired by this sensor have a first resolution and image the environment according to a first field of view. In the example illustrated by Figure 1, the first entity 14A is fixed.

- A second entity 14B comprising at least one non-panoramic-type sensor capable of acquiring non-panoramic images of the environment. The images acquired by this sensor have a second resolution and image the environment according to a second field of view. The second resolution is higher than the first resolution. The second field of view is more restricted than the first field of view. Advantageously, the second entity 14B is orientable (for example in elevation and bearing) so as to adjust the orientation of the sensor. For example, as illustrated by FIG. 1, the second entity 14B is mounted on a member 19, such as a cupola, making it possible to orient the sensor. As a variant or in addition, the second entity 14B is a pan-tilt-zoom camera.

A sensor is said to be of the panoramic type when the sensor is able to provide images of the environment over 360°. The elevation is then, for example, between 75° and -15°. Such a panoramic sensor is, for example, formed by a single camera, such as a fisheye camera. Alternatively, such a panoramic sensor is formed by a set of cameras.

In the first embodiment, the panoramic type sensor and the non-panoramic type sensor are suitable for acquiring images of the environment in at least one spectral band, for example, the visible band. The visible band is a spectral band between 380 nanometers (nm) and 780 nm.

In a second embodiment, the acquisition system 14 comprises at least two sensors:

- a sensor capable of acquiring images of a portion of the environment in a first spectral band.

- a sensor capable of acquiring images of the same portion of the environment in a second spectral band, the second spectral band being different from the first spectral band.

For example, one of the first spectral band and the second spectral band is between 380 nm and 780 nm (visible) and the other of the first spectral band and the second spectral band is between 780 nm and 3 micrometers (μm) (near infrared) and/or between 3 μm and 5 μm (band II infrared) and/or between 8 μm and 12 μm (band III infrared). In the second embodiment, the two sensors are both of the same type (in order to acquire the same field of view), that is to say either panoramic or non-panoramic.
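For reference, the bands cited here can be written as a simple lookup table (wavelengths in micrometers):

```python
# Spectral bands referred to in the text; wavelengths in micrometers.
SPECTRAL_BANDS = {
    "visible":           (0.38, 0.78),
    "near infrared":     (0.78, 3.0),
    "band II infrared":  (3.0, 5.0),
    "band III infrared": (8.0, 12.0),
}
```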

For example, when the two sensors are both of the panoramic type, the two sensors are, for example, integrated into the same entity which is, for example, identical to the first entity 14A of the first embodiment. When the two sensors are both of the non-panoramic type, the two sensors are, for example, integrated into the same entity which is, for example, identical to the second entity 14B of the first embodiment.

In a third embodiment, the acquisition system 14 comprises the two sensors of the first embodiment, as well as at least one of the following components:

- an additional panoramic sensor able to acquire panoramic images of the same portion of the environment and according to the same field of vision as the panoramic sensor but in a different spectral band, and/or

- an additional non-panoramic sensor able to acquire non-panoramic images of the same portion of environment and according to the same field of view as the non-panoramic sensor but in a different spectral band.

One of the spectral bands is, for example, between 380 nm and 780 nm and the other of the spectral bands is, for example, between 780 nm and 3 μm and/or between 3 μm and 5 μm and/or between 8 μm and 12 μm.

For example, the additional panoramic sensor is integrated into the same first entity 14A as the panoramic sensor of the first embodiment. The additional non-panoramic sensor is, for example, integrated into the same second entity 14B as the non-panoramic sensor of the first embodiment.

The computer 16 is configured in particular to operate a classifier and, if necessary, a motion detection tool, and to collect images from the acquisition system 14 in order to feed an image database which will be used, off mission, to refine the classifier.

Computer 16 is, for example, a processor. The computer 16 comprises, for example, a data processing unit, memories, an information carrier reader and a man/machine interface, such as a keyboard or a display.

The computer 16 interacts, for example, with a computer program product which comprises an information medium.

The information medium is a medium readable by the computer 16, usually by the data processing unit of the computer 16. The readable information medium is a medium suitable for storing electronic instructions and capable of being coupled to a computer system bus. By way of example, the readable information medium is a floppy disk, an optical disk, a CD-ROM, a magneto-optical disk, a ROM memory, a RAM memory, an EPROM memory, an EEPROM memory, a magnetic card or an optical card. The computer program product comprising program instructions is stored on the information medium.

Advantageously, at least one classifier and, where applicable, at least one motion detection tool are stored on the information medium. Alternatively, the classifier and the motion detection tool are stored in a memory of the computer 16.

The classifier, also called classification tool in the remainder of the description, is configured to detect and classify elements E. The classification consists in assigning a class to each detected element E. The possible classes are, for example, general classes such as: the "human" class, the "animal" class, the "weapon system" class, the "land vehicle" class, the "sea vehicle" class and the "air vehicle" class. Advantageously, the classes are more precise classes, for example conforming to the distinctions between the elements which have been described above.

Advantageously, the classifier has been trained beforehand, off mission, according to an image database comprising images of the elements E to be detected. The classifier notably comprises at least one element E detection algorithm and an element E classification algorithm. The classifier is, for example, a neural network previously "trained" via the image database comprising images of the elements E to be detected. Advantageously, the learning or "training" phase is not carried out in the vehicle, but outside the mission.
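By way of illustration only, such an off-mission training phase could look like the following PyTorch sketch, assuming the image database is laid out as one folder per class; the path, backbone and hyperparameters are placeholders, not choices made by the text:

```python
# Hedged sketch of off-mission classifier training; the database path,
# backbone and hyperparameters are assumptions.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
db = datasets.ImageFolder("element_image_database/", transform=tf)  # hypothetical path
loader = torch.utils.data.DataLoader(db, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(db.classes))  # one output per class

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:   # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```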

In a particular embodiment (particularly the second and third embodiments), two classifiers are used: a first classifier trained beforehand, off mission, according to a first database of images in a first spectral band and a second classifier trained beforehand, off mission, according to a second database of images in a second spectral band. One of the spectral bands is, for example, between 380 nm and 780 nm and the other of the spectral bands is, for example, between 780 nm and 3 μm and/or between 3 μm and 5 μm and/or between 8 μm and 12 μm.

The motion detection tool, also called motion detector in the remainder of the description, is configured to detect mobile elements E according to images acquired at previous instants. The motion detection tool notably includes a motion detection algorithm. The motion detection tool is, for example, an algorithm based on the optical flow method.
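A minimal sketch of such a motion detector using OpenCV's dense Farneback optical flow; the magnitude threshold is an illustrative assumption:

```python
# Sketch of a motion detector based on dense optical flow (Farneback),
# as one possible realisation of the tool described above.
import cv2
import numpy as np

def moving_regions(prev_gray, curr_gray, threshold=2.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel displacement
    return magnitude > threshold               # mask of mobile pixels
```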

Each image database comprises in particular images of elements E associated with a classification, the elements E being for example imaged in a particular environment or context, or even under a particular angle of view. The classifications stored in the database were, for example, obtained via an operator or another classification tool, in particular during the post-mission analysis of a previous reconnaissance mission.

The computer program can be loaded onto the data processing unit and is adapted to bring about the implementation of the methods for assisting in the detection of elements E when the computer program is run on the data processing unit of the computer 16, as described below.

As a variant, at least part of the computer 16 is integrated with one or more of the sensors of the acquisition system 14 to form what are called smart sensors.

As a variant, at least part of the computer 16 is remote from the platform 10, the data transmissions taking place for example wirelessly, if the computing power of the processor integrated into the platform is too low.

The display device 18 is, according to the example of FIG. 1, a screen capable of displaying images to the operator, for example the images coming from the acquisition system 14 or the same images after processing by the computer 16.

A first operating mode of the device 12 will now be described with reference to the implementation by the computer 16 of a method for assisting in the detection of elements E in an environment. Such an implementation is illustrated by the flowchart of FIG. 2. In addition, in this first mode of operation, the acquisition system 14 of the device 12 conforms to the first embodiment described previously.

The method includes a step 100 of acquiring a first image IM1. The first image IM1 is acquired in real time, that is to say at the image acquisition frequency (for example 100 Hertz (Hz) or 50 Hz or 25 Hz or 12.5 Hz), the acquisition time of the image, also called integration time, being of the order of a few milliseconds (for example from 100 microseconds to 10 milliseconds depending on the luminosity of the scene and the sensitivity of the detector).

The first image IM1 is a panoramic image of the environment. The first image IM1 comprises a set of pixels. The first image IM1 has a first resolution and images the environment according to a first field of view. Image resolution is defined as the number of pixels per inch in the image (1 inch = 2.54 centimeters). The field of view of an image acquisition system, also called visual field or angle of view, corresponds to the total area of space that the acquisition system perceives when the system is fixed on a point.

To simplify, it is assumed that at this acquisition step 100, a single first image IM1 is acquired. The reasoning is obviously the same if a flow of first images is acquired.


Documents

Application Documents

# Name Date
1 202217010243.pdf 2022-02-25
2 202217010243-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [25-02-2022(online)].pdf 2022-02-25
3 202217010243-STATEMENT OF UNDERTAKING (FORM 3) [25-02-2022(online)].pdf 2022-02-25
4 202217010243-PRIORITY DOCUMENTS [25-02-2022(online)].pdf 2022-02-25
5 202217010243-POWER OF AUTHORITY [25-02-2022(online)].pdf 2022-02-25
6 202217010243-FORM 1 [25-02-2022(online)].pdf 2022-02-25
7 202217010243-DRAWINGS [25-02-2022(online)].pdf 2022-02-25
8 202217010243-DECLARATION OF INVENTORSHIP (FORM 5) [25-02-2022(online)].pdf 2022-02-25
9 202217010243-COMPLETE SPECIFICATION [25-02-2022(online)].pdf 2022-02-25
10 202217010243-Proof of Right [19-07-2022(online)].pdf 2022-07-19
11 202217010243-FORM 3 [19-07-2022(online)].pdf 2022-07-19
12 202217010243-FORM 3 [25-07-2022(online)].pdf 2022-07-25
13 202217010243-FORM 18 [27-07-2023(online)].pdf 2023-07-27
14 202217010243-FER.pdf 2025-03-03
15 202217010243-FORM 3 [30-04-2025(online)].pdf 2025-04-30
16 202217010243-FORM 3 [02-06-2025(online)].pdf 2025-06-02
17 202217010243-OTHERS [03-09-2025(online)].pdf 2025-09-03
18 202217010243-FER_SER_REPLY [03-09-2025(online)].pdf 2025-09-03
19 202217010243-DRAWING [03-09-2025(online)].pdf 2025-09-03
20 202217010243-CLAIMS [03-09-2025(online)].pdf 2025-09-03

Search Strategy

1 SearchHistoryE_07-03-2024.pdf