Abstract: The embodiments of the invention provide a method for determining a spatial coverage of a multi-sensor detection system comprising a plurality of sensors, the plurality of sensors being supported by supporting structures situated in a given geographical area, at least some of the supporting structures being movable, the method being characterised in that it comprises the steps of: - determining (201) a set of parameters representing the geographical area and comprising environmental data, data relating to the supporting structures and data relating to the sensors; - iteratively calculating (203) the detection probabilities for the plurality of sensors in the geographical area from at least some of the parameters; - determining (205) a three-dimensional representation of the spatial coverage of the detection system for the plurality of sensors; - determining (207) 3D shadow areas in the geographical area in which the detection probability is less than at least one given threshold; - selecting (209) the shadow areas located in at least one predefined security area of the system.
Title of the invention: Method for determining the spatial coverage of a detection system
[0001] Prior Art
[0002] The invention relates to detection systems, and in particular to the three-dimensional representation of the spatial coverage of a detection system.
[0003] Detection systems, such as sonar or radar, are used in many fields of application today. For example, sonars are used in the field of underwater acoustics to detect and locate objects underwater.
[0004] The detection systems can be used by various surveillance infrastructures (for example for the detection of submarines or of objects on the seabed; in the field of fishing, for the detection of schools of fish; in the field of cartography, to map a geographical area such as the bottom of the oceans and other bodies of water; or in the field of archaeology, for example underwater and subaquatic archaeology).
[0005] The detection systems are equipped with antennas for transmitting and/or receiving signals.
[0006] Detection systems are conventionally used to detect threats in areas to be protected, for example in areas defined with respect to a sensitive infrastructure to be protected.
[0007] For example, in a sonar-type detection system, it is known to use an anti-submarine warfare system comprising elements for managing naval and air fleet scenarios, tools for modeling the spatial coverage of the sonar, and tactical decision support tools.
[0008] For example, in "P.C. Etter, Underwater Acoustic Modeling and Simulation, CRC Press, Taylor & Francis Group, 5th edition, 2018", modeling and simulation technologies for tactical decision support in the context of anti-submarine warfare have been proposed to improve the efficiency of the management and deployment of deployed resources. Such a solution uses a network of geographically distributed components comprising Autonomous Undersea Vehicles (AUVs), surface ships, submarines, aircraft, and/or satellites. Such network components deploy distributed sensors.
[0009] The processing of the signals received by the detection system uses a signal processing step and an information processing step. The processing of the data collected by the various components of the detection system network makes it possible to represent the spatial coverage of the detection system, from which an operator of the detection system can determine the dynamics and the trajectory of the target objects, the position of the target objects in range, bearing and depth, as well as the nature of the target objects.
[0010] It is known to use a representation of the detection probabilities called 'Performance Of the Day' (POD). A POD representation is a representation of the probabilities of detection which makes it possible to determine the spatial coverage of a detection system and to estimate the capacity of the detection system to detect target objects. Such a representation is generated from a matrix of detection probabilities as a function of the distance from the position of the detection system antenna and of the depth between the surface and the seabed, using a color coding. The detection probabilities can be displayed graphically as data or as colored elements, each color representing a probability range (0 to 10%, 11 to 20%, and so on).
[0011] The shapes obtained by means of this representation are relatively complex and non-uniform, due to the non-linearity of wave propagation in the environment considered (for example the propagation of sound in water for a sonar-type detection system) and to the fact that the image which represents these detection probabilities only makes it possible to visualize a part of the scene considered. In complex environments, the final shape obtained in three dimensions (3D) can be relatively complex to interpret. Furthermore, this representation does not make it possible to highlight the dangerous zones representing a potential risk for the infrastructure using the detection system (a platform or building to be protected, for example). Nor does it make it possible to have a complete global view in three dimensions of the geographical area covered by the detection system. Furthermore, such a representation does not make it possible to cover a sufficiently large area with respect to the scale of the infrastructure using the detection system.
[0012] For sonar-type detection systems, tactical decision support tools have been proposed. However, such tools are suboptimal. Indeed, the known decision support tools provide a modeling environment considering a single sensor at a time, and over a distance linked to the maximum range of this sensor. This does not make it possible to have an overall view of the geographical area covered by the sonar, and can expose the infrastructure to be protected (a naval fleet for example) to the risk of attack by threats that cannot be detected if they evolve in areas not covered by the sensor in question.
[0013] In addition, in the existing solutions based on the representation of detection probabilities to assess the detection capacity of the detection system, the representation takes into account the probabilities of detection of a single sensor of the detection system at a time and does not make it possible to generate a visualization of the entire zone in a way that can be used by the operator of the detection system, which can limit the effectiveness of the naval fleet.
[0014] There is therefore a need for an improved method and device capable of determining the spatial coverage of a detection system in an efficient manner. To this end, the subject of the invention is a method for determining the spatial coverage of a multi-sensor detection system comprising a plurality of sensors, the plurality of sensors being carried by supporting structures moving in a given geographical area, at least some of the supporting structures being mobile, the method being characterized in that it comprises the steps consisting in:
- determining a set of parameters representing the geographical area and comprising environmental data, data relating to the supporting structures and data relating to the sensors;
- iteratively calculating the detection probabilities of the plurality of sensors in the geographical area from at least some of the parameters;
- determining a three-dimensional representation of the spatial coverage of the detection system for the plurality of sensors;
- determining 3D shadow areas of the geographical area in which the probability of detection is less than at least one given threshold;
- selecting the shadow zones located in at least one predefined security zone of the system.
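The steps above can be sketched in code. The sketch below is purely illustrative and is not the patented implementation: the data shapes (per-sensor 3D probability grids on a common voxel grid), the independence-based combination rule and the function names are all assumptions introduced for clarity.

```python
import numpy as np

def determine_spatial_coverage(sensor_probs, security_mask, threshold=0.5):
    """Illustrative sketch of steps 203-209 on a shared 3D voxel grid.

    sensor_probs  : list of 3D arrays, per-sensor detection probabilities (step 203)
    security_mask : boolean 3D array marking the predefined security zone(s)
    """
    # Combine per-sensor probabilities (assuming independent detections)
    # into one coverage volume: the 3D representation of step 205.
    coverage = 1.0 - np.prod([1.0 - p for p in sensor_probs], axis=0)

    # Step 207: 3D shadow areas are voxels below the detection threshold.
    shadow = coverage < threshold

    # Step 209: keep only the shadow voxels inside the security zone(s).
    return coverage, shadow & security_mask
```

The returned boolean volume marks the risk voxels that a display device could then highlight to the operator.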
[0015] According to certain embodiments, the method also comprises the determination of an intersection of the selected shadow zones forming an access channel to security zones.
[0016] According to certain embodiments, the detection system can be a sonar-type detection system, the environment being a maritime environment, said environment data comprising data variable in space and in time and representing a profile of the seabed in a defined geographical area, a temperature profile of the volume of water, information defining the state of the sea, and information relating to sea noise, the data relating to the supporting structures evolving in the marine domain defining one or more mobile platforms and/or one or more bottom objects.
[0017] According to certain embodiments, said calculation can be carried out periodically according to a chosen period, or continuously.
[0018] According to certain embodiments, the spatial coverage of the detection system is represented according to a given position and viewing orientation, the step of determining a three-dimensional representation of the spatial coverage of the detection system comprising, for each sensor of said detection system, the steps consisting in:
- receiving a set of data comprising a probability of detection of said sensor in said geographical zone, the probability of detection being determined in a calculation zone associated with said sensor and included in said geographical zone, the data of said set comprising position data of the calculation zone and dimension data of the calculation zone;
- determining a 3D rendering of said probability of detection of said sensor from at least some of the probability data, the position data of the calculation zone and the dimension data.
[0019] According to some embodiments, the method further comprises a step of determining a main data structure having at least three dimensions from said probability data.
[0020] According to certain embodiments, the method may further comprise a step consisting in determining a depth rendering of the volume encompassing the calculation zone from the position data of the calculation zone, the dimension data, and viewing position and orientation.
[0021] According to certain embodiments, the 3D rendering may be volumetric, the method further comprising a step consisting in determining a volumetric rendering of the probability of detection according to at least one function in a given colorimetric space from said main data structure and said depth rendering.
[0022] According to certain embodiments, the step of determining a volume rendering can comprise the steps consisting in:
- determining a grayscale volume rendering of the probability of detection from said 3D data structure and said depth rendering;
- determining a color volume rendering of the probability of detection from the grayscale volume rendering.
[0023] According to certain embodiments, the step of determining a three-dimensional representation of the spatial coverage of the detection system comprises a step of generating the display of said three-dimensional representation of the spatial coverage, of the shadow zones inside the security zones, and of the access channels to the security zones, on a screen or in augmented reality mode or in virtual reality mode.
[0024] Advantageously, the embodiments of the invention allow real-time calculation of the detection probabilities associated with a multi-sensor detection system.
[0025] Advantageously, the embodiments of the invention allow a real-time representation of the probabilities of detection associated with a multi-sensor detection system for a real three-dimensional scene.
[0026] Advantageously, the embodiments of the invention can provide a mode of representation of the probabilities of detection in volume form, which allows a 3D display of the spatial coverage of the detection system in the entire geographical area considered.
[0027] Advantageously, the embodiments of the invention allow a three-dimensional representation of the spatial coverage of the detection system on a display device such as a screen or an augmented reality device.
[0028] The representation of the spatial coverage of the detection system on an augmented reality device allows in particular a better perception of the representation in three dimensions, the sharing of the same tactical situation between several actors or control systems, and the visualization of a hologram on a tactical table without requiring the addition of additional screens in the operational environment considered.
[0029] The embodiments of the invention also make it possible to determine the actions to be implemented in response to the highlighting of shadow areas not covered by the detection system and located in security areas of the geographical area, or of an access channel constructed as the sum of the shadow zones of non-coverage leading to these security zones, regardless of the complexity of the shape of these shadow zones or of these access channels.
[0030] Brief description of the drawings
[0031] Other characteristics and advantages of the invention will become apparent with the aid of the following description given with reference to the appended drawings, given by way of example, and which represent, respectively:
[0032] [Fig.1] is an example of a process environment for determining spatial coverage of a sonar-type detection system, according to certain embodiments.
[0033] [Fig.2] is a flowchart representing a method for determining spatial coverage of a multi-sensor detection system, according to certain embodiments of the invention.
[0034] [Fig.3] is a flowchart representing a method for determining a three-dimensional representation of the spatial coverage of a multi-sensor detection system, according to certain embodiments of the invention.
[0035] [Fig.4] is a schematic view representing an accumulation-based ray tracing algorithm ('Ray Marching' in English), according to certain embodiments of the invention.
[0036] [Fig.5] represents an example of the spatial coverage of a sonar-type detection system used for the detection of objects, obtained by using prior-art methods for displaying the spatial coverage.
[0037] [Fig.6] is a schematic representation of the theoretical target detection zones in an example of application to the detection of objects in the underwater environment.
[0038] [Fig.7] is a schematic representation of realistic detection zones in an example of application to the detection of objects in the underwater environment.
[0039] [Fig.8] represents an example of on-screen display of the spatial coverage of a sonar-type multi-sensor detection system obtained from the volume rendering of the probabilities of detection, according to certain embodiments of the invention.
[0040] [Fig.9] represents an example of display in augmented reality mode of the spatial coverage of a sonar-type multi-sensor detection system obtained from the volume rendering of the probabilities of detection, according to certain embodiments of the invention.
[0041] [Fig.10] is a schematic view of a device for determining the spatial coverage of a detection system, according to certain embodiments of the invention.
[0042] Detailed Description
[0043] The embodiments of the invention provide a method for determining the spatial coverage of a multi-sensor detection system deploying a plurality of sensors and evolving in a given geographical area. The geographical zone includes a calculation zone, the calculation zone being a restricted geometrical zone included in the geographical zone such that it comprises all the non-zero calculated detection probabilities of the detection system, and determined by a center and a reference frame (for example a Cartesian reference frame).
[0044] FIG. 1 represents an example of an environment 100 in which a method is used for determining a three-dimensional representation of the spatial coverage of a detection system operating in a geographical area.
[0045] The detection system can be any detection system carried by a support structure capable of moving in the geographical area, such as a radar or a sonar for example. In the example of FIG. 1, the detection system is a sonar carried by a carrier structure of the surface-ship type 101. The spatial coverage of the detection system is represented according to a given position and viewing orientation (the set of position and orientation data is also called the "viewpoint"). The data set associated with a detection system carried by a support structure includes, by way of non-limiting example, position data of the calculation zone, dimension data of the calculation zone, and information about the supporting structure and the sensors.
The detection system can be used in various systems or infrastructures for the detection and localization of objects in the geographical area considered (for example in water for a sonar type detection system). For example, the detection system may comprise one or more acoustic detection devices (for example sonars 107, 108 and/or 109) used:
- for the detection of submarines or surface vessels and/or threats (e.g. mines) or objects lying on the seabed;
- for the detection of schools of fish, in the field of maritime and river navigation;
- in hydrography to map the bottom of the oceans and other bodies of water,
- in underwater and subaquatic archeology, or for aquatic pollution sensing.
[0047] The detection system can also be used to locate the detected objects.
[0048] In the example of the environment 100 of FIG. 1 (representing an anti-submarine warfare device), other auxiliary elements can be deployed to implement preventive, defensive or offensive actions depending on the detected object. The auxiliary elements can comprise for example a surface ship 101 equipped with one or more sonars, one or more maritime patrol aircraft, and one or more attack submarines. The sonars can for example include an active bow sonar 107, an active towed sonar 108, and an active dipping sonar 109 deployed by the helicopter 103.
The various elements of the environment 100 can be controlled by an operator or a control system present for example in the surface ship 101, to monitor and protect the elements of the environment 100 against threats by implementing operations or actions. The operator or the control system can for example implement actions of the preventive, defensive or offensive type.
The following description of the embodiments of the invention will be made mainly with reference to a sonar-type detection system implementing at least one passive or active multi-sensor sonar to facilitate understanding of the embodiments of the invention, by way of non-limiting example. However, those skilled in the art will easily understand that the invention applies more generally to any detection system capable of detecting an object in water or in any other environment.
[0051] Figure 2 illustrates a method for determining the spatial coverage of a multi-sensor detection system deploying a plurality of sensors carried by supporting structures moving in a given geographical area. A carrier structure can carry one or more sensors. A sensor can operate in passive, mono-static, or multi-static mode.
[0052] According to certain embodiments, the supporting structures can be independent or non-independent.
[0053] In step 201, a set of parameters representing the given geographical area is determined, the set comprising environmental data (for example data on the underwater, marine or terrestrial environment), data relating to the supporting structures (for example the speed of the supporting structure) and data relating to the sensors of the multi-sensor detection system (for example, for a sonar-type detection system, the data relating to the sensors may include the passive or active mode of the sonar, the frequency or frequencies of the sonar, the maximum sonar range, and the sonar gain). The environment data may further include tactical data. Such parameters make it possible to define the location of the detection system and the environmental conditions relating to this location.
In a sonar-type detection system, the environmental data may include marine and/or land environment data that vary in space and time and represent:
- the profile of the seabed in a defined geographical area (which can be chosen with a high resolution, going for example up to 20cm); this profile can be supplemented by the type of bottom (combinations of mud, sand and/or rock);
- a water volume temperature and salinity profile, the water volume temperature and salinity influencing the propagation of acoustic waves;
- information defining the state of the sea (for example the height of the waves and their direction) and having an influence on the way in which the acoustic waves are reflected on the surface;
- information related to sea noise (e.g. rain, traffic noise, diffuse biological noise, seismic noise, etc.).
[0055] According to certain embodiments, the data relating to the supporting structures evolving in the given geographical zone can be associated with the supporting structures evolving in the marine domain of the geographical zone, such as one or more mobile platforms (for example, surface vessels, submarines, drones, airplanes, helicopters, drifting sonar buoys, biological entities, etc.) and/or one or more bottom objects (e.g. mines, static acoustic detection systems, wrecks, or any other bottom object having an influence on the detections). The supporting structures can have a predefined trajectory, or be controlled by an operator of the detection system or by a control system. Supporting structures can have their own noise (due for example to machinery and/or flow noise in water), can be equipped with sensors of any type, and/or can also emit sounds. Bottom objects can be equipped with sensors, behavior automata and explosive charges.
[0056] In an exemplary embodiment, the tactical data can be associated with one or more threats to be detected (a composed fleet for example), and the supporting structures can include one or more High Value Units (HVU) and one or more elements protecting the HVU units, arranged in surveillance zones or at Limit Lines of Approach (LLA).
[0057] In an exemplary embodiment, step 201 can be performed by a scenario generator in the environment considered (a marine environment for example).
[0058] In step 203, a probability of detection can be calculated iteratively, for example at predefined time intervals or according to a fixed period or continuously, for the plurality of sensors of the multi-sensor detection system from at least some of the set of parameters determined in step 201. A probability of detection, denoted p(D), denotes the probability that a real target echo will be detected when a real target echo exists. In a passive sonar type detection system, the probability of detection represents the probability of detecting a real target object that emits a given level of noise at different frequencies. In an active sonar type detection system, the probability of detection represents the probability of obtaining a viable echo thanks to an emission of acoustic waves by the active sonar. The probability that a real target echo is not detected while the target echo exists is 1 − p(D). The probability of false alarm, denoted p(FA), designates the probability that a false echo (or spurious echo) will be detected, i.e. the probability that there is a false detection of an echo which is actually noise.
The detection probability calculated for the plurality of sensors can be determined by combining the elementary detection probabilities of the different sensors.
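The text does not specify the combination rule. One common choice, shown below as an illustrative assumption, is to treat the sensors as independent and take the complement of the product of the individual miss probabilities:

```python
def combined_detection_probability(elementary_probs):
    """Probability that at least one sensor detects the target,
    under the (assumed) hypothesis of independent sensors."""
    p_miss = 1.0
    for p in elementary_probs:
        p_miss *= (1.0 - p)  # probability that every sensor misses
    return 1.0 - p_miss
```

For example, two sensors with elementary probabilities of 0.6 and 0.5 would yield a combined probability of 1 − 0.4 × 0.5 = 0.8 under this assumption.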
[0060] The probabilities of detection can change at any time given the profile of the environment in the vicinity of the support structure (the bottom under the platform considered, for example), the movements of the support structure and the environmental conditions. In one embodiment, the probabilities of detection can be updated periodically according to a chosen period of less than one minute, for example every 30 seconds (or more quickly depending on the computing power available), taking into account the speeds of the supporting structures (the speed is low for naval platforms, for example) and the fact that the probabilities of detection vary according to certain quantities relating to the environment.
[0061] Step 203 may comprise a step of saving the data representing the detection probabilities calculated for the plurality of sensors in a file. In one embodiment, the detection probability data may be represented by a probability data matrix. The position of the center of this matrix (center of the reference frame of the matrix) can then be located at the level of the position of the supporting structure of the detection system.
[0062] In step 205, a three-dimensional representation of the spatial coverage of the detection system can be determined for the plurality of sensors. The three-dimensional representation of the spatial coverage according to the embodiments of the invention makes it possible to represent the probabilities of detection for the set of sensors of the multi-sensor detection system and over the entire geographical area, taking into account the parameters representing the geographical area. For example, in an application of the invention to a sonar-type detection system, the representation of the spatial coverage in three dimensions can take into account the relief of the seabed, the supporting structures evolving in the geographical zone in the marine environment, and the tactical zones determined in step 201. The relief of the environment considered, such as for example the relief of a seabed, can be imported from maps such as the nautical charts used for navigation, for example of the S57 type. The relief can undergo smoothing and lighting processing, with shadow processing, in order to optimize the representation of the complete zone. The supporting structures can be associated with a fairly faithful 3D model given the scale of the objects. The position of the supporting structures can be updated periodically, for example every 200 ms, the period being chosen or determined to be adapted to fluid movements of the supporting structures taking account of their speeds.
[0063] According to some embodiments, step 205 may comprise a display of the three-dimensional representation of the spatial coverage of the detection system on a rendering device such as a screen, an augmented reality rendering device or a virtual reality rendering device. The display in augmented reality mode advantageously makes it possible to improve the perception of the representation in three dimensions thanks to holograms, to share the same tactical situation between several operators equipped with rendering devices allowing them to visualize the holograms from different viewpoints, and to transfer the hologram onto a tactical table without the need to add additional screens to the already cluttered operational environment.
[0064] At step 207, the three-dimensional representation of the spatial coverage can be analyzed to determine 3D shadow areas of the geographic area, the shadow areas being areas in which the probability of detection is lower than at least one given threshold.
[0065] In step 209, the shadow zones located in at least one predefined security zone of the system are selected. Security zones can be selected based on strategic criteria.
[0066] The security zone with respect to a structure to be protected by the multi-sensor detection system can be substantially a sphere centered on the infrastructure to be protected, the radius of the sphere being for example of the order of 30 km.
[0067] A 3D shadow zone may for example be a non-detection basin.
[0068] According to certain embodiments, step 209 can comprise the determination of an intersection of the selected shadow zones forming an access channel to the security zones, a channel designating the sum of the shadow zones leading to the security zone. A risk zone is a shadow zone located inside the security zone.
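Steps 207 and 209 can be sketched for the spherical security zone mentioned above. The grid conventions (voxel origin, uniform resolution) and the function name are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def shadow_zones_in_security_zone(prob, origin, center, resolution,
                                  threshold=0.5, radius=30_000.0):
    """Select shadow voxels (p(D) < threshold, step 207) lying inside a
    spherical security zone of the given radius (metres) centred on the
    infrastructure to be protected (step 209).

    prob   : 3D array of detection probabilities on a uniform voxel grid
    origin : (x, y, z) coordinates of voxel (0, 0, 0)
    center : (x, y, z) coordinates of the protected infrastructure
    """
    nx, ny, nz = prob.shape
    # Voxel-centre coordinates in the calculation-zone reference frame.
    x, y, z = np.meshgrid(
        origin[0] + resolution * np.arange(nx),
        origin[1] + resolution * np.arange(ny),
        origin[2] + resolution * np.arange(nz),
        indexing="ij",
    )
    dist = np.sqrt((x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2)
    return (prob < threshold) & (dist <= radius)
```

The resulting boolean volume marks the risk voxels; summing or intersecting such masks over several security zones would give the access channels described above.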
According to certain embodiments, the three-dimensional representation of the spatial coverage of the multi-sensor sonar in step 205 can be based on the determination of a 3D rendering of the detection probabilities calculated for the set of sensors at step 203. In one embodiment, the step of determining the 3D (three-dimensional) rendering of the probability of detection may further take into account the visualization data (position and orientation data).
[0070] The determination of the 3D rendering of the detection probabilities can be performed using 3D image synthesis techniques to convert the raw detection probability data into a 3D image, in order to generate a display of the 3D image on the chosen rendering device (a 3D screen, an augmented reality device or a virtual reality device).
The 3D representation of the spatial coverage can advantageously be superimposed with a representation of the geographic area on the rendering device, which allows a 3D display of the spatial coverage of the detection system in the geographic area considered.
The creation of a synthetic image can be broken down into three main steps: the geometric modeling of the objects of the scene to be represented, the creation of the scene, and the rendering.
[0073] Geometric modeling makes it possible to define the geometric properties of objects either by a mathematical representation based on the definitions or mathematical systems which describe them, or by a construction-tree representation which represents complex objects as a composition of simple objects called primitives, or by a boundary representation which represents an object by materializing the limit between the interior of the object and the exterior of the object with a series of geometric elements linked together (usually triangles).
The scene creation step makes it possible to define the appearance of the objects and to determine the non-geometric parameters of the scene to be displayed. The appearance of objects is defined by determining surface or volume properties of objects including optical properties, color and texture. The non-geometric parameters of the scene include the position and type of the light sources, the position and the orientation of the visualization of the scene forming the chosen point of view.
[0075] The assignment of a color to an object according to the embodiments of the invention is based on the use of a colorimetric space taking into account the opacity. For example, the data conversion can be carried out by using the RGBA color coding format, which is an extension of the RGB format taking into account the notion of transparency. In such an example, each pixel displayed on the image represents a vector of four components, comprising a first value R representing a red color value (Red), a second value G representing a green color value (Green), a third value B representing a blue color value (Blue), and a fourth value A representing a transparency component (or alpha component). Using the geometric representation in Euclidean space, each pixel is associated with three geometric dimensions (x, y, z) in an XYZ frame of reference comprising a width x (the abscissa), a depth y (the ordinate) and a height z (the applicate). The XYZ frame of reference is defined by the calculation zone and can be centered at the center of the calculation zone.
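A transfer function mapping a detection probability to an RGBA vector might look as follows. The color ramp (blue for low coverage, red for high) and the alpha scaling are illustrative assumptions; the patent does not prescribe a particular function:

```python
def probability_to_rgba(p, alpha_scale=0.8):
    """Illustrative RGBA transfer function (not taken from the text):
    low probabilities shade towards blue, high towards red, and the
    alpha component makes weakly covered voxels nearly transparent."""
    p = max(0.0, min(1.0, p))  # clamp to the valid probability range
    r = p                      # red grows with detection probability
    g = 0.0
    b = 1.0 - p                # blue dominates in poorly covered areas
    a = alpha_scale * p        # transparent where coverage is negligible
    return (r, g, b, a)
```

Because the alpha component is proportional to the probability, empty regions of the calculation zone remain see-through in the final volume rendering.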
As used here, a 3D texture refers to a data structure allowing the representation of a synthetic image.
A rendering process refers to a computer process consisting in converting the model of the objects into a displayable image on the chosen rendering device.
The determination of the spatial coverage of the detection system in three dimensions from the detection probabilities can use the transformation (or conversion) of the raw detection probability data into a structure usable by the rendering calculation functions.
[0079] FIG. 3 is a flowchart representing the step 205 of determining a three-dimensional representation of the spatial coverage of a multi-sensor detection system evolving in a geographical area, according to certain embodiments.
The steps of FIG. 3 are carried out for each sensor of the detection system.
[0081] In step 301, a set of data comprising a probability of detection of the sensor in the geographical area is received, the probability of detection being determined in a calculation zone associated with the sensor considered and included in said geographical area. The data of the set further comprise the dimensions of the calculation zone associated with the sensor considered, and position data of the calculation zone. In particular, the data of the set can be read or extracted. At step 301, visualization data (position and orientation) can also be received.
A 3D rendering of the probability of detection of the sensor is then determined from at least some of the probability data, the position data of the calculation zone and the dimension data.
In one embodiment, to generate the 3D rendering, the method may include a step 303 in which a main data structure having at least three dimensions (in particular 3D or 4D), conventionally called a '3D texture', is determined from the detection probability data. The data structure can for example be a matrix.
[0084] In one embodiment, the set of input data can also comprise the input resolution, the input resolution corresponding to the distance between two points of the calculation zone. Step 303 of determining the data structure can then include the steps of:
- generating an auxiliary data structure from the detection probability data, the auxiliary data structure having dimensions defined from the probability, and from the input resolution, and
- determining the 3D texture using a conversion of the auxiliary structure data into colorimetric data.
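The two sub-steps above (generating an auxiliary structure from the probability data and the input resolution, then converting it into colorimetric data) can be sketched as follows; the nearest-neighbour resampling and the grayscale RGBA mapping are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def build_3d_texture(prob, input_resolution, target_resolution=1.0):
    """Sketch of step 303: derive a 3D texture from probability data.

    `prob` is a 3D array of detection probabilities; `input_resolution`
    is the spacing between two points of the calculation zone. The
    resampling factor and the grayscale conversion are assumptions."""
    # Auxiliary data structure: nearest-neighbour resampling of the
    # probability grid so its dimensions follow the input resolution.
    factor = max(1, int(round(input_resolution / target_resolution)))
    aux = prob.repeat(factor, 0).repeat(factor, 1).repeat(factor, 2)

    # Colorimetric conversion: probability -> grayscale RGBA voxel.
    tex = np.empty(aux.shape + (4,), dtype=np.float32)
    tex[..., 0] = tex[..., 1] = tex[..., 2] = aux  # R = G = B = probability
    tex[..., 3] = aux                              # alpha follows probability
    return tex

tex = build_3d_texture(np.full((2, 2, 2), 0.5), input_resolution=2.0)
```

The resulting 4D array plays the role of the '3D texture' consumed by the rendering steps.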
[0085] In step 305, a depth rendering of the bounding volume of the calculation zone can be determined from the position and dimension data of the calculation zone, and from the position and viewing orientation data.
[0086] In particular, the step 305 of determining the depth rendering may comprise the determination of a first depth image of a cube encompassing the data structure and of a second depth image of the rear face of the cube. The depth of the cube encompassing the main data structure (3D texture) represents the distance of the surface of the cube from the viewing position and orientation ('Z-depth'), the depth rendering comprising the first depth image and the second depth image. A depth image comprises a set of surfaces associated with position information. In one embodiment, the second depth image can be determined as the depth image of the cube whose normals have been flipped.
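As a hedged illustration of the information the two depth images carry, the following sketch computes, for a single ray, the entry distance (front face) and exit distance (back face) of the axis-aligned cube bounding the 3D texture using the standard slab method; in a renderer these distances would be read per pixel from the depth images, and all names here are illustrative.

```python
import numpy as np

def cube_entry_exit(origin, direction, lo, hi):
    """Slab-method sketch: distances along a ray at which it enters and
    leaves the axis-aligned cube [lo, hi]^3 -- the same information the
    first (front-face) and second (back-face, flipped normals) depth
    images encode per pixel. Assumes no zero direction component."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    inv = 1.0 / direction
    t1, t2 = (lo - origin) * inv, (hi - origin) * inv
    t_near = np.minimum(t1, t2).max()   # entry distance (front depth image)
    t_far = np.maximum(t1, t2).min()    # exit distance (back depth image)
    return (t_near, t_far) if t_near <= t_far else (None, None)

# Ray through the cube [1, 2]^3 along the main diagonal direction.
near, far = cube_entry_exit([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 1.0, 2.0)
```

A ray that misses the cube yields `(None, None)`, i.e. no marching is needed for that pixel.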
In one embodiment, the 3D rendering may be a volume rendering. In such an embodiment, in step 307, a volume rendering of the probability of detection is determined according to at least one function in a given colorimetric space from the main data structure (3D texture) and from the depth rendering.
In one embodiment, the functions used to determine the volume rendering can comprise at least one transfer function defined from a minimum probability bound, a maximum probability bound, a minimum colorimetry bound, and a maximum colorimetry bound.
Each item of information of a depth image associated with geometric dimensions x, y, z can be defined in a colorimetric space, for example in the RGBA coding format.
[0090] In one embodiment, the step 307 of determining a volume rendering can include:
- a step 3071 of determining a volume rendering in gray levels of the probabilities of detection from the 3D texture and the depth rendering and
- a step 3073 of determining a volume rendering in colors of the probabilities of detection from the volume rendering in gray levels.
[0091] In one embodiment, the volume rendering in gray levels can be determined from the 3D texture, the first depth image, and the second depth image by applying a volume rendering calculation algorithm (or technique) of the ray casting by accumulation type ('Ray Marching').
A Ray Marching type algorithm is illustrated in FIG. 4 in an embodiment using RGBA coding. A Ray Marching type algorithm is based on geometric optics to simulate the path of light energy in the image to be displayed. A projection plane 403, placed in front of a viewpoint 401, represents the displayed image (that is to say the volume rendering in gray levels). Each point of the projection plane 403 corresponds to a pixel of the volume rendering in gray levels. The implementation of step 307 by applying a Ray Marching type algorithm according to the embodiments of the invention can comprise the generation of a ray 407 (defined by an origin point and a direction) for each pixel of the desired grayscale volume rendering image. The ray can be sampled at regular steps inside the volume 405 and the values in the colorimetric space (RGBA colors for example) of the various samples thus calculated can be summed in proportion to their alpha transparency contribution. The algorithm traverses, in the direction of the projection plane 403, the oriented volumes of the results representing the detection probabilities, while collecting the probability values step by step. The value of a displayed pixel constitutes the result of a function (for example a transfer function) of the collected values.
For each point of the projection plane, each point corresponding to a pixel of the volume rendering, the Ray Marching type algorithm can be applied to calculate a probability value accumulated by a ray 407 travelling from the front of the cube bounding the 3D texture to the back of the cube.
In one embodiment, the application of a Ray Marching-type algorithm at step 307 may include the operations consisting in, for each pixel of the volume rendering and for each update of the detection probabilities:
- determining the 3D start and end positions of the ray 407 from the color values read, for the selected pixel, in the textures of the first depth image and of the second depth image;
- calculating a step-by-step displacement vector for a given or predetermined number of iterations from the direction vector of the ray 407, the distance of each iteration step being determined by dividing the total distance by the number of iterations, and multiplying the direction vector by the step distance.
For each iteration, the determination of a color representing a probability value for a selected pixel can include the operations consisting in:
- updating the position along the path of the ray 407 by adding the displacement vector;
- associating with the updated position in the 3D texture a color representing a gray level associated with the corresponding probability: for example, a 'black' value can be associated with a probability close to zero ('0') and a 'white' value can be associated with a probability close to one ('1');
- determining the alpha transparency component of the color value;
- applying a function in the colorimetric space (for example a transfer function);
- adding the determined color to the color of the pixel resulting from the algorithm.
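The accumulation loop above can be sketched for a single ray as follows; nearest-neighbour sampling and the front-to-back 'over' compositing used here are assumptions of the sketch, not the patented implementation.

```python
import numpy as np

def march_ray(texture, start, end, n_steps=32):
    """Sketch of the Ray Marching accumulation: walk a ray from the front
    face of the bounding cube to the back face in `n_steps` regular steps,
    compositing RGBA samples front to back in proportion to their alpha."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    step = (end - start) / n_steps          # step-by-step displacement vector
    color = np.zeros(3)
    alpha = 0.0
    pos = start.copy()
    for _ in range(n_steps):
        pos = pos + step                    # update the position along the ray
        # Nearest-neighbour lookup of the RGBA voxel at the current position.
        idx = tuple(np.clip(pos.astype(int), 0,
                            np.array(texture.shape[:3]) - 1))
        r, g, b, a = texture[idx]
        # Accumulate in proportion to the remaining transparency.
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha >= 0.999:                  # early exit: ray is opaque
            break
    return color, alpha

# Uniform half-transparent mid-gray volume: the ray saturates towards gray.
tex = np.full((8, 8, 8, 4), 0.5, dtype=np.float32)
color, alpha = march_ray(tex, start=[0.0, 4.0, 4.0], end=[8.0, 4.0, 4.0])
```

A full renderer would run this loop once per pixel of the projection plane, with the start and end positions read from the two depth images.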
[0096] In one embodiment, the transfer function can be a software function configured to perform a linear or non-linear interpolation, between a minimum color bound and a maximum color bound, in the colorimetric space in which the volume rendering is determined, for a detection probability comprised between a minimum probability bound and a maximum probability bound.
[0097] For example, at step 3073, a color volume rendering of the detection probabilities is determined from the gray level volume rendering of the detection probabilities determined at sub-step 3071. The color volume rendering can be determined by applying a color transfer function to the grayscale volume rendering, the color transfer function using the minimum probability bound, the maximum probability bound, the minimum color bound, and the maximum color bound. The color transfer function can then, for example, execute a linear interpolation between the minimum color bound and the maximum color bound.
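A transfer function of the kind described, linearly interpolating between color bounds over probability bounds, might look like the following sketch; the particular bounds and colors chosen here are illustrative assumptions.

```python
import numpy as np

def transfer(p, p_min=0.0, p_max=1.0,
             c_min=(0.0, 0.0, 1.0, 0.1),    # e.g. nearly transparent blue
             c_max=(1.0, 0.0, 0.0, 0.9)):   # e.g. nearly opaque red
    """Linear transfer-function sketch: map a detection probability
    between `p_min` and `p_max` to an RGBA color interpolated between
    `c_min` and `c_max`. Bounds and colors are assumptions."""
    t = np.clip((p - p_min) / (p_max - p_min), 0.0, 1.0)
    return tuple((1.0 - t) * lo + t * hi for lo, hi in zip(c_min, c_max))

mid = transfer(0.5)
```

A non-linear variant would simply replace the interpolation parameter `t` by some monotonic function of it.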
In another embodiment, the 3D rendering can be a surface rendering. The method then comprises, as an alternative to steps 303, 305 and 307, a step 311 consisting in determining a surface rendering from the detection probabilities, the position data of the calculation zone, the dimension data, at least one threshold value of detection probabilities, and the viewing position and orientation.
[0099] In particular, the step 311 for determining the surface rendering may comprise the generation of polygonal objects from the three-dimensional data matrix to approximate at least one iso-surface constructed from at least one given detection probability threshold ('Marching Cubes' algorithm).
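A full implementation of step 311 would emit triangles per cube via a Marching Cubes routine; as a much simpler stand-in, the following sketch only locates the voxels through which the iso-surface at a given probability threshold passes, which conveys the idea of iso-surface extraction without the triangulation tables.

```python
import numpy as np

def iso_boundary_voxels(prob, threshold):
    """Crude voxel-level stand-in for Marching Cubes: mark voxels at or
    above the threshold that have at least one 6-neighbour below it,
    i.e. the cells the iso-surface prob == threshold passes through."""
    above = prob >= threshold
    boundary = np.zeros_like(above)
    for axis in range(3):
        for shift in (1, -1):
            neigh = np.roll(above, shift, axis=axis)
            # np.roll wraps around; copy the edge slice so the domain
            # border does not create spurious boundary voxels.
            sl = [slice(None)] * 3
            sl[axis] = 0 if shift == 1 else -1
            neigh[tuple(sl)] = above[tuple(sl)]
            boundary |= above & ~neigh
    return boundary

# Sphere-like probability field: the boundary voxels form a shell.
g = np.indices((16, 16, 16)).astype(float) - 7.5
prob = np.clip(1.0 - np.linalg.norm(g, axis=0) / 8.0, 0.0, 1.0)
shell = iso_boundary_voxels(prob, threshold=0.5)
```

A production renderer would instead run a marching-cubes routine on `prob` at `level=0.5` to obtain vertices and triangles for the surface rendering.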
[0100] The functions for calculating the probabilities of detection and for determining the texture can be software functions executed by a processing unit or processor. The functions implemented in the rendering process can be software programs called 'shaders', executed by a graphics processing unit.
The method according to the various embodiments of the invention can be implemented in a detection system in 'mission preparation' mode or in 'online' mode. The 'mission preparation' mode makes it possible to optimize the movement of the detection system by simulating a scenario based on a future mission. The 'online' mode consists in implementing the method by coupling the detection system to the tactical links. In this 'online' mode, the actual positions, speeds, and headings of the various load-bearing structures as well as the use of their various sensors can be used to update the parameters defining the environment, making it possible to generate a representation in real time.
[0102] FIG. 5 represents an example of spatial coverage of a sonar type detection system used in an anti-submarine warfare device, obtained by using a prior art detection probability display technique. This representation is generated from a matrix of detection probabilities of the detection system as a function of the distance from the antenna of the detection system and the depth between the surface and the seabed, using a color coding to assess the sonar's ability to detect a threat. In the example of FIG. 5, zone 1 corresponds to a zone associated with a 100% probability of detecting a threat (an enemy platform for example) and zone 2 corresponds to a zone associated with a 0% probability of detecting a threat.
[0103] Figure 6 is a schematic representation of the theoretical target detection zones in an anti-submarine warfare device, using circles to identify the intended protection of a naval fleet against a possible threat, and Figure 7 is a schematic representation of realistic detection areas obtained using state-of-the-art spatial coverage display methods. Both figures show that the actual global detection zone is far from the desired theoretical global detection zone, which does not allow effective preventive surveillance to protect the naval fleet and can represent a danger in the presence of possible threats.
FIG. 7 represents an example of screen display of the spatial coverage of a multi-sensor sonar type detection system obtained according to an embodiment with volume rendering of the detection probabilities of the detection system. As illustrated by FIG. 7, the three-dimensional representation of the spatial coverage according to the invention makes it possible to display an overall view of the complete area in three dimensions, to cover a very large area on the scale of the naval fleet, and to highlight danger zones that are not insonified as well as non-detection gaps.
FIG. 8 represents an example of display in augmented reality mode of a multi-sensor spatial coverage obtained with volume rendering of the sonar detection probabilities according to the embodiments of the invention.
The invention also provides a device for determining spatial coverage of a multi-sensor detection system comprising a plurality of sensors, said plurality of sensors being carried by supporting structures moving in a given geographical area, at least some of the supporting structures being mobile, characterized in that the device is configured for:
- determining a set of parameters representing said geographical area and comprising environmental data, data relating to said load-bearing structures and data relating to said sensors;
- iteratively calculating the probabilities of detection of said plurality of sensors in the geographical area from at least some of said parameters;
- determining a three-dimensional representation of the spatial coverage of the detection system for the plurality of sensors;
- determining 3D shadow areas of said geographical area in which said probability of detection is less than at least a given threshold;
- selecting the shadow zones located in at least one predefined security zone of the system.
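The shadow-zone determination and selection steps can be sketched as boolean operations on a gridded probability volume; the grid shape, the single threshold and the box-shaped security-zone mask below are illustrative assumptions.

```python
import numpy as np

def select_shadow_zones(prob, threshold, security_mask):
    """Sketch of the last two steps: 3D shadow areas are the cells of
    the geographical area where the detection probability is below the
    given threshold; only those inside the predefined security zone
    (a boolean mask of the same shape) are kept."""
    shadow = prob < threshold            # 3D shadow areas
    return shadow & security_mask        # restrict to the security zone

rng = np.random.default_rng(1)
prob = rng.random((8, 8, 8))             # illustrative probability grid
security = np.zeros((8, 8, 8), dtype=bool)
security[2:6, 2:6, 2:6] = True           # hypothetical security area
selected = select_shadow_zones(prob, threshold=0.3, security_mask=security)
```

With several thresholds, the same operation would simply be repeated per threshold to grade the severity of the shadow zones.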
[0107] The invention further provides a computer program product for determining spatial coverage of a multi-sensor detection system comprising a plurality of sensors, said plurality of sensors being carried by supporting structures moving in a given geographical area, at least some of the supporting structures being mobile, characterized in that the computer program product comprises computer program code instructions which, when executed by one or more processors, cause the processor or processors to:
- determining a set of parameters representing said geographical area and comprising environmental data, data relating to said load-bearing structures and data relating to said sensors;
- iteratively calculating the probabilities of detection of said plurality of sensors in the geographical area from at least some of said parameters;
- determining a three-dimensional representation of the spatial coverage of the detection system for the plurality of sensors;
- determining 3D shadow areas of said geographical area in which said probability of detection is less than at least a given threshold;
- selecting the shadow zones located in at least one predefined security zone of the system.
[0108] FIG. 10 represents the device 4000 for determining a three-dimensional representation of the spatial coverage of a detection system (for example of the sonar type) evolving in a geographical area (a sea scene for example) from the data set comprising the detection probabilities of the detection system. The data set can be saved, for example, in a memory 430 or in a mass storage device 420. The device 4000 can be any type of computing device or system. The device 4000 can comprise at least one processing unit 4010 configured to determine a 3D rendering of the detection probabilities from at least some of the probability data, the position data of the calculation zone and the dimension data. In one embodiment, the 3D (three-dimensional) rendering of the detection probabilities can further be determined from the visualization data (position and orientation data).
The device 4000 may further include a memory 430, a database 420 forming part of a mass storage memory device, an input/output (I/O) interface 470, and a Man-Machine Interface 410 to receive inputs from or return outputs to a detection system operator. The interface 410 can be used, for example, to configure or set various parameters or functions used by the representation determination method according to certain embodiments of the invention, such as the configuration of the volume rendering of the detection probabilities. External resources may include, but are not limited to, servers, databases, mass storage devices, edge devices, and cloud network services. The memory 430 may include non-volatile memory, static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, cache memory, or any other device capable of storing information. Mass storage device 420 may include data storage devices such as a hard disk, optical disk, tape drive, volatile or non-volatile solid state circuit, or any other device capable of storing information. A database may reside on mass storage device 420.
[0111] The processing unit 4010 can operate under the control of an operating system 440 which resides in the memory 430. The operating system 440 can manage the computing resources in such a way that the computer program code, integrated in the form of one or more software applications, such as the application 450 which resides in the memory 430, can have instructions executed by the processing unit 4010. The device 4000 can comprise a graphics processing unit 4030 implemented on a graphics card, on a motherboard, or in a central processing unit. The graphics processing unit may generate a display of the 3D rendering on a display device.
[0112] In general, the routines executed to implement the embodiments of the invention, whether they are implemented within the framework of an operating system or of a specific application, component, program, object, module, or sequence of instructions, or even a subset thereof, may be referred to as 'computer program code' or simply 'program code'. Program code typically includes computer-readable instructions which reside at various times in various memory and storage devices in a computer and which, when read and executed by one or more processors in a computer, cause the computer to perform the operations necessary to execute the operations and/or elements specific to the various aspects of the embodiments of the invention. The computer-readable instructions of a program for carrying out the operations of the embodiments of the invention can be, for example, assembly language, or else source code or object code written in combination with one or several programming languages.
[0113] The invention is not limited to the embodiments described above by way of non-limiting example. It encompasses all variant embodiments which may be envisaged by those skilled in the art.
CLAIMS
1. Method for determining spatial coverage of a multi-sensor detection system comprising a plurality of sensors, said plurality of sensors being carried by supporting structures moving in a given geographical area, at least some of the supporting structures being mobile, the method being characterized in that it comprises the steps of:
- determining (201) a set of parameters representing said geographical area and comprising environmental data, data relating to said load-bearing structures and data relating to said sensors;
- calculating (203) iteratively the probabilities of detection of said plurality of sensors in the geographical area from at least some of said parameters;
- determining (205) a three-dimensional representation of the spatial coverage of the sensing system for the plurality of sensors;
- determining (207) 3D shadow areas of said geographical area in which said probability of detection is less than at least a given threshold;
- selecting (209) the shadow zones located in at least one predefined security zone of the system.
2. Method according to claim 1, characterized in that it further comprises the determination of an intersection of the selected shadow areas forming an access channel to security areas.
3. Method according to claim 1, characterized in that said detection system is a sonar type detection system, said environment being a maritime environment, said environmental data comprising data variable in space and in time and representing a profile of the seabed in a defined geographical area, a temperature profile of the volume of water, information defining the state of the sea, and information relating to the noise of the sea, said data relating to load-bearing structures evolving in said maritime domain defining one or more mobile platforms and/or one or more bottom objects.
4. Method according to any one of the preceding claims, characterized in that said calculation (203) is carried out periodically according to a chosen period or continuously.
5. Method according to any one of the preceding claims, characterized in that the spatial coverage of the detection system is represented according to a given viewing position and orientation, the step (205) of determining a three-dimensional representation of the spatial coverage of the detection system comprising, for each sensor of said detection system, the steps consisting in:
- receiving a set of data comprising a probability of detection of said sensor in said geographical zone, the probability of detection being determined in a calculation zone associated with said sensor and included in said geographical zone, the data of said set comprising position data of the calculation zone and dimension data of the calculation zone,
- determining a 3D rendering of said probability of detection of said sensor from at least some of the probability data, the position data of the calculation zone and the dimension data.
6. Method according to claim 5, characterized in that it further comprises a step consisting in determining (303) a main data structure having at least three dimensions from said probability data.
7. Method according to one of claims 5 and 6, characterized in that it further comprises a step consisting in determining (305) a depth rendering of the volume encompassing the calculation zone from the position data of the calculation zone, the dimension data, and the viewing position and orientation.
8. Method according to claims 6 and 7, characterized in that said 3D rendering is volumetric and in that the method further comprises a step consisting in determining (307) a volume rendering of the probability of detection according to at least one function in a given color space from said main data structure and said depth rendering.
9. Method according to claim 8, characterized in that said step of determining a volume rendering (307) comprises the steps of:
- determining a grayscale volume rendering of the probability of detection from said 3D data structure and said depth rendering;
- determining a volume rendering in colors of the probability of detection from the volume rendering in gray levels.
10. Method according to one of the preceding claims, characterized in that the step of determining (205) a three-dimensional representation of the spatial coverage of the detection system comprises a step of generating the display of said three-dimensional representation of the spatial coverage, of the shadow zones inside the security zones and of the access channels to the security zones, on a screen or in augmented reality mode or in virtual reality mode.
11. Device for determining spatial coverage of a multi-sensor detection system comprising a plurality of sensors, said plurality of sensors being carried by supporting structures moving in a given geographical area, at least some of the supporting structures being mobile, characterized in that the device is configured for:
- determining a set of parameters representing said geographical area and comprising environmental data, data relating to said load-bearing structures and data relating to said sensors;
- iteratively calculating the probabilities of detection of said plurality of sensors in the geographical area from at least some of said parameters;
- determining a three-dimensional representation of the spatial coverage of the detection system for the plurality of sensors;
- determining 3D shadow areas of said geographical area in which said probability of detection is less than at least a given threshold;
- selecting the shadow zones located in at least one predefined security zone of the system.
12. Computer program product for determining spatial coverage of a multi-sensor detection system comprising a plurality of sensors, said plurality of sensors being carried by supporting structures moving in a given geographical area, at least some of the supporting structures being mobile, characterized in that the computer program product comprises computer program code instructions which, when executed by one or more processors, cause the processor or processors to:
- determining a set of parameters representing said geographical area and comprising environmental data, data relating to said load-bearing structures and data relating to said sensors;
- iteratively calculating the probabilities of detection of said plurality of sensors in the geographical area from at least some of said parameters;
- determining a three-dimensional representation of the spatial coverage of the detection system for the plurality of sensors;
- determining 3D shadow areas of said geographical area in which said probability of detection is less than at least a given threshold;
- selecting the shadow zones located in at least one predefined security zone of the system.