Abstract: The adaptive iterative method for passive target ranging includes a sampler unit (204) that processes data obtained from IMU sensors. On receiving the sampled data, a running mean and variance are calculated by a running mean and variance calculator unit (206, 208). The running mean and variance, along with the data samples, are used to calculate a feedback weight by a feedback weight calculator unit (214). A feedback calculator unit (216) calculates a feedback factor by taking the difference between the test height and the terrain height at a given iteration and multiplying it by the feedback weight. At the end of each iteration, the feedback factor is compared with a predefined threshold. If the feedback factor is greater than the predefined threshold, the feedback is provided to a test angle calculator (202); otherwise, the adaptive iterative method is said to have converged and the range of the target is calculated.
DESC:TECHNICAL FIELD
[0001] The present invention relates generally to the estimation of target range during surveillance. The invention, more particularly, relates to systems and methods for estimation of passive target range using photogrammetric techniques.
BACKGROUND
[0002] Surveillance or intelligence gathering is the first and one of the most significant steps in any military operation. In most scenarios, the surveillance equipment is deployed in a high altitude ground station or an airborne platform searching for any potential targets on the ground. In such scenarios, estimating target range is crucial for providing complete situational awareness and for deploying appropriate countermeasures, if required.
[0003] Not only in surveillance but even in a combat mission, the attacking platform must know the target's precise location for the attack to be successful. Hence, to estimate target ranges, probing platforms are generally equipped with RADARs and Laser Range Finders (LRFs). However, being active equipment, these are not preferred in stealth missions as they may reveal the presence of the probing platform. Hence, in stealth aircraft only passive sensors like Forward Looking Infrared (FLIR) cameras can be used to estimate target location. However, the images from a FLIR camera cannot, on their own, provide information about the range of targets. Therefore, there is a requirement to address this problem.
[0004] Many conventional solutions exist. For example, one conventional solution, proposed in US 3961851 titled "Passive stereovision range finder", discloses a passive mobile range finder and stereo viewer having at least two image-converting television cameras, each mounted apart and controlled remotely, supported with synchronous drives for azimuth and elevation control. In this method, three cameras are mounted on three mobile stations for estimating range. Any two cameras are selected to function simultaneously based on the baseline with respect to a particular object. This method is suitable for ranges up to 40 km with an error of +/- 110 meters.
[0005] Another conventional solution, proposed in US4954837 titled "Terrain aided passive range estimation", discloses a method where the range of the target is estimated passively by using stored digital terrain data as a parameter in an extended Kalman filter. The system accurately locates ground-based targets using platform-mounted passive sensors. The Kalman filter algorithm fuses angular target measurements from available sensors with stored digital terrain data to obtain recursive least-square-error estimates of the target location. An iterative algorithm calculates the slant range to the intersection of the target's line-of-sight vector with the digital terrain database. This calculated slant range is used as an input to the Kalman filter to complement the measured elevation and azimuth inputs. The Kalman filter uses the calculated range measurement to update the target location estimate as a function of terrain slope. The system arrives at a rapid solution by using the stored digital terrain data to provide estimates of range.
[0006] Another conventional solution, proposed in US5187485 titled "Passive ranging through global positioning system", discloses a method where the range of a target is obtained passively from GPS signals scattered by the target. In this method, the position of the observing unit is determined by GPS. The main idea of this work is that if the delay time of the signal reflected from the target is measured, the position of the target can be determined. This method relates to a bi-static radar system in which GPS satellites are used as radiation sources.
[0007] Another conventional solution, proposed in US5479360 titled "Target passive ranging without an ownship maneuver", discloses a method for estimating parameters of a target with respect to a platform, which includes assigning predetermined initial values to these parameters, wherein the target parameters are part of models having respective model probabilities. The models may include a Kalman filter and are updated in response to measured parameters of the target. After a predetermined number of updates, the model having the highest updated model probability may be selected as the winning model. The updated values of the target parameters in the winning model at the time of winner selection are used as the estimated values of the parameters sought to be estimated. The parameters to be estimated may include the range and velocity of the target, and the measured parameters include azimuth and elevation.
[0008] Another conventional solution, proposed in US5642299 titled "Electro-optical range finding and speed detection system", discloses a passive optical speed and distance measuring system that includes a pair of camera lenses positioned along a common baseline at a predetermined distance and controlled by an operator to capture images of a target at different times. A video signal processor determines the location of the offset positions and calculates the range to the target by solving the trigonometry of the triangle formed by the two camera lenses and the target. Once the range to the target is known at two different times, the speed of the target is calculated.
[0009] Another conventional solution, proposed in WO2008/075335 titled "Airborne photogrammetric imaging system and method", discloses a photogrammetric imaging system and method that includes a camera array of two or more cameras directed coplanarly at different angles for angular separation. A forward-motion-correction tiltable support, to which the camera array is coupled, is adapted to be tilted at a predetermined time and angular velocity to compensate for blur caused by the forward motion of the platform on which the system is mounted. A roll mechanism for rolling the camera array allows a sweeping motion, and a lateral-motion-correction optical mechanism compensates each camera for the lateral roll.
[0010] Another conventional solution, proposed in US7839490B2 titled "Single-aperture passive rangefinder and method of determining a range", discloses a method of determining range with a single-aperture passive range finder. The passive range finder consists of an imaging system configured to form a first image that includes a point of interest at a first position and a second image that includes the point of interest at a second position. The system also consists of a processor associated with the imaging system and configured to acquire and store the first and second images and to determine the range to the point of interest based on the separation between the first position and the second position and the position of the point of interest relative to the virtual axes of the imaging system at the first position and at the second position.
[0011] However, none of the conventionally available solutions provides an accurate estimation of the target range. Thus, there is a need for a system and method for accurate estimation of target range using photogrammetric techniques.
SUMMARY
[0012] This summary is provided to introduce an adaptive iterative method and system for passive target ranging. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
[0013] For example, various embodiments herein may include one or more systems that integrate an adaptive iterative method having adaptive intervals between iterations with noise mitigation for passive target ranging. The present invention is an adaptive iterative method and system for finding the range of single and multiple targets within the field of view of a single camera by passive means. A single image from the single camera and the information about the terrain are combined to estimate the target range using a photogrammetric technique.
[0014] The adaptive iterative method to find the range of the target disclosed in the present invention includes the steps of calculating, by a test angle calculator unit, a test angle (θi); and selecting, by a sampler unit, a plurality of data obtained from one or more inertial measurement unit (IMU) sensors. The sampler further processes a set of data to obtain the height (hi) of a camera and the depression angle (θdi) of a line of sight. The sampler data is sent to a running mean and variance calculator unit.
[0015] Eliminating, by the running mean and variance calculator unit, the noise present in the sampler data received from the sampler unit, and sending said data further to a feedback weight calculator unit.
[0016] Calculating, by the feedback weight calculator unit, an adaptive feedback weight using the received sampler data and the received running mean and variance calculator data. The calculated feedback weight is then sent further to a feedback calculator unit.
[0017] The method disclosed in the present invention further includes calculating, by a test height computation unit, a test height (ht) at each iteration, and extracting, by a terrain height extraction unit, a terrain height (hd) from digital elevation model (DEM) data. The test height computation data and the terrain height extraction data are sent to the feedback calculator unit. Further, the feedback calculator unit calculates a feedback factor using the test height (ht) value, the terrain height (hd) value, and the adaptive feedback weight. The said feedback factor is calculated by first taking the difference between the test height value and the terrain height value at the given iteration, and then multiplying the difference by the feedback weight.
[0018] The method disclosed in the present invention further includes comparing, by a comparator unit, the value of the feedback factor with a predefined threshold value. If the value of the feedback factor is more than the predefined threshold value, the feedback is provided to the next iteration; if the value of the feedback factor is less than the predefined threshold value, the adaptive iterative method is said to have converged and the range of the target is calculated.
[0019] In an embodiment of the present invention, the method further comprises updating, by the test angle calculator unit, the test angle (θi) at each iteration using feedback from a previous iteration. The value of the test angle (θi) is calculated using the feedback obtained from the comparator at each iteration.
[0020] In an embodiment of the present invention, the feedback factor is determined by first calculating the difference between the test height (ht) and the terrain height (hd) values at the given iteration, and then multiplying the difference by the adaptive feedback weight.
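As an illustrative sketch (not the claimed implementation), the overall loop summarized above can be written in Python. The spherical-earth geometry, the gain and threshold values, and the function names below are assumptions for a noiseless, flat-terrain case, so the noise-weighting step is omitted:

```python
import math

def estimate_range(sample_imu, terrain_height, gain=1e-7, threshold=0.01,
                   earth_radius=6.371e6, max_iter=1000):
    """Sweep a test angle along the line of sight until it meets the terrain."""
    theta = 0.0      # test angle at the earth's center, swept towards the target
    feedback = 0.0
    for _ in range(max_iter):
        theta += gain * feedback                  # test angle update
        h_cam, dep = sample_imu()                 # camera height, depression angle
        # height of the test point above sea level (spherical-earth triangle)
        h_t = ((earth_radius + h_cam) * math.cos(dep)
               / math.cos(dep - theta) - earth_radius)
        h_d = terrain_height(theta)               # terrain height from a DEM
        feedback = h_t - h_d                      # unweighted feedback factor
        if abs(feedback) < threshold:             # converged: LOS meets terrain
            return (earth_radius + h_t) * math.sin(theta) / math.cos(dep)
    return None                                   # did not converge

# Camera 1000 m above flat sea-level terrain, 45 degree depression angle:
# the slant range should be near the flat-earth value 1000/sin(45 deg) ~ 1414 m.
r = estimate_range(lambda: (1000.0, math.radians(45.0)), lambda th: 0.0)
assert r is not None and 1390.0 < r < 1440.0
```

Large height errors produce large angular steps and small errors produce fine steps, which is the adaptive-interval behavior the method relies on.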
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0021] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and modules.
[0022] Fig.1 illustrates a schematic diagram depicting a general case scenario of target ranging, according to an embodiment of the present invention.
[0023] Fig. 2 illustrates a block diagram depicting a general process flow of an adaptive iterative method, according to an exemplary implementation of the present invention.
[0024] Fig. 3 illustrates a block diagram depicting running mean and variance calculator unit, according to an embodiment of the present invention.
[0025] Fig. 4 illustrates a block diagram depicting the feedback weight calculator unit, according to an embodiment of the present invention.
[0026] Fig. 5 illustrates a block diagram depicting a feedback calculator unit which generates the adaptive feedback for a closed-loop algorithm, according to an embodiment of the present invention.
[0027] Fig. 6 illustrates a flow chart depicting a general process flow of a method for accurate estimation of the target range, according to an exemplary implementation of the present invention.
[0028] Fig. 7 illustrates an adaptive iterative method for passive target ranging, according to an exemplary implementation of the present invention.
[0029] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in a computer-readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0030] The various embodiments of the present invention disclose an adaptive iterative method and system for passive target ranging.
[0031] In the following description, for purpose of explanation, specific details are outlined in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into several systems.
[0032] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are meant to avoid obscuring the present invention.
[0033] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0034] In one of the embodiments, the present invention discloses an adaptive iterative method for passive target ranging. The estimation of the target range is performed by combining camera information and the terrain data using photogrammetric techniques.
[0035] In another embodiment, the present invention discloses an iterative method with adaptive intervals between iterations and a noise correction process within the iterative method for passive target ranging. The adaptive intervals between the iterations ensure faster convergence of the iterative method, and the noise correction process helps to mitigate the noise present in the received data from one or more inertial measurement unit sensors.
[0036] In another embodiment of the present invention, the range of single and multiple targets within the field of view of a single camera is found by passive means.
[0037] In another embodiment, the present invention uses an image of a target of interest captured with the single camera, information about the internal and external parameters of the camera, and information about the terrain under consideration to estimate the target range from the camera. The present invention is capable of estimating the target range passively while minimizing sensor noise, thereby reducing noise in the estimated range. In addition, the present invention is simple enough to be implemented in real time.
[0038] In an exemplary implementation, the present invention discloses an adaptive iterative method for passive target ranging, the method comprising the steps of calculating, by a test angle calculator unit, a test angle (θi); selecting, by a sampler unit, a plurality of data obtained from one or more IMU (inertial measurement unit) sensors; processing, by the sampler unit, a set of data to obtain the height (hi) of the camera and a depression angle (θdi) of a line of sight; eliminating, by a running mean and variance calculator unit, noise present in the received sampler data in run time; calculating, by a test height computation unit, a test height (ht) at each iteration; extracting, by a terrain height extraction unit (212), a terrain height (hd) from digital elevation model (DEM) data; calculating, by a feedback weight calculator unit, an adaptive feedback weight using the sampler data and the running mean and variance calculator data; calculating, by a feedback calculator unit, a feedback factor using the test height (ht), the terrain height (hd), and the adaptive feedback weight; comparing, by a comparator unit, the value of the feedback factor with a predefined threshold value; and estimating a range of the target based on the comparator value.
[0039] In another embodiment, the present invention discloses a system for passive target ranging. The system of the present invention includes a sampler unit configured to select a plurality of data obtained from one or more IMU (inertial measurement unit) sensors, and to process a set of data to obtain the height (hi) of the camera and a depression angle (θdi) of a line of sight. A running mean and variance calculator unit is configured to eliminate the noise present in the received sampler data in run time.
[0040] The system disclosed in the present invention further includes a test height computation unit configured to calculate a test height (ht) at each iteration, and a terrain height extraction unit configured to extract a terrain height (hd) from digital elevation model (DEM) data. The system further includes the feedback weight calculator unit, which is configured to calculate an adaptive feedback weight using the received sampler data and the received running mean and variance calculator data. The feedback calculator unit is configured to calculate a feedback factor using the test height (ht), the terrain height (hd), and the adaptive feedback weight. The data obtained from the feedback calculator is sent to a comparator unit. The comparator unit is configured to compare the received value of the feedback factor with a predefined threshold value. During the comparison, if the value of the feedback factor is less than the predefined threshold value, the range of the target is estimated. If the value of the feedback factor is more than the predefined threshold value, the feedback is provided to the test angle calculator unit. The test angle calculator is configured to update the test angle (θi) at each iteration using feedback from the previous iteration.
[0041] In another embodiment, the expression "image" in the present invention refers to digital data or a digital representation of analog electrical signals coming via an analog-to-digital converter from each detector or sensor, particularly a visible sensor, wherein the outputs of individual detectors are multiplexed to the Read-Out Integrated Circuit and represent the brightness and color level measured by the detector, varying from image source to image source with a fixed range depending on the sensor.
[0042] In another embodiment, the expression “camera” in the present invention refers to any digital device consisting of hardware and/or software capable of generating an image of a real-world scene.
[0043] In another embodiment, the expression "target" in the present invention refers to any real-world object of interest being captured by the camera. The expression "internal and external parameters of the camera" refers to the set of parameters that define the mathematical model of the camera and the 3-dimensional position and orientation of the camera in the real world.
[0044] In another embodiment, the expression "information about terrain" in the present invention refers to the height of terrain (i.e. earth surface) above the sea level which is obtained from the digital elevation model (DEM) of the terrain under consideration.
[0045] In another embodiment, the present invention discloses a calculation of the test angle at each iteration using feedback from the previous iteration. The feedback at each iteration is the feedback obtained from the comparator. After calculating the test angle, the test angle calculator updates the test angle and sends the updated test angle further to the test height computation unit and the terrain height extraction unit. Updating the test angle at each iteration is based on a feedback factor obtained from the feedback calculator. The feedback factor controls the effect of the feedback on the test angle for the next iteration. The feedback factor is obtained by calculating the difference between the test height value and the terrain height value at the given iteration, and then multiplying the difference by the feedback weight.
[0046] In another exemplary implementation, the present invention discloses a method for computing running mean and variance of sampled data to model the sensor's data and minimize the effect of sensor noise.
[0048] Fig. 1 illustrates a schematic diagram depicting a general case scenario of target ranging, according to an embodiment of the present invention.
[0049] Fig. 1 shows a probing platform (102), which can be any kind of manned or unmanned aerial platform such as an aircraft, UAV, balloon, etc., or a ground station at a higher altitude. In the present invention, an ownship, i.e. an airborne platform flying over the earth's surface, is considered as the probing platform. A camera (point A) installed on the ownship (102) is oriented downwards to capture an image of a target (106) at point B, whose range is to be estimated. Using the internal and external parameters of the camera, the line of sight (line ACB) from the camera is calculated. After estimating the line of sight, the intersection point of this line of sight with the earth's surface (point B) is estimated using an adaptive iterative method (discussed later). Once the intersection point is known, the range is obtained by calculating the distance r between point A and point B.
[0050] In Fig. 1, the range of the target (106) is estimated using the height of the camera (hi) above sea level, the radius of the earth (Re), and the depression angle (θdi) of the line of sight (line ACB). The height (hi) of the camera, the radius of the earth (Re), and the depression angle (θdi) of the line of sight (LOS) are combined to estimate the distance r between point A and the target (106) at point B.
[0051] The dotted line shown in Fig. 1 runs from the earth's center to point C. At each iteration, a variable step in the test angle θi defines a point C on the line of sight, and the corresponding test height ht, i.e. the distance from sea level to point C, is obtained by geometrical formulation.
The terrain information, i.e. the height of the terrain (hd) above sea level, is also obtained, where the value of the height of the terrain (hd) comes from a digital elevation model (DEM). Given the test height (ht) and the terrain height (hd), an error is computed by subtracting the value of the terrain height (hd) from the value of the test height (ht) to locate the target. When the value of this error becomes zero while moving from point A towards point B, the LOS is assumed to intersect the terrain, and the location of the target to be estimated is determined. Once the intersection point is known, the range is computed by calculating the distance r between point A and point B.
[0052] Fig. 2 illustrates a block diagram (200) depicting a general process flow of an adaptive iterative method, according to an exemplary implementation of the present invention.
[0053] In Fig. 2, a test angle calculator unit (202) calculates a test angle at each iteration using the feedback obtained from the previous iteration, i.e. from a comparator unit (218), and a user-defined parameter, according to Equation 1:
θi = θi-1 + G · fi-1 (1)
where θi is the test angle, fi-1 is the feedback from the comparator unit (218), and G is the user-defined gain parameter. The higher the value of G, the greater the effect of the feedback, and vice-versa. After calculating the test angle based on the feedback obtained from the comparator unit (218), the test angle calculator (202) updates the test angle (θi) and sends the updated value to a test height computation unit (210) and a terrain height extraction unit (212). The calculation of the test angle (θi) at each iteration is performed using the feedback from the previous iteration, obtained from the comparator (218).
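A minimal sketch of this update (the function name and the numeric gain values are illustrative assumptions, not part of the invention):

```python
import math

def update_test_angle(theta_prev, feedback_prev, gain):
    # theta_i = theta_(i-1) + G * f_(i-1): the gain G scales how strongly
    # the previous iteration's feedback moves the test angle.
    return theta_prev + gain * feedback_prev

# A ten-times-higher gain gives a ten-times-larger correction
# from the same feedback value.
low = update_test_angle(0.01, 250.0, 1e-8) - 0.01
high = update_test_angle(0.01, 250.0, 1e-7) - 0.01
assert math.isclose(high, 10.0 * low, rel_tol=1e-6)
```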
Upon receiving the updated test angle, a sampler unit (204) selects a plurality of data obtained from one or more inertial measurement unit (IMU) sensors. After selecting, the sampler (204) processes the set of data to obtain the height of the camera above sea level and the depression angle of the line of sight. The sampler data is sent further to a running mean and variance calculator unit (206, 208) and a feedback weight calculator unit (214).
[0054] The running mean and variance calculator unit (206, 208) eliminates the noise present in the received set of data obtained from the sampler unit (204). The running mean and variance calculator unit (206, 208) calculates the mean and variance values from the given Equations (2-5):
μh,i = ((i - 1) · μh,i-1 + hi) / i (2)
σ²h,i = ((i - 1) · σ²h,i-1 + (hi - μh,i)²) / i (3)
μθ,i = ((i - 1) · μθ,i-1 + θdi) / i (4)
σ²θ,i = ((i - 1) · σ²θ,i-1 + (θdi - μθ,i)²) / i (5)
[0055] The running means (μh,i, μθ,i) and variances (σ²h,i, σ²θ,i) obtained from the given Equations (2-5) indicate, respectively, the noiseless values of the data selected by the sampler and the deviation of the selected data from these noiseless values at each iteration. That is, the mean is the actual value of the sampler data and the variance is the amount of noise present in the sampler data. In each iteration, the means and variances are updated to model the values of the sensor data in a better way.
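A sketch of a running mean and variance update for one sensor channel (an assumed incremental form; the class and variable names are illustrative):

```python
class RunningStats:
    """Incremental mean/variance of one noisy sensor channel."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.var = 0.0

    def update(self, x):
        # Running mean:     mu_i  = ((i - 1) * mu_(i-1) + x_i) / i
        # Running variance: s2_i  = ((i - 1) * s2_(i-1) + (x_i - mu_i)^2) / i
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.var += ((x - self.mean) ** 2 - self.var) / self.n
        return self.mean, self.var

# A constant input has zero variance: the channel is noiseless.
rs = RunningStats()
for v in (10.0, 10.0, 10.0):
    m, s2 = rs.update(v)
assert m == 10.0 and s2 == 0.0
```

One such accumulator would be kept per sampled quantity (camera height and depression angle), matching the two mean/variance pairs used by the feedback weight calculator.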
[0056] The test height computation unit (210) disclosed in the present invention calculates a test height (ht) at each iteration. The test height computation unit (210) computes the test height (ht) using Equation 6:
ht = (Re + hi) · cos(θdi) / cos(θdi - θi) - Re (6)
where (θi) is the updated value of the test angle obtained from the test angle calculator (202).
Further, the terrain height extraction unit (212) extracts the terrain height (hd) from the digital elevation model (DEM) data at each iteration. The data obtained from the test height computation unit (210) and the terrain height extraction unit (212) are sent to a feedback calculator unit (216).
A feedback weight calculator block (214) of the present invention calculates an adaptive feedback weight using the received sampler data and the received running mean and variance calculator data. The adaptive feedback weight is calculated by using Equation 7:
wi = exp(-((hi - μh,i)² / σ²h,i + (θdi - μθ,i)² / σ²θ,i)) (7)
The calculated feedback weight data is sent further to the feedback calculator unit (216). The feedback calculator unit (216) now has the adaptive feedback weight value obtained from the feedback weight calculator unit (214), the test height (ht) value obtained from the test height computation unit (210), and the terrain height (hd) value obtained from the terrain height extraction unit (212). The feedback calculator unit (216) calculates, from the received data, a feedback factor which drives the test angle calculator unit (202). The feedback factor is calculated using the given Equation 8:
fi = wi · (ht - hd) (8)
[0057] The feedback factor is the difference between the outputs of the test height computation unit (210) and the terrain height extraction unit (212), multiplied by the feedback weight. A larger difference between the test height (ht) and the terrain height (hd) generates larger feedback, which in turn results in a larger interval between the current and next iterations, whereas a smaller difference results in a smaller interval. Thus, when the test point is far away from the target (106) at point B, the interval is larger, resulting in coarser and faster steps, while as the test point approaches the target, the interval size goes on decreasing, resulting in slower and finer steps.
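This coarse-to-fine behavior can be illustrated numerically (the geometry and all values below are assumptions: a noiseless camera 1000 m above flat sea-level terrain):

```python
import math

earth_radius = 6.371e6                     # mean earth radius in meters
h_cam, dep, gain = 1000.0, math.radians(45.0), 1e-7

theta, steps = 0.0, []
for _ in range(10):
    # test height of point C at the current test angle (spherical-earth triangle)
    h_t = ((earth_radius + h_cam) * math.cos(dep)
           / math.cos(dep - theta) - earth_radius)
    step = gain * (h_t - 0.0)              # flat terrain at sea level (h_d = 0)
    steps.append(step)
    theta += step

# The angular step shrinks as the test point approaches the terrain
# intersection: coarse steps far away, fine steps near the target.
assert all(steps[i] > steps[i + 1] > 0.0 for i in range(len(steps) - 1))
```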
[0058] The comparator unit (218) of the present invention, at the end of each iteration, compares the value of the feedback factor with a predefined threshold value. During comparison by the comparator unit (218), if the feedback factor value is less than the predefined threshold value, the test point is assumed to be the target point. With this target point, the range of the target, i.e. the distance r from point A to point B, is calculated using the given Equation 9:
r = (Re + ht) · sin(θi) / cos(θdi) (9)
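A sketch of the final range computation, assuming a sine-rule formulation over the spherical-earth triangle of Fig. 1 (the function name and the exact formula are assumptions):

```python
import math

def slant_range(h_t, theta_i, theta_d, earth_radius=6.371e6):
    """Distance r from the camera (point A) to the target (point B).

    Assumed sine rule on the triangle (earth center, A, B):
    r = (Re + ht) * sin(theta_i) / cos(theta_di).
    """
    return (earth_radius + h_t) * math.sin(theta_i) / math.cos(theta_d)

# Camera 1000 m up, 45 deg depression, sea-level terrain: the converged
# test angle is roughly 1.57e-4 rad, so r is close to the flat-earth
# value 1000 / sin(45 deg), about 1414 m.
r = slant_range(0.0, 1.57e-4, math.radians(45.0))
assert 1400.0 < r < 1430.0
```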
[0059] Fig. 3 illustrates a block diagram (300) depicting running mean and variance calculator unit (206, 208), according to an embodiment of the present invention.
[0060] In Fig. 3, Equations (2-5) are used to calculate the running mean and variance:
μh,i = ((i - 1) · μh,i-1 + hi) / i (2)
σ²h,i = ((i - 1) · σ²h,i-1 + (hi - μh,i)²) / i (3)
μθ,i = ((i - 1) · μθ,i-1 + θdi) / i (4)
σ²θ,i = ((i - 1) · σ²θ,i-1 + (θdi - μθ,i)²) / i (5)
The running means (μh,i, μθ,i) and variances (σ²h,i, σ²θ,i) obtained from the running mean and variance calculator unit (206, 208) indicate, respectively, the noiseless values of the data processed by the sampler (204) and the deviation of the processed data from these noiseless values at each iteration. In each iteration, the means and variances are updated to model the values of the sensor data in a better way.
[0061] The test height computation unit (210) of the present invention computes the test height (ht) at each iteration using Equation 6:
ht = (Re + hi) · cos(θdi) / cos(θdi - θi) - Re (6)
where Re is the radius of the Earth. This equation is derived from the trigonometric properties of the triangles shown in Fig. 1 (100). The terrain height extraction unit (212) extracts the terrain height (hd) at each iteration from the data obtained from the DEM.
[0062] The feedback weight calculator (214) computes a weight at each iteration. This weight is a measure of the noise present in the set of data processed by the sampler (204) at each iteration. The weight is computed using the given Equation 7:
(7)
[0063] Fig. 4 illustrates a block diagram (400) depicting the feedback weight calculator (214), according to an embodiment of the present invention.
[0064] Fig. 4 shows the feedback weight calculator (214), where (402) and (404) are inverter blocks performing a mathematical inverse operation, (406) and (408) are blocks performing a square operation, and (410) is a block whose output is the exponential of its input. Assuming that the means and variances obtained from the running mean and variance calculator (206, 208) model the sensor data correctly, the feedback weight measures how well the data sampled in a given iteration fits this model. When the data sampled in a given iteration is closer to the running mean values, the feedback weight is higher, indicating that the sampled data is less likely to be corrupted by noise and should therefore contribute more by making the feedback stronger. Conversely, when the data processed in a given iteration is farther from the running mean values, the feedback weight is lower, indicating that the sampled data is more likely to be corrupted by noise and should therefore contribute less by making the feedback weaker. This adaptive weighting ensures that noisy data samples do not corrupt the feedback loop.
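Given the square, inverse, and exponential blocks described above, a Gaussian-style weight is one plausible realisation; Equation 7 is not reproduced in this text, so the exact expression below is an assumption:

```python
import math

def feedback_weight(samples, means, variances):
    """Weight near 1 when the sampled data matches the running means,
    decaying toward 0 as the variance-normalised deviation grows."""
    exponent = 0.0
    for x, mu, var in zip(samples, means, variances):
        if var > 0.0:
            # (406)/(408): square the deviation; (402)/(404): inverse of variance
            exponent += (x - mu) ** 2 / var
    # (410): exponential block, applied to the negated accumulated deviation
    return math.exp(-exponent)

# Two sensor channels (e.g. height and depression angle), with running
# means [10.0, 2.0] and running variances [0.04, 0.01]
w_clean = feedback_weight([10.0, 2.0], [10.0, 2.0], [0.04, 0.01])  # matches means
w_noisy = feedback_weight([10.4, 2.0], [10.0, 2.0], [0.04, 0.01])  # one outlier
```

Here `w_clean` is 1.0 while `w_noisy` collapses to roughly exp(-4) ≈ 0.018, so the outlier iteration barely moves the feedback loop.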
[0065] Fig. 5 illustrates a block diagram (500) depicting the feedback calculator (216) that generates the adaptive feedback for a closed-loop algorithm, according to an embodiment of the present invention.
[0066] The feedback calculator unit (216) computes the feedback factor that drives the test angle calculator unit (202). This feedback is calculated using the given Equation 8:
(8)
[0067] The feedback factor is the difference between the outputs of the test height computation unit (210) and the terrain height extraction unit (212), multiplied by the feedback weight. A larger difference between said test height and said terrain height generates a larger feedback, which in turn results in a larger interval between the current and next iteration, whereas a smaller difference results in a smaller interval. Thus, when said test point (the camera) is far away from said target (106) at point B, the interval is larger, resulting in coarser and faster steps, while as the test point approaches said target the interval size decreases, resulting in slower and finer steps.
[0068] This adaptive feedback optimises the number of iterations required and hence the convergence time of the iterative method. At the end of each iteration, the comparator (218) compares said feedback factor with the predefined threshold value. If the feedback factor is less than the predefined threshold value, the iterative method is assumed to have converged and said test point is assumed to be the target point; if the feedback factor value is more than the predefined threshold value, said feedback is sent to said test angle calculator (202) for the next iteration. The test angle calculator (202) then recalculates the test angle based on the feedback from the previous iteration. This process is repeated until the estimated target range is achieved. The feedback at each iteration is the feedback obtained from the comparator (218).
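The closed loop described in this paragraph can be sketched as follows. The update rule (a fixed gain applied to the feedback) and the toy height functions are illustrative assumptions, not the patent's equations:

```python
def converge_test_angle(theta0, test_height, terrain_height, weight,
                        gain=0.0005, threshold=1e-3, max_iters=10000):
    """Drive the test angle with weighted height-difference feedback until
    the feedback factor falls below the threshold (assumed convergence)."""
    theta = theta0
    for _ in range(max_iters):
        feedback = weight * (test_height(theta) - terrain_height(theta))
        if abs(feedback) < threshold:
            break                    # converged: test point taken as target point
        theta += gain * feedback     # large feedback -> coarse step; small -> fine step
    return theta

# Toy geometry (purely illustrative): the test height falls linearly with
# the angle and the terrain is flat at 100 m, so the fixed point is 0.9.
theta = converge_test_angle(
    theta0=0.0,
    test_height=lambda t: 1000.0 - 1000.0 * t,
    terrain_height=lambda t: 100.0,
    weight=1.0,
)
```

The step size shrinks automatically as the height difference shrinks, which is exactly the coarse-to-fine behaviour the paragraph describes.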
With this target point, the range or distance r from point A to B is calculated using the given Equation 9:
(9)
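Equation 9 is likewise not reproduced in this text; assuming a spherical-Earth triangle formed by the Earth's centre, the camera at A, and the target at B, the sine rule gives one plausible form of the slant range (an illustration, not the patent's equation):

```python
import math

RE = 6371000.0  # assumed mean Earth radius in metres

def target_range(h_i, theta_d, theta_i, re=RE):
    """Slant range r from the camera at A (height h_i, depression theta_d)
    to the target at B subtending geocentric angle theta_i, via the assumed
    sine rule: r / sin(theta_i) = (re + h_i) / cos(theta_i + theta_d)."""
    return (re + h_i) * math.sin(theta_i) / math.cos(theta_i + theta_d)

# Sanity check: a zero geocentric angle means A and B coincide, so r = 0.
r = target_range(h_i=5000.0, theta_d=0.1, theta_i=0.0)
```

For small angles this reduces to the flat-earth slant range (ground distance divided by the cosine of the depression angle), which is a useful consistency check on the assumed geometry.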
[0069] Fig. 6 illustrates a flow chart depicting a general process flow of a method for accurate estimation of the target range, according to an exemplary implementation of the present invention.
[0070] At step (602), the iteration variables are initialized. At step (604), the test angle (θi) is calculated and updated based on the feedback obtained from the comparator in the previous iteration. The updated test angle data is sent to the terrain height extraction unit (212) and the test height computation unit (210). In the next step (606), a plurality of data obtained from one or more IMU sensors is selected. Further, the sampler unit (204) processes the set of data to obtain the height (hi) of the camera and the depression angle (θdi) of the line of sight. At step (608), the noise present in the received sampler data is eliminated. In the next step (610), the test height is calculated using Equation 6. At step (612), the terrain height is extracted from the DEM data. Further, at step (614), the feedback factor is calculated from the data obtained from the test height computation unit, the terrain height extraction unit, and the feedback weight calculator. At step (616), data is sent to the comparator unit (218), and in the final step the range of the target is computed.
[0071] Fig. 7 illustrates an adaptive iterative method for passive target ranging, according to an exemplary implementation of the present invention.
[0072] Referring now to Fig. 7, which illustrates a flowchart (700) of an adaptive iterative method for estimating a range of the target from an airborne platform, according to an exemplary implementation of the present invention. The flowchart (700) of Fig. 7 is explained below with reference to Fig. 2 as described above.
[0073] At step 702, calculating, by a test angle calculator (202), a test angle (θi).
[0074] At step 704, selecting, by a sampler unit (204), a plurality of data obtained from one or more IMU (inertial measurement unit) sensors.
[0075] At step 706, processing, by the sampler unit (204), a set of data to obtain a height (hi) of a camera and a depression angle (θdi) of a line of sight.
[0076] At step 708, eliminating, by a running mean and variance calculator unit (206, 208), noise present in the received sampler data in run time.
[0077] At step 710, calculating, by a test height computation unit (210), test height (ht) at each iteration.
[0078] At step 712, extracting, by a terrain height extraction unit (212), a terrain height (hd) from a digital elevation model (DEM) data.
[0079] At step 714, calculating, by a feedback weight calculator unit (214), an adaptive feedback weight using the sampler data and the running mean and variance calculator data.
[0080] At step 716, calculating, by a feedback calculator unit (216), a feedback factor using the test height (ht), the terrain height (hd), and the adaptive feedback weight.
[0081] At step 718, comparing, by a comparator unit (218), the value of the feedback factor with a predefined threshold value.
[0082] At step 720, estimating the range of a target based on the comparator unit (218) value.
It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
REFERENCE NUMERALS:
102: AIRBORNE PLATFORM
104: CAMERA
106: TARGET
202: TEST ANGLE CALCULATOR UNIT
204: SAMPLER UNIT
206, 208: RUNNING MEAN AND VARIANCE UNIT
210: TEST HEIGHT COMPUTATION UNIT
212: TERRAIN HEIGHT EXTRACTION UNIT
214: FEEDBACK WEIGHT CALCULATOR UNIT
216: FEEDBACK CALCULATOR UNIT
218: COMPARATOR UNIT
402, 404: INVERTER BLOCKS
CLAIMS:
1. An adaptive iterative method for passive target ranging, the method comprising:
calculating, by a test angle calculator unit (202), a test angle (θi);
selecting, by a sampler unit (204), a plurality of data obtained from one or more IMU (inertial measurement unit) sensors;
processing, by the sampler unit (204), a set of data to obtain a height (hi) of a camera and a depression angle (θdi) of a line of sight;
eliminating, by a running mean and variance calculator unit (206, 208), noise present in the received sampler data in run time;
calculating, by a test height computation unit (210), a test height (ht) at each iteration;
extracting, by a terrain height extraction unit (212), a terrain height (hd) from a digital elevation model (DEM) data;
calculating, by a feedback weight calculator unit (214), an adaptive feedback weight using the sampler data and the running mean and variance calculator data;
calculating, by a feedback calculator unit (216), a feedback factor using the test height (ht), the terrain height (hd), and the adaptive feedback weight;
comparing, by a comparator unit (218), the value of the feedback factor with a predefined threshold value; and
estimating a range of the target based on the comparator (218) value.
2. The method as claimed in claim 1, wherein the calculation of the test angle (?i) at each iteration is performed using a feedback from a previous iteration.
3. The method as claimed in claim 2, wherein the feedback at each iteration is the feedback obtained from the comparator (218).
4. The method as claimed in claim 2, wherein the test angle calculator unit (202) updates the test angle (?i) after calculating the test angle at each iteration.
5. The method as claimed in claim 4, wherein the updated test angle (θi) is sent to the test height computation unit (210) and the terrain height extraction unit (212).
6. The method as claimed in claim 1, wherein the inertial measurement unit sensor data includes a height (hi) of a camera (104) from sea level, a depression angle (θdi) of the line of sight from the camera (104) to the target (106), and a radius of the Earth (Re).
7. The method as claimed in claim 1, wherein the camera (104) is installed on an airborne platform (102).
8. The method as claimed in claim 1, wherein the running mean and variance calculator unit (206, 208) eliminates noise present in the received sampler data.
9. The method as claimed in claim 1, wherein the mean is the actual value of the sampler data and the variance is the amount of noise present in the sampler data.
10. The method as claimed in claim 1, wherein the feedback factor calculation comprises:
calculating the difference between the test height (ht) and the terrain height (hd) values at the given iteration; and
multiplying the difference value with the adaptive feedback weight.
11. The method as claimed in claim 10, wherein the feedback factor controls the effect of the feedback on the test angle for the next iteration.
12. The method as claimed in claim 1, wherein during comparison by the comparator unit (218), if the feedback factor value is less than the predefined threshold value the range of the target is estimated.
13. The method as claimed in claim 1, wherein during the comparison by the comparator unit (218), if the feedback factor value is more than the predefined threshold value, the feedback is provided to the test angle calculator (202).
14. A system for passive target ranging, the system comprising:
a test angle calculator unit (202) configured to calculate a test angle (θi);
a sampler unit (204) configured to select a plurality of data obtained from one or more IMU (inertial measurement unit) sensors, and
process a set of data to obtain a height (hi) of a camera and a depression angle (θdi) of a line of sight;
a running mean and variance calculator unit (206, 208) configured to eliminate noise present in the received sampled data in run time;
a test height computation unit (210) configured to calculate a test height (ht) at each iteration;
a terrain height extraction unit (212) configured to extract a terrain height (hd) from digital elevation model (DEM) data;
a feedback weight calculator unit (214) configured to calculate an adaptive feedback weight using the sampler data and the running mean and variance calculator data;
a feedback calculator unit (216) configured to calculate a feedback factor using the test height (ht), the terrain height (hd), and the adaptive feedback weight;
a comparator unit (218) configured to:
compare the value of the feedback factor with a predefined threshold value, and
estimate the range of the target based on the comparison by the comparator unit (218).
15. The system as claimed in claim 14, wherein the test angle calculator unit (202) calculates a test angle (θi) at each iteration using feedback from the comparator unit (218).
| Section | Controller | Decision Date |
|---|---|---|
| 15 & 43(1) | Surajit Das | 2024-07-09 |