Systems And Methods For Estimating And Correcting Errors In Inertial Sensors
Abstract:
Accelerometers accumulate errors very quickly, even over very short intervals. Existing techniques use Kalman filter(s) to correct errors in sensor fusion models, which in turn use device- and sensor-specific error parameters. This requires a training session for each device with a different sensor set and may be prone to errors while the device is moving. The present disclosure provides systems and methods where distance estimation is performed by double integrating accelerometer values and is free from drifting errors. Accumulated errors in the accelerometer are modeled as linear in time (on a session basis) with zero constraints, and the method is repeated on velocity values, wherein each session starts from a stationary state and ends in a stationary state (e.g., a swipe gesture using the device along the distance to be measured).
Claims:
1. A processor-implemented method, comprising:
continually receiving, by one or more hardware processors, a first sensor data pertaining to a first sensor and a second sensor data pertaining to a second sensor, wherein the first sensor and the second sensor are associated with a computing device (202);
filtering, by the one or more hardware processors, gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data (204);
splitting, by the one or more hardware processors, the filtered sensor data into a plurality of windows, wherein each of the plurality of windows is of a length corresponding to a pre-determined time interval, and wherein each of the plurality of windows comprises one or more unique sensor data samples from the filtered sensor data (206);
identifying, by the one or more hardware processors, a first start window (ws_1) from the plurality of windows having a standard deviation (V_(ws_1)) less than a first pre-defined threshold (S_L) and a second start window (ws_2) having a standard deviation (V_(ws_2)) greater than a second pre-defined threshold (S_H) (208);
identifying, by the one or more hardware processors, at least one start window (ws_12) between the first start window (ws_1) and the second start window (ws_2), the at least one start window (ws_12) comprising a standard deviation (V_(ws_12)) that is less than the first pre-defined threshold (S_L) (210);
identifying, by the one or more hardware processors, a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) based on (i) a mean for, and (ii) a value associated with, one or more sensor data samples that are between the first start window (ws_1) and the identified at least one start window (ws_12) (212);
identifying, by the one or more hardware processors, at least one gesture being performed on the computing device for measuring a distance of an object in a three dimensional (3D) space (214);
identifying, by the one or more hardware processors, a second end window (we_2) after a first end window (we_1) is identified, wherein the identified first end window (we_1), the identified second end window (we_2), and one or more windows between the identified first end window (we_1) and the identified second end window (we_2) comprise a standard deviation that is less than the first pre-defined threshold (S_L) (216);
identifying, by the one or more hardware processors, an end sensor data sample ‘e’ from the first end window (we_1) based on a comparison of a mean and a value associated with one or more data samples from the first end window (we_1) (218);
normalizing, by the one or more hardware processors, data samples between the start sensor data sample ‘s’ and the end sensor data sample ‘e’ to obtain normalized data samples (220);
estimating, by the one or more hardware processors, (i) a first correction acceleration factor (δ_a) based on a normalized value la_1(e) associated with an end normalized data sample from the normalized data samples and the number of data samples in the normalized data samples and (ii) a second correction acceleration factor (d_i^a) based on the first correction acceleration factor (δ_a) and the number of data samples between a start normalized data sample and a current normalized data sample from the normalized data samples (222);
generating, by the one or more hardware processors, one or more velocity data samples by (i) correcting the normalized data samples using the second correction acceleration factor to obtain corrected data samples and (ii) integrating, by using a Trapezoidal integration technique, the corrected data samples (224);
estimating, by the one or more hardware processors, (i) a first correction velocity factor (δ_v) based on a value associated with an end velocity data sample from the one or more velocity data samples and the number of data samples in the one or more velocity data samples and (ii) a second correction velocity factor (d_i^v) based on the first correction velocity factor (δ_v) and the number of data samples between a start velocity data sample and a current velocity data sample from the one or more velocity data samples (226); and
estimating, by the one or more hardware processors, a measured distance pertaining to a start point and an end point of the identified at least one gesture by (i) generating a corrected set of velocity data samples based on the estimated second correction velocity factor (d_i^v) and (ii) integrating, by using the Trapezoidal integration technique, the corrected set of velocity data samples (228).
2. The method as claimed in claim 1, wherein the step of filtering gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data comprises:
estimating an initial gravity vector from the second sensor data, wherein the initial gravity vector is estimated in a coordinate system of the computing device;
computing an axis angle representation indicating a rotation of the computing device, using the first sensor data;
estimating an Euler-Rodrigues rotation matrix based on the axis angle representation;
estimating an updated gravity vector based on the Euler-Rodrigues rotation matrix, wherein the updated gravity vector is estimated in the coordinate system of the computing device, and wherein the updated gravity vector comprises gravity data; and
subtracting the gravity data from the second sensor data.
3. The method as claimed in claim 1, wherein the step of identifying the first start window (ws_1) and the second start window (ws_2) comprises:
computing a standard deviation for each window from the plurality of windows; and
performing a comparison of the standard deviation computed for each window with the first pre-defined threshold (S_L) and the second pre-defined threshold (S_H).
4. The method as claimed in claim 1, wherein the step of identifying a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) comprises:
computing the mean for the one or more sensor data samples present between the first start window (ws_1) and the at least one identified start window (ws_12); and
performing a comparison of the mean and a value associated with one or more sensor data samples between the at least one identified start window (ws_12) and the second start window (ws_2).
5. The method as claimed in claim 1, wherein the step of identifying a second end window (we_2) comprises:
identifying the first end window (we_1) after the at least one gesture is performed; and
performing a comparison of a standard deviation associated with the first end window (we_1) and the first pre-defined threshold (S_L).
6. The method as claimed in claim 1, wherein the step of identifying an end sensor data sample ‘e’ comprises:
computing a mean for sensor data samples present between the identified first end window (we_1) and the second end window (we_2); and
performing a comparison of the mean and a value associated with one or more data samples from the first end window (we_1).
7. A system (100) comprising:
a memory (102) storing instructions and one or more modules (108);
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to execute the one or more modules (108) comprising:
a data processing module that is configured to continually receive, a first sensor data pertaining to a first sensor and a second sensor data pertaining to a second sensor, wherein the first sensor and the second sensor are associated with a computing device;
an orientation module that is configured to filter gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data;
a session demarcation module that is configured to:
split the filtered sensor data into a plurality of windows, wherein each of the plurality of windows is of a length corresponding to a pre-determined time interval, and wherein each of the plurality of windows comprises one or more unique sensor data samples from the filtered sensor data;
identify a first start window (ws_1) from the plurality of windows having a standard deviation (V_(ws_1)) less than a first pre-defined threshold (S_L) and a second start window (ws_2) having a standard deviation (V_(ws_2)) greater than a second pre-defined threshold (S_H);
identify at least one start window (ws_12) between the first start window (ws_1) and the second start window (ws_2), the at least one start window (ws_12) comprising a standard deviation (V_(ws_12)) that is less than the first pre-defined threshold (S_L);
identify a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) based on (i) a mean for, and (ii) a value associated with, one or more sensor data samples that are between the first start window (ws_1) and the identified at least one start window (ws_12);
identify at least one gesture being performed on the computing device for measuring a distance of an object in a three dimensional (3D) space;
identify a second end window (we_2) such that the second end window (we_2) is identified after a first end window (we_1) is identified, wherein the first end window (we_1), the second end window (we_2), and one or more end windows between the first end window (we_1) and the second end window (we_2) comprise a standard deviation that is less than the first pre-defined threshold (S_L); and
identify an end sensor data sample ‘e’ from the first end window (we_1) based on a comparison of a mean and a value associated with one or more data samples from the first end window (we_1); and
an error estimation and correction module that is configured to:
normalize data samples between the start sensor data sample ‘s’ and the end sensor data sample ‘e’ to obtain normalized data samples;
estimate (i) a first correction acceleration factor (δ_a) based on a normalized value la_1(e) associated with an end normalized data sample from the normalized data samples and the number of data samples in the normalized data samples and (ii) a second correction acceleration factor (d_i^a) based on the first correction acceleration factor (δ_a) and the number of data samples between a start normalized data sample and a current normalized data sample from the normalized data samples;
generate one or more velocity data samples by (i) correcting the normalized data samples using the second correction acceleration factor to obtain corrected data samples and (ii) integrating, by using a Trapezoidal integration technique, the corrected data samples;
estimate (i) a first correction velocity factor (δ_v) based on a value associated with an end velocity data sample from the one or more velocity data samples and the number of data samples in the one or more velocity data samples and (ii) a second correction velocity factor (d_i^v) based on the first correction velocity factor (δ_v) and the number of data samples between a start velocity data sample and a current velocity data sample from the one or more velocity data samples; and
estimate a measured distance pertaining to a start point and an end point of the identified at least one gesture by (i) generating a corrected set of velocity data samples based on the estimated second correction velocity factor (d_i^v) and (ii) integrating, by using the Trapezoidal integration technique, the corrected set of velocity data samples.
8. The system as claimed in claim 7, wherein the gravity data is filtered from the second sensor data by:
estimating an initial gravity vector from the second sensor data, wherein the initial gravity vector is estimated in a coordinate system of the computing device;
computing an axis angle representation indicating a rotation of the computing device, using the first sensor data;
estimating an Euler-Rodrigues rotation matrix based on the axis angle representation;
estimating an updated gravity vector based on the Euler-Rodrigues rotation matrix, wherein the updated gravity vector is estimated in the coordinate system of the computing device, and wherein the updated gravity vector comprises gravity data; and
subtracting the gravity data from the second sensor data.
9. The system as claimed in claim 7, wherein the first start window (ws_1) and the second start window (ws_2) are identified by:
computing a standard deviation for each window from the plurality of windows; and
performing a comparison of the standard deviation computed for each window with the first pre-defined threshold (S_L) and the second pre-defined threshold (S_H).
10. The system as claimed in claim 7, wherein the start sensor data sample ‘s’ is identified by:
computing the mean for the one or more sensor data samples present between the first start window (ws_1) and the identified at least one start window (ws_12); and
performing a comparison of the mean and a value associated with one or more sensor data samples between the at least one identified start window (ws_12) and the second start window (ws_2).
11. The system as claimed in claim 7, wherein the second end window (we_2) is identified by:
identifying the first end window (we_1) after the at least one gesture is performed; and
performing a comparison of a standard deviation associated with the first end window (we_1) and the first pre-defined threshold (S_L).
12. The system as claimed in claim 7, wherein the end sensor data sample ‘e’ is identified by:
computing a mean for sensor data samples present between the first end window (we_1) and the second end window (we_2); and
performing a comparison of the mean and a value associated with one or more data samples from the first end window (we_1).
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEMS AND METHODS FOR ESTIMATING AND CORRECTING ERRORS IN INERTIAL SENSORS
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to error estimation and correction, and, more particularly, to systems and methods for estimating and correcting errors in inertial sensors.
BACKGROUND
There has been numerous research in which the accuracy of accelerometers has been tested in calculating displacement. Appreciable accuracy is only seen when the accelerometers used are very high grade and the motion is very precise and controlled, such as in a robotic arm. Applying such solutions to commercial grade accelerometers, which pose high inaccuracy and a spectrum of errors, is almost irrelevant, since they accumulate a huge amount of error over time. Such accelerometers are in widespread use today, in devices ranging from wrist-worn personal devices (e.g., smartwatches, fitness bands) to smartphones. Also, these solutions are not sensor-agnostic, and require many error-estimation parameters to be derived for a particular device. Acceptable results in displacement estimation where the user moves the device (e.g., a commercial smartphone) still remain a challenge. In the domain of length/distance measurement, laser range finders are the most feasible option, but they are expensive, typically do not work on very small distances, and may be prone to errors in daylight or outdoors.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for estimating and correcting errors in inertial sensors, comprising: continually receiving a first sensor data pertaining to a first sensor and a second sensor data pertaining to a second sensor, wherein the first sensor and the second sensor are associated with a computing device; filtering gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data; splitting the filtered sensor data into a plurality of windows, wherein each of the plurality of windows is of a length corresponding to a pre-determined time interval, and wherein each of the plurality of windows comprises one or more unique sensor data samples from the filtered sensor data; identifying a first start window (ws_1) having a standard deviation (V_(ws_1)) less than a first pre-defined threshold (S_L) and a second start window (ws_2) having a standard deviation (V_(ws_2)) greater than a second pre-defined threshold (S_H); identifying at least one start window (ws_12) between the first start window (ws_1) and the second start window (ws_2), the at least one start window (ws_12) comprising a standard deviation (V_(ws_12)) that is less than the first pre-defined threshold (S_L); identifying a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) based on (i) a mean for, and (ii) a value associated with, one or more sensor data samples that are between the first start window (ws_1) and the identified at least one start window (ws_12); identifying at least one gesture being performed on the computing device for measuring a distance of an object in a three dimensional (3D) space; identifying a second end window (we_2) after a first end window (we_1) is identified, wherein the identified first end window (we_1), the second end window (we_2), and one or more end windows between them comprise a standard deviation that is less than the first pre-defined threshold (S_L); identifying an end sensor data sample ‘e’ from the first end window (we_1) based on a comparison of a mean and a value associated with one or more data samples from the first end window (we_1); normalizing data samples between the start sensor data sample ‘s’ and the end sensor data sample ‘e’ to obtain normalized data samples; estimating (i) a first correction acceleration factor (δ_a) based on a normalized value la_1(e) associated with the end normalized data sample from the normalized data samples and the number of data samples in the normalized data samples and (ii) a second correction acceleration factor (d_i^a) based on the first correction acceleration factor (δ_a) and the number of data samples between a start normalized data sample and a current normalized data sample from the normalized data samples; generating one or more velocity data samples by (i) correcting the normalized data samples using the second correction acceleration factor to obtain corrected data samples and (ii) integrating, by using a Trapezoidal integration technique, the corrected data samples; estimating (i) a first correction velocity factor (δ_v) based on a value associated with an end velocity data sample from the one or more velocity data samples and the number of data samples in the one or more velocity data samples and (ii) a second correction velocity factor (d_i^v) based on the first correction velocity factor (δ_v) and the number of data samples between a start velocity data sample and a current velocity data sample from the one or more velocity data samples; and estimating a measured distance pertaining to a start point and an end point of the identified at least one gesture by (i) generating a corrected set of velocity data samples based on the estimated second correction velocity factor (d_i^v) and (ii) integrating, by using the Trapezoidal integration technique, the corrected set of velocity data samples.
In an embodiment, the step of filtering gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data may comprise: estimating an initial gravity vector from the second sensor data, wherein the initial gravity vector is estimated in a coordinate system of the computing device; computing an axis angle representation indicating a rotation of the computing device, using the first sensor data; estimating an Euler-Rodrigues rotation matrix based on the axis angle representation; estimating an updated gravity vector based on the Euler-Rodrigues rotation matrix, wherein the updated gravity vector is estimated in the coordinate system of the computing device, and wherein the updated gravity vector comprises gravity data; and subtracting the gravity data from the second sensor data.
In an embodiment, the step of identifying the first start window (ws_1) and the second start window (ws_2) may comprise: computing a standard deviation for each window from the plurality of windows; and performing a comparison of the standard deviation computed for each window with the first pre-defined threshold (S_L) and the second pre-defined threshold (S_H).
In an embodiment, the step of identifying a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) may comprise: computing the mean for the one or more sensor data samples present between the first start window (ws_1) and the identified at least one start window (ws_12); and performing a comparison of the mean and a value associated with one or more sensor data samples between the at least one identified start window (ws_12) and the second start window (ws_2).
In an embodiment, the step of identifying a second end window (we_2) comprises: identifying a first end window (we_1) after the at least one gesture is performed; and performing a comparison of a standard deviation associated with the first end window (we_1) and the first pre-defined threshold (S_L).
In an embodiment, the step of identifying an end sensor data sample ‘e’ comprises: computing a mean for sensor data samples present between the first end window (we_1) and the second end window (we_2); and performing a comparison of the mean and a value associated with one or more data samples from the first end window (we_1).
In another aspect, there is provided a system for estimating and correcting errors in inertial sensors, comprising: a memory storing instructions and one or more modules; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to execute the one or more modules comprising: a data processing module that is configured to continually receive a first sensor data pertaining to a first sensor and a second sensor data pertaining to a second sensor, wherein the first sensor and the second sensor are associated with a computing device; an orientation module that is configured to filter gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data; a session demarcation module that is configured to: split the filtered sensor data into a plurality of windows, wherein each of the plurality of windows is of a length corresponding to a pre-determined time interval, and wherein each of the plurality of windows comprises one or more unique sensor data samples from the filtered sensor data; identify a first start window (ws_1) having a standard deviation (V_(ws_1)) less than a first pre-defined threshold (S_L) and a second start window (ws_2) having a standard deviation (V_(ws_2)) greater than a second pre-defined threshold (S_H); identify at least one start window (ws_12) between the first start window (ws_1) and the second start window (ws_2), the at least one start window (ws_12) comprising a standard deviation (V_(ws_12)) that is less than the first pre-defined threshold (S_L); identify a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) based on (i) a mean for, and (ii) a value associated with, one or more sensor data samples that are between the first start window (ws_1) and the identified at least one start window (ws_12); identify at least one gesture being performed on the computing device for measuring a distance of an object in a three dimensional (3D) space; identify a second end window (we_2) such that a first end window (we_1) is identified prior to the identification of the second end window (we_2), wherein the identified first end window (we_1), the second end window (we_2), and one or more end windows between them comprise a standard deviation that is less than the first pre-defined threshold (S_L); and identify an end sensor data sample ‘e’ from the first end window (we_1) based on a comparison of a mean and a value associated with one or more data samples from the first end window (we_1); and an error estimation and correction module that is configured to: normalize data samples between the start sensor data sample ‘s’ and the end sensor data sample ‘e’ to obtain normalized data samples; estimate (i) a first correction acceleration factor (δ_a) based on a normalized value la_1(e) associated with an end normalized data sample from the normalized data samples and the number of data samples in the normalized data samples and (ii) a second correction acceleration factor (d_i^a) based on the first correction acceleration factor (δ_a) and the number of data samples between a start normalized data sample and a current normalized data sample from the normalized data samples; generate one or more velocity data samples by (i) correcting the normalized data samples using the second correction acceleration factor to obtain corrected data samples and (ii) integrating, by using a Trapezoidal Integration Technique (TII), the corrected data samples; estimate (i) a first correction velocity factor (δ_v) based on a value associated with an end velocity data sample from the one or more velocity data samples and the number of data samples in the one or more velocity data samples and (ii) a second correction velocity factor (d_i^v) based on the first correction velocity factor (δ_v) and the number of data samples between a start velocity data sample and a current velocity data sample from the one or more velocity data samples; and estimate a measured distance pertaining to a start point and an end point of the identified at least one gesture by (i) generating a corrected set of velocity data samples based on the estimated second correction velocity factor (d_i^v) and (ii) integrating, by using the TII, the corrected set of velocity data samples.
In an embodiment, the gravity data is filtered from the second sensor data by: estimating an initial gravity vector from the second sensor data, wherein the initial gravity vector is estimated in a coordinate system of the computing device; computing an axis angle representation indicating a rotation of the computing device, using the first sensor data; estimating an Euler-Rodrigues rotation matrix based on the axis angle representation; estimating an updated gravity vector based on the Euler-Rodrigues rotation matrix, wherein the updated gravity vector is estimated in the coordinate system of the computing device, and wherein the updated gravity vector comprises gravity data; and subtracting the gravity data from the second sensor data. In an embodiment, the axis angle representation is obtained from the gyroscope sensor data (or data derived from the gyroscope sensor).
In an embodiment, the first start window (ws_1) and the second start window (ws_2) are identified by: computing a standard deviation for each window from the plurality of windows; and performing a comparison of the standard deviation computed for each window with the first pre-defined threshold (S_L) and the second pre-defined threshold (S_H).
In an embodiment, the start sensor data sample ‘s’ is identified by: computing the mean for the one or more sensor data samples present between the first start window (ws_1) and the identified at least one start window (ws_12); and performing a comparison of the mean and a value associated with one or more sensor data samples between the at least one identified start window (ws_12) and the second start window (ws_2).
In an embodiment, the second end window (we_2) is identified by: identifying the first end window (we_1) after the at least one gesture is performed; and performing a comparison of a standard deviation associated with the first end window (we_1) and the first pre-defined threshold (S_L).
In an embodiment, the end sensor data sample ‘e’ is identified by: computing a mean for sensor data samples present between the identified first end window (we_1) and the identified second end window (we_2); and performing a comparison of the mean and a value associated with one or more data samples from the first end window (we_1).
In yet another aspect, there is provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause: continually receiving a first sensor data pertaining to a first sensor and a second sensor data pertaining to a second sensor, wherein the first sensor and the second sensor are associated with a computing device; filtering gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data; splitting the filtered sensor data into a plurality of windows, wherein each of the plurality of windows is of a specific length corresponding to a pre-determined time interval, and wherein each of the plurality of windows comprises one or more unique sensor data samples from the filtered sensor data; identifying a first start window (ws_1) having a standard deviation (V_(ws_1)) less than a first pre-defined threshold (S_L) and a second start window (ws_2) having a standard deviation (V_(ws_2)) greater than a second pre-defined threshold (S_H); identifying at least one start window (ws_12) between the first start window (ws_1) and the second start window (ws_2), the at least one start window (ws_12) comprising a standard deviation (V_(ws_12)) that is less than the first pre-defined threshold (S_L); identifying a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) based on (i) a mean for, and (ii) a value associated with, one or more sensor data samples that are between the first start window (ws_1) and the identified at least one start window (ws_12); identifying at least one gesture being performed on the computing device for measuring a distance of an object in a three dimensional (3D) space; identifying a second end window (we_2) such that a first end window (we_1) is identified prior to identifying the second end window (we_2), wherein the identified first end window (we_1), the identified second end window (we_2), and one or more end windows between them comprise a standard deviation that is less than the first pre-defined threshold (S_L); identifying an end sensor data sample ‘e’ from the first end window (we_1) based on a comparison of a computed mean and a value associated with one or more data samples from the first end window (we_1); normalizing data samples between the start sensor data sample ‘s’ and the end sensor data sample ‘e’ to obtain normalized data samples; estimating (i) a first correction acceleration factor (δ_a) based on a normalized value la_1(e) associated with an end sensor data sample from the normalized data samples and the number of data samples in the normalized data samples and (ii) a second correction acceleration factor (d_i^a) based on the first correction acceleration factor (δ_a) and the number of data samples between a start normalized data sample and a current normalized data sample from the normalized data samples; generating one or more velocity data samples by (i) correcting the normalized data samples using the second correction acceleration factor to obtain corrected data samples and (ii) integrating, by using a Trapezoidal Integration Technique (TII), the corrected data samples; estimating (i) a first correction velocity factor (δ_v) based on a value associated with an end velocity data sample from the one or more velocity data samples and the number of data samples in the one or more velocity data samples and (ii) a second correction velocity factor (d_i^v) based on the first correction velocity factor (δ_v) and the number of data samples between a start velocity data sample and a current velocity data sample from the one or more velocity data samples; and estimating a measured distance pertaining to a start point and an end point of the identified at least one gesture by (i) generating a corrected set of velocity data samples based on the estimated second correction velocity factor (d_i^v) and (ii) integrating, by using the TII, the corrected set of velocity data samples.
In an embodiment, the step of filtering gravity data from the second sensor data, using the first sensor, to obtain a filtered sensor data may comprise: estimating an initial gravity vector from the second sensor data, wherein the initial gravity vector is estimated in a coordinate system of the computing device; computing an axis angle representation indicating a rotation of the computing device, using the first sensor data; estimating an Euler-Rodrigues rotation matrix based on the axis angle representation; estimating an updated gravity vector based on the Euler-Rodrigues rotation matrix, wherein the updated gravity vector is estimated in the coordinate system of the computing device, and wherein the updated gravity vector comprises gravity data; and subtracting the gravity data from the second sensor data.
In an embodiment, the step of identifying the first start window (ws_1) and the second start window (ws_2) may comprise: computing a standard deviation for each window from the plurality of windows; and performing a comparison of the standard deviation computed for each window with the first pre-defined threshold (S_L) and the second pre-defined threshold (S_H).
In an embodiment, the step of identifying a start sensor data sample ‘s’ between the at least one identified start window (ws_12) and the second start window (ws_2) may comprise: computing the mean for the one or more sensor data samples present between the first start window (ws_1) and the identified at least one start window (ws_12); and performing a comparison of the mean and a value associated with one or more sensor data samples between the at least one identified start window (ws_12) and the second start window (ws_2).
In an embodiment, the step of identifying a second end window (we_2) comprises: identifying a first end window (we_1) after the at least one gesture is performed; and performing a comparison of a standard deviation associated with the first end window (we_1) and the first pre-defined threshold (S_L).
In an embodiment, the step of identifying an end sensor data sample ‘e’ comprises: computing a mean for sensor data samples present between the identified first end window (we_1) and the second end window (we_2); and performing a comparison of the mean and a value associated with one or more data samples from the first end window (we_1).
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an exemplary block diagram of a system for estimating and correcting errors in inertial sensors according to an embodiment of the present disclosure.
FIGS. 2A and 2B illustrate an exemplary flow diagram of a method for estimating and correcting errors in inertial sensors using the system of FIG. 1 according to an embodiment of the present disclosure.
FIG. 3 depicts a graphical representation of second sensor data (e.g., accelerometer data) obtained from an accelerometer according to an embodiment of the present disclosure.
FIG. 4 depicts a graphical representation of filtered sensor data (linear acceleration data) according to an embodiment of the present disclosure.
FIG. 5 depicts a graphical representation of corrected acceleration data samples according to an embodiment of the present disclosure.
FIG. 6 depicts a graphical representation of one or more velocity data samples prior to performing the error correction technique according to an embodiment of the present disclosure.
FIG. 7 depicts a graphical representation of a corrected set of velocity data samples generated after performing the error correction technique on the one or more velocity data samples of FIG. 6 according to an embodiment of the present disclosure.
FIG. 8 depicts a graphical representation of a measured distance pertaining to a start point and an end point of the identified at least one gesture according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
Commercial grade accelerometers accumulate errors very quickly, even over very short intervals (e.g., a couple of seconds), when integrated to estimate distance. Embodiments of the present disclosure provide systems and methods whereby the accumulated errors are modeled, and the acceleration and velocity are corrected independently. The model works on a session basis, from one stationary state to the next. The device is moved in a swiping gesture along the length to be measured. For such cases, the present disclosure and its associated embodiments have been shown to achieve errors of less than 3 cm for distances ranging from 10 cm to 5 m, and less than 20 cm for distances from 5 m to 20 m. Such accuracy figures have not yet been reported by conventional systems and methods in these usage scenarios. The present disclosure and its associated embodiments ensure that the proposed method is agnostic to/independent of the computing device, with no prior training or sensor-specific parameter estimation required. The proposed method provides an accurate alternative to laser range finders, can work with very small lengths, and can measure distances outdoors in bright environments (e.g., measuring land lengths).
Conventional systems and methods (e.g., existing solutions) cannot use low-grade accelerometers, for example, in mobile devices for measuring short-term lengths in 3D space. The devices where distance is directly estimated contain expensive and highly precise accelerometers with controlled motion (e.g., robotic arms). Further, existing systems and methods employ filtering tools, for example, Kalman filters, to correct errors in sensor fusion models, which in turn use device- and sensor-specific error parameters. This requires a training session for each device with a different sensor set. In contrast, the present disclosure provides a method where distance estimation by double integrating the accelerometer values has been made accurate, practically usable, and free from drifting errors, where the accelerometers used are low-accuracy commercial grade sensors currently found in most communication devices (e.g., mobile devices). In the present disclosure, accumulated errors in the accelerometer are modeled as linear in time with zero constraints, and the same methodology is applied to the velocity values. The proposed method operates on a session basis, where each session starts from a stationary state and ends in a stationary state, for example, a swipe gesture made by a user using the device along the length to be measured.
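To make the session-basis correction concrete, the following Python sketch (illustrative only; the function and variable names litcem_distance, delta_a and delta_v are assumptions, not the specification's notation) applies a zero-constrained linear correction to the acceleration, integrates it with the Trapezoidal technique to obtain velocity, repeats the correction on velocity, and integrates once more to obtain distance:

    import numpy as np

    def litcem_distance(lin_acc, fs):
        # lin_acc: 1-D array of gravity-free acceleration samples along
        #          the direction of motion for one session, from start
        #          sample 's' to end sample 'e' (both stationary).
        # fs     : sampling rate F_s in Hz.
        dt = 1.0 / fs
        n = len(lin_acc)
        idx = np.arange(n)

        # Zero constraint: the device is stationary at 'e', so any residual
        # acceleration there is accumulated error; distribute it linearly
        # over the session and subtract the per-sample correction.
        delta_a = lin_acc[-1] / (n - 1)
        acc = lin_acc - delta_a * idx

        # Trapezoidal integration of acceleration -> velocity.
        vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) * dt / 2.0)))

        # The same linear zero-constrained correction applied to velocity.
        delta_v = vel[-1] / (n - 1)
        vel = vel - delta_v * idx

        # Trapezoidal integration of velocity -> distance for the session.
        return float(np.sum((vel[1:] + vel[:-1]) * dt / 2.0))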
Referring now to the drawings, and more particularly to FIG. 1 through 8, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates an exemplary block diagram of a system for estimating and correcting errors in inertial sensors according to an embodiment of the present disclosure. The system 100 may also be referred to as an Error Estimation and Correction System (EECS) hereinafter. In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The memory 102 comprises one or more modules 108 and the database 110. The one or more processors 104 that are hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the one or more modules 108 comprised in the memory 102 include, but are not limited to, a data processing module 108A, an orientation module 108B, a session demarcation module 108C and an error estimation and correction module 108D. In an embodiment, the one or more modules 108 are implemented as at least one of a logically self-contained part of a software program, a self-contained hardware component, or a self-contained hardware component with a logically self-contained part of a software program embedded into it, which when executed perform the method described herein.
The database 110 may store information pertaining to raw sensor data obtained from accelerometer(s) and gyroscope(s) integrated within (and/or externally connected to) a computing device (e.g., a mobile communication device). Further, the database 110 may store information pertaining to pre-processing of the raw sensor data (e.g., pre-processed accelerometer data, pre-processed gyroscope data), wherein the raw sensor data comprises outliers (and/or gravity data). Furthermore, the database 110 includes information pertaining to the error modeling (or estimation) and correction techniques implemented by the system 100 to obtain corrected acceleration values, corrected velocity values, and an estimated measured distance pertaining to a gesture input obtained from a user on the mobile communication device. Moreover, the database 110 stores information pertaining to inputs fed to the system 100 and/or outputs generated by the system (e.g., sensor data such as raw acceleration data, acceleration data with gravity removed, Linear Temporal Cumulated Error Model (LiTCEM) corrected acceleration data, velocity data in which error is observed, LiTCEM-corrected velocity data, measured distance, and the like) specific to the methodology described herein.
FIGS. 2A and 2B illustrate an exemplary flow diagram of a method for estimating and correcting errors in inertial sensors using the system of FIG. 1 according to an embodiment of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 as depicted in FIG. 1, and the flow diagram of FIGS. 2A and 2B. In an embodiment of the present disclosure, at step 202, the one or more hardware processors 104 continually receive a first sensor data pertaining to a first sensor (e.g., a gyroscope) and a second sensor data pertaining to a second sensor (e.g., an accelerometer), wherein the first sensor and the second sensor are associated with a computing device (e.g., a mobile communication device, and the like). FIG. 3, with reference to FIGS. 1 through 2B, depicts a graphical representation of the second sensor data (e.g., accelerometer data) obtained from an accelerometer according to an embodiment of the present disclosure. In an embodiment of the present disclosure, raw sensor data (accelerometer data) obtained from the accelerometer includes gravity data as shown in FIG. 3. In an embodiment, the raw sensor data may also include gyroscope data.
In an embodiment of the present disclosure, at step 204, the one or more hardware processors 104 filter (or remove) the gravity data from the second pre-processed sensor data (obtained from the accelerometer), using the first sensor (e.g., the gyroscope), to obtain a filtered sensor data (linear acceleration data). In the present disclosure, the step of filtering gravity data refers to removing gravity data from the accelerometer data. In an embodiment, the expression ‘filter’ may also be referred to as ‘remove’ and may be interchangeably used hereinafter. In an embodiment, the expression ‘filtered sensor data’ may also be referred to as ‘gravity removed sensor data’ or ‘gravity free sensor data’ or ‘linear acceleration data’ and may be interchangeably used hereinafter. Prior to removing (or filtering) gravity data from the raw accelerometer sensor data, the raw sensor data may be pre-processed at a pre-defined sampling rate to obtain a first pre-processed sensor data and a second pre-processed sensor data. In an embodiment, the data processing module 108A performs step 202 and the pre-processing step to obtain the first pre-processed sensor data (e.g., pre-processed gyroscope data) and the second pre-processed sensor data (e.g., pre-processed accelerometer data). In an embodiment of the present disclosure, the accelerometer and gyroscope data (e.g., the first sensor data and the second sensor data) are continuously sampled at exactly ‘F_s’ samples per second. In case data is not available at exact timestamps, techniques like interpolation can be performed. In an embodiment, an interpolation technique calculates missing data point(s) at a required time-stamp ‘t’ by using a linear/spline relationship, which in turn uses already available data points surrounding ‘t’. Let the sample numbers be indexed as 1, 2, 3 and so on after the sensors (e.g., accelerometer and gyroscope) have started, and let the length of the session (as estimated by the session demarcation module 108C) be ‘n’ seconds. When the computing device is started, the user can be suggested to keep the computing device stable for a small interval of time, say 1 second. The x-component of the initial gravity vector, ‘G_x’, is derived from this interval as the mean of the accelerometer samples:
G_x = (Σ_(i=1)^(F_s·n) V_x(i)) / (F_s·n) (mean over the ‘n’ second stationary interval)
Similarly, the gravity vector components G_y and G_z may be derived.
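By way of a brief illustrative sketch (in Python, assuming NumPy; the function name and array layout are assumptions, not part of the specification), this initialization reduces to a per-axis mean over the stationary interval:

    import numpy as np

    def initial_gravity(acc_xyz):
        # acc_xyz: (F_s * n) x 3 array of raw accelerometer samples
        # captured while the device is held still for 'n' seconds.
        # The per-axis mean is the initial gravity vector
        # G = (G_x, G_y, G_z) in device coordinates.
        return acc_xyz.mean(axis=0)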
The step of removing (or filtering) the gravity data from the accelerometer data comprises: estimating an initial gravity vector from the second pre-processed sensor data, wherein the initial gravity vector is estimated in a coordinate system of the computing device; computing an axis angle representation indicating a rotation of the computing device, using the first pre-processed sensor data; estimating an Euler-Rodrigues rotation matrix based on the axis angle representation; estimating an updated gravity vector based on the Euler-Rodrigues rotation matrix, wherein the updated gravity vector is estimated in the coordinate system of the computing device, and wherein the updated gravity vector comprises gravity data; and subtracting the gravity data from the second pre-processed sensor data. In an embodiment, the axis angle representation is obtained from the gyroscope sensor (or gyroscope sensor data).
In other words, after the initial gravity has been estimated, the SORA method (e.g., refer ‘S. Stancin and S. Tomažic. Angle estimation of simultaneous orthogonal rotations from 3D gyroscope measurements. Sensors, 11(9):8536–8549, 2011’) may be employed in order to obtain an axis angle representation from the gyroscope, which in turn is used to estimate the Euler-Rodrigues rotation matrix (e.g., refer ‘J. S. Dai. Euler–Rodrigues formula variations, quaternion conjugation and intrinsic connections. Mechanism and Machine Theory, 92:144–152, 2015’). This is used in tracking the gravity vector in computing device coordinates at every sample, for every gyroscope sample gy_i from i = s to e, where s and e are the starting and ending sample numbers of a particular session. The process is as given in the illustrative expressions below:
d = (gy_i + gy_(i-1)) * (1/F_s)
axis = d / |d|
angle = |d|
R = Rodrigues(axis, angle)
g_i = R * g_(i-1)
In the above expressions, variables in bold are 3D vectors with components along the 3 axes. As mentioned above in step 204, after the gravity data is estimated, it is subtracted from the raw acceleration sample at the same timestamp to get a gravity-free linear acceleration. This method produces a latency-free result, which is a prime requisite in the proposed method of the present disclosure. FIG. 4, with reference to FIGS. 1-3, depicts a graphical representation of the filtered sensor data (linear acceleration data) according to an embodiment of the present disclosure. More specifically, FIG. 4 depicts acceleration data after gravity data removal. The step 204 of removing (or filtering) gravity data is performed by the orientation module 108B.
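A minimal Python sketch of this gravity tracking is given below for illustration; it follows the g_i = R * g_(i-1) form above, uses SciPy's rotation-vector constructor as a stand-in for the Rodrigues(axis, angle) step, and all names are assumptions rather than the specification's own:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def gravity_free_acceleration(g0, gyro, acc, fs):
        # g0  : (3,) initial gravity vector from the stationary interval
        # gyro: (N, 3) gyroscope samples (rad/s), device coordinates
        # acc : (N, 3) raw accelerometer samples (m/sec^2)
        # fs  : sampling rate F_s in Hz
        g = np.asarray(g0, dtype=float).copy()
        lin = np.empty_like(acc, dtype=float)
        for i in range(len(acc)):
            if i > 0:
                # Axis-angle increment d = (gy_i + gy_(i-1)) * (1/F_s);
                # |d| is the angle and d/|d| the axis (SORA-style).
                d = (gyro[i] + gyro[i - 1]) / fs
                # Euler-Rodrigues rotation built from the rotation vector d.
                g = Rotation.from_rotvec(d).apply(g)
            # Subtract the tracked gravity to get linear acceleration.
            lin[i] = acc[i] - g
        return lin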
In an embodiment of the present disclosure, at step 206, the one or more hardware processors 104 split the filtered sensor data (e.g., linear acceleration data or also referred as ‘gravity-free linear acceleration data’) into a plurality of windows. Each of the plurality of windows comprises a specific length corresponding to a pre-determined time interval, and each window comprises one or more unique sensor data samples from the filtered sensor data.
In an embodiment of the present disclosure, at step 208, the one or more hardware processors 104 identify a first start window (say ws_1) having a standard deviation (V_(ws_1)) less than a first pre-defined threshold (say S_L) and a second start window (say ws_2) having a standard deviation (V_(ws_2)) greater than a second pre-defined threshold (say S_H). In an embodiment of the present disclosure, the step of identifying the first start window and the second start window comprises: computing a standard deviation for each window from the plurality of windows; and performing a comparison of the standard deviation computed for each window with the first pre-defined threshold (say S_L) and the second pre-defined threshold (say S_H). In an example embodiment, the first pre-defined threshold S_L had a value (also referred to as acceleration standard deviation) of 0.15 m/sec^2, and the second pre-defined threshold S_H had a value of 0.27 m/sec^2, respectively, while conducting experiments for the present disclosure. In an embodiment, the standard deviation is a measure of variation(s) in the sensor data (e.g., acceleration sensor data).
In an embodiment of the present disclosure, at step 210, the one or more hardware processors 104 identify at least one start window (say ws_12) between the first start window (say ws_1) and the second start window (say ws_2), wherein the at least one start window (say ws_12) comprises a standard deviation (V_(ws_12)) that is less than the first pre-defined threshold (say S_L).
In an embodiment of the present disclosure, at step 212, the one or more hardware processors 104 identify a start sensor data sample ‘s’ between the at least one identified start window (say ws_12) and the second start window (say ws_2), based on (i) a mean for, and (ii) a value associated with, one or more sensor data samples that are between the first start window (say ws_1) and the identified at least one start window (say ws_12). In other words, the sensor data sample whose value is close to the mean is identified as the start sensor data sample ‘s’. In an embodiment of the present disclosure, the step of identifying a start sensor data sample ‘s’ between the at least one identified start window (say ws_12) and the second start window (say ws_2) comprises computing the mean for the one or more sensor data samples present between the first start window (say ws_1) and the identified at least one start window (say ws_12), and performing a comparison of the mean and a value associated with one or more sensor data samples between the at least one identified start window (say ws_12) and the second start window (say ws_2).
In an embodiment of the present disclosure, for the sake of brevity, the expression ‘first start window’ (ws_1) may be referred to as ‘first identified start window’ or ‘identified first start window’, and the above expressions may be interchangeably used. In an embodiment of the present disclosure, for the sake of brevity, the expression ‘at least one start window’ (ws_12) may be referred to as ‘identified at least one start window’ or ‘at least one identified start window’, and the above expressions may be interchangeably used. In an embodiment of the present disclosure, for the sake of brevity, the expression ‘second start window’ (ws_2) may be referred to as ‘second identified start window’ or ‘identified second start window’, and the above expressions may be interchangeably used.
In other words, the steps 210 and 212 are described by way of example below:
Let ws_1 be the first start window after the computing device is started with a standard deviation (or acceleration standard deviation) V_(ws_1) less than the first pre-defined threshold S_L, and let ws_2 be a subsequent start window with a standard deviation V_(ws_2) greater than the second pre-defined threshold S_H.
Let ws_12 be the at least one identified start window between ws_1 and ws_2 such that V_(ws_12) is less than S_L.
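For illustration only, a rough Python sketch of this window-based demarcation follows; the window length, the helper names, and the simplified first-window scan are assumptions, and the thresholds are the experimental values S_L = 0.15 and S_H = 0.27 m/sec^2 reported above:

    import numpy as np

    def window_stds(lin_acc, fs, win_sec=0.1):
        # Split the gravity-free acceleration signal into windows of
        # win_sec seconds (assumed length) and compute the standard
        # deviation V of each window.
        w = max(1, int(fs * win_sec))
        n = len(lin_acc) // w
        return lin_acc[: n * w].reshape(n, w).std(axis=1)

    def start_windows(stds, s_l=0.15, s_h=0.27):
        # ws_1: first window with V < S_L (device stationary);
        # ws_2: first later window with V > S_H (device moving).
        # Windows between them with V < S_L play the role of ws_12.
        ws_1 = int(np.argmax(stds < s_l))
        ws_2 = ws_1 + int(np.argmax(stds[ws_1:] > s_h))
        return ws_1, ws_2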