Abstract: An apparatus (100) for object tracking comprising at least one sensor (102) configured to capture a plurality of images or videos and generate a data stream from the captured images or videos; a capturing module (104) configured to capture one or more frames from the generated data stream; a buffering module (106) configured to store the captured frames received from the capturing module; an edge detection module (108) configured to extract one or more stored frames, and detect the edge contours of an object from the stored frames; a first processing module (110) configured to generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values; a second processing module (112) configured to extract one or more stored frames, and compare at least two stored frames to estimate a correlation value; and a decision making module (114) configured to simultaneously cooperate with the first processing module and the second processing module, the decision making module configured to compare the estimated confidence value with the estimated correlation value received simultaneously, and track an object based on the compared value. Fig. 1
DESC:TECHNICAL FIELD
[0001] The present invention relates to object tracking. The present invention more particularly relates to object tracking using cascading of Artificial Intelligence (AI) and non-AI techniques.
BACKGROUND
[0002] Object detection is a computer vision technique widely used for the identification and location of objects present within an image or a video. Thus, objects in a scene captured in an image/video can be correctly identified, and their location and orientation estimated, using an object detection technique.
[0003] Object tracking relates to the estimation of the state of a target object present in a current video frame from a previous video frame. It can be considered a process of locating moving objects over a period of time in a video and is widely used in applications such as surveillance, human-computer interaction, medical imaging, traffic flow monitoring, and human activity recognition. For real-time surveillance and data analysis, manual surveillance methods are not sufficient, as identifying the presence and location of a desired object or body within an image becomes impossible with real-time data.
[0004] US8527445B2, titled “Apparatus, System and Method for Object Detection and Identification”, describes an object detection module that detects objects by matching data from one or more sensors to known data of a target object and determining one or more correlation metrics for each object. An object tracking module tracks geographic locations for detected objects over time using subsequent data from the one or more sensors. A contextual data module determines one or more contextual indicators for detected objects based on the data from the one or more sensors. An artificial intelligence module estimates probabilities that detected objects comprise the target object based on the correlation metrics, the geographic locations, the contextual indicators, and one or more target contextual indicators associated with the target object. The artificial intelligence module estimates the probabilities using an artificial intelligence model, such as a Bayesian network.
[0005] US8467570B2, titled “Tracking System with Fused Motion and Object Detection”, describes a method for various applications such as tracking, identification, and so forth. In an application framework, model information may be developed and used to reduce the false alarm rate. With a background model, a motion likelihood for each pixel, for instance of a surveillance image, may be acquired. With a target model, an object likelihood for each pixel of the image may also be acquired. By joining these two likelihood distributions, detection accuracy may be significantly improved over the use of just one likelihood distribution in applications such as tracking.
[0006] WO2016095117A1 titled “Object detection with Neural Network” describes an apparatus comprising at least one processing core and at least one memory including computer program code, the at least one memory and the computer program code being configured to, with the at least one processing core, cause the apparatus at least to run a convolutional neural network comprising an input layer arranged to provide signals to a first convolutional layer and a last convolutional layer, run a first intermediate classifier, the first intermediate classifier operating on a set of feature maps of the first convolutional layer, and decide to abort or to continue processing of a signal set based on a decision of the first intermediate classifier.
[0007] Thus, the conventional methods for object tracking employ either AI (Artificial Intelligence) or non-AI techniques for the identification of objects in a scene or a video. Non-AI techniques are simple to understand and do not require any complex algorithms or any training data before processing, whereas AI techniques provide faster processing but require the AI models to be sufficiently trained. However, implementations combining both AI and non-AI techniques have rarely been used.
[0008] Therefore, there is still a need for a system/apparatus for object tracking using cascading of Artificial Intelligence (AI) and non-AI techniques.
SUMMARY
[0009] This summary is provided to introduce concepts of the invention related to an apparatus and a method for object tracking using cascading of Artificial Intelligence (AI) and non-AI techniques, as disclosed herein. This summary is neither intended to identify essential features of the invention as per the present invention nor is it intended for use in determining or limiting the scope of the invention as per the present invention.
[0010] For example, various embodiments herein may include one or more apparatuses and methods thereof. In accordance with an embodiment of the present invention, there is provided an apparatus for object tracking. The apparatus comprises at least one sensor configured to capture a plurality of images or videos and generate a data stream from the captured images or videos; a capturing module configured to cooperate with the sensor, the capturing module configured to capture one or more frames from the received input data stream; a buffering module configured to cooperate with the capturing module, the buffering module configured to store the captured frames; an edge detection module configured to cooperate with the buffering module, the edge detection module configured to: extract one or more stored frames, and detect the edge contours of an object from the stored frames; a first processing module configured to cooperate with the edge detection module, the first processing module configured to: generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values; a second processing module configured to cooperate with the buffering module, the second processing module configured to: extract one or more stored frames, and compare at least two stored frames to estimate a correlation value; and a decision making module configured to simultaneously cooperate with the first processing module and the second processing module, the decision making module configured to: compare the estimated confidence value with the estimated correlation value received simultaneously, and track an object based on the compared value.
[0011] In accordance with another embodiment of the present invention, there is provided a method for object tracking. The method comprises generating, by at least one sensor, a data stream from a plurality of images or videos captured by the sensor; capturing, by a capturing module, one or more frames from the input data stream received from the sensor; storing, by a buffering module, the captured frames received from the capturing module; extracting, by an edge detection module, one or more stored frames, and detecting the edge contours of an object from the stored frames; generating, by a first processing module, weight values of detected edge contours, and estimating a confidence value of the object based on the generated weight values; extracting, by a second processing module, one or more stored frames, and comparing at least two stored frames to estimate a correlation value; and comparing, by a decision making module, the estimated confidence value with the estimated correlation value received simultaneously from the first processing module and the second processing module, and tracking an object based on the compared value.
[0012] In accordance with another embodiment of the present invention, there is provided an apparatus for object tracking, the apparatus comprising at least one sensor configured to capture a plurality of images or videos and generate a data stream from the captured images or videos; a processor configured to cooperate with the sensor, the processor further configured to capture one or more frames from the generated data stream; store the captured frames; extract one or more stored frames, and detect the edge contours of an object from the stored frames; generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values; compare at least two stored frames to estimate a correlation value; and compare the estimated confidence value with the estimated correlation value, and track an object based on the compared value.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0013] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and modules/units.
[0014] Figure 1 illustrates a block diagram of an object tracking apparatus using AI and non-AI techniques, according to an exemplary implementation of the present invention.
[0015] Figure 2 illustrates a flow diagram depicting the steps involved in a method for object tracking using AI and non-AI techniques, in accordance with an exemplary implementation of the present invention.
[0016] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0017] The various embodiments of the present disclosure describe about object tracking using AI and non-AI techniques. The embodiments, more particularly, describe implementation of an object tracking apparatus using cascading of both AI and phase correlation (non-AI) techniques and method thereof.
[0018] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
[0019] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are meant to avoid obscuring of the present invention.
[0020] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0021] In accordance with an embodiment of the present invention, there is provided an apparatus for object tracking. The apparatus comprises at least one sensor configured to capture a plurality of images or videos and generate a data stream from the captured images or videos; a capturing module configured to cooperate with the sensor, the capturing module configured to capture one or more frames from the received input data stream; a buffering module configured to cooperate with the capturing module, the buffering module configured to store the captured frames; an edge detection module configured to cooperate with the buffering module, the edge detection module configured to: extract one or more stored frames, and detect the edge contours of an object from the stored frames; a first processing module configured to cooperate with the edge detection module, the first processing module configured to: generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values; a second processing module configured to cooperate with the buffering module, the second processing module configured to: extract one or more stored frames, and compare at least two stored frames to estimate a correlation value; and a decision making module configured to simultaneously cooperate with the first processing module and the second processing module, the decision making module configured to: compare the estimated confidence value with the estimated correlation value received simultaneously, and track an object based on the compared value.
[0022] In an aspect, the edge detection module is configured to detect the edge contours of the object by scanning at least one or more pixels in horizontal and vertical direction of each video frame.
[0023] In an aspect, the first processing module is configured to perform an AI (Artificial Intelligence) technique on the detected edge contours of the object to generate the weighted values and estimate the confidence value.
[0024] In an aspect, the second processing module is configured to perform a phase correlation technique on at least two or more successive frames; and wherein the correlation value is a normalized cross-correlation coefficient value estimated by measuring similarity between at least two or more successive frames.
[0026] In an aspect, the decision making module is configured to compare the estimated confidence value and the normalized cross-correlation value; and select an optimal value based on the comparison to perform object tracking by the technique corresponding to the optimal value.
[0027] In accordance with an embodiment of the present invention, there is provided a method for object tracking, the method comprising: generating, by at least one sensor, a data stream from a plurality of images or videos captured by the sensor; capturing, by a capturing module, one or more frames from the generated data stream received from the sensor; storing, by a buffering module, the captured frames received from the capturing module; extracting, by an edge detection module, one or more stored frames, and detecting the edge contours of an object from the stored frames; generating, by a first processing module, weight values of detected edge contours, and estimating a confidence value of the object based on the generated weight values; extracting, by a second processing module, one or more stored frames, and comparing at least two stored frames to estimate a correlation value; and comparing, by a decision making module, the estimated confidence value with the estimated correlation value received simultaneously from the first processing module and the second processing module, and tracking an object based on the compared value.
[0028] In an aspect, the step of detecting the edge contours of the object comprises scanning at least one or more pixels in horizontal and vertical direction of each video frame.
[0029] In an aspect, the step of generating the weighted values and estimating the confidence values comprises performing, by the first processing module, an AI (Artificial Intelligence) technique on the detected edge contours of the object.
[0030] In an aspect, the step of estimating the correlation value comprises performing, by the second processing module, a phase correlation technique on at least two or more successive frames; and measuring similarity between the at least two or more successive frames to estimate a normalized cross-correlation coefficient value as the correlation value.
[0031] In an aspect, the method further comprises the steps of comparing, by the decision making module, the estimated confidence value and the normalized cross-correlation value; and selecting an optimal value based on the comparison to perform object tracking by the technique corresponding to the optimal value.
[0032] In accordance with an embodiment, there is provided an apparatus for object tracking, the apparatus comprising at least one sensor configured to capture a plurality of images or videos and generate a data stream from the captured images or videos; a processor configured to cooperate with the sensor, the processor further configured to capture one or more frames from the generated data stream; store the captured frames; extract one or more stored frames, and detect the edge contours of an object from the stored frames; generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values; compare at least two stored frames to estimate a correlation value; and compare the estimated confidence value with the estimated correlation value, and track an object based on the compared value.
[0033] Figure 1 illustrates a block diagram depicting an apparatus (100) for object tracking using cascading of artificial intelligence (AI) and non-AI (phase correlation) techniques, according to an implementation of the present invention.
[0034] An apparatus for object tracking using cascading of artificial intelligence (AI) and phase correlation technique (hereinafter referred to as “apparatus”) (100) includes a sensor (102), a capturing module (104), a buffering module (106), an edge detection module (108), a first processing module (110), a second processing module (112), a decision making module (114), and a display unit (116).
[0035] In an embodiment, the sensor (102) is an Electro Optic (EO) sensor such as but not limited to a camera, an infra-red sensor and the like (not particularly shown). The sensor (102) is configured to capture a plurality of images and/or videos from a pre-defined area. In one embodiment, the pre-defined area includes a ground, an indoor area, an outdoor area, and any similar open and/or closed types of the area(s). In another embodiment, the sensor (102) includes a plurality of cameras, infra-red sensors, etc. which are installed in the pre-defined area(s). Each camera/ infra-red sensor is configured to capture the images and/or videos from its pre-determined range.
[0036] In another embodiment, the sensor (102) is configured to sense the scenes from the captured data of the images and/or videos and consider the captured data as an input. The capturing module (104) is configured to cooperate with the sensor (102) to receive the data input. The capturing module (104) is configured to capture one or more frames from the received data. In an embodiment, the capturing module (104) is configured to capture the frames at a pre-defined sensor frame rate.
[0037] The buffering module (106) is configured to cooperate with the capturing module (104) to store the captured frames for further processing.
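By way of a non-limiting illustration, the capturing module (104) and buffering module (106) might be sketched as follows with OpenCV; the camera index, the pre-defined frame rate, and the buffer depth are assumptions made for this example, not values prescribed by the invention.

```python
import collections

import cv2

SENSOR_FPS = 30                              # assumed pre-defined sensor frame rate
frame_buffer = collections.deque(maxlen=64)  # buffering module (106); depth is assumed

cap = cv2.VideoCapture(0)                    # EO sensor (102); device index 0 is assumed
cap.set(cv2.CAP_PROP_FPS, SENSOR_FPS)
while cap.isOpened() and len(frame_buffer) < frame_buffer.maxlen:
    ok, frame = cap.read()                   # capturing module (104) grabs one frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_buffer.append(gray)                # stored frames for modules (108) and (112)
cap.release()
```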
[0038] The edge detection module (108) is configured to cooperate with the buffering module (106) to extract one or more stored frames. In an embodiment, the edge detection module (108) is configured to detect the edge contours of one or more objects from the stored frames received from the buffering module (106) by scanning the pixels in an image of the stored frames. The edge detection module (108) is configured to detect the edge contours in the horizontal direction by estimating the difference between two successive pixels, x_i and x_(i+1):
[0039] If (x_(i+1) - x_i > Threshold), then x_(i+1) = 255; else x_(i+1) = 0.
[0040] The edge detection module (108) is further configured to detect the edge contours in the vertical direction by estimating the difference between two successive pixels, y_i and y_(i+1):
[0041] If (y_(i+1) - y_i > Threshold), then y_(i+1) = 255; else y_(i+1) = 0.
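For illustration, the horizontal and vertical scans above can be expressed as a few lines of vectorised Python; this is a minimal sketch, and the threshold value used here is an assumed tuning parameter, not one prescribed by the invention.

```python
import numpy as np

def edge_contours(frame: np.ndarray, threshold: int = 30) -> np.ndarray:
    """Binarise edges by thresholding differences between successive pixels,
    scanning horizontally (x_(i+1) - x_i) and vertically (y_(i+1) - y_i)."""
    f = frame.astype(np.int32)               # avoid uint8 wrap-around on subtraction
    edges = np.zeros_like(f)
    # Horizontal scan: set pixel x_(i+1) to 255 where the difference exceeds the threshold.
    edges[:, 1:][(f[:, 1:] - f[:, :-1]) > threshold] = 255
    # Vertical scan: set pixel y_(i+1) to 255 where the difference exceeds the threshold.
    edges[1:, :][(f[1:, :] - f[:-1, :]) > threshold] = 255
    return edges.astype(np.uint8)
```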
[0042] In an embodiment, the present invention uses two processing modules for object detection and tracking, wherein the first processing module (110) employs an AI (Artificial Intelligence) model/technique, and the second processing module (112) employs a non-AI technique. In an embodiment, the first processing module (110) is trained with the detected edge contours of each object in a frame by using a Single Shot Detection (SSD) model. The first processing module (110) is configured to generate weight values of the detected edge contours after it is trained.
[0043] The first processing module (110) is configured to estimate the confidence values/accuracy values of each object in the frame based on the generated weight values and is further configured to validate the AI model, for example the SSD model, by using the estimated confidence/accuracy values. The first processing module (110) is configured to detect the objects in the frames during testing and is further configured to label a confidence value across each detected object. In an example, a user can select a particular object in a frame for tracking with respect to the confidence value labelled by the first processing module (110). The confidence value is denoted by the symbol ‘α’ (alpha).
[0044] The accuracy of the AI model employed by the first processing module (110) varies with the number of training epochs or iterations. The model should neither overfit nor underfit, as either may lead to wrong results. Testing of the AI model starts once the training is finished.
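As a rough illustration of how a first processing module (110) could obtain per-object confidence values (α) from a Single Shot Detection model, the sketch below uses a pretrained torchvision SSD; the patent instead trains the model on detected edge contours, so the pretrained weights and preprocessing here are assumptions made purely for the example.

```python
import torch
from torchvision.models.detection import SSD300_VGG16_Weights, ssd300_vgg16

# Pretrained SSD model standing in for the trained first processing module (110).
weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()

def confidence_values(frame_rgb: torch.Tensor) -> torch.Tensor:
    """frame_rgb: float tensor of shape (3, H, W) with values in [0, 1].
    Returns one confidence value (alpha) per detected object."""
    with torch.no_grad():
        detections = model([frame_rgb])[0]   # dict with 'boxes', 'scores', 'labels'
    return detections["scores"]
```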
[0045] The second processing module (112) is configured to cooperate with the buffering module (106) to extract one or more stored frames. The second processing module (112) is further configured to compare at least two stored frames to estimate a correlation value. Typically, the second processing module (112) employs a phase correlation technique as a non-AI (non-Artificial Intelligence) technique to estimate a normalised cross-correlation coefficient value as the correlation value, by measuring the similarity between two successive frames. The normalised cross-correlation coefficient value is denoted by the symbol ‘ρ’ (rho) and varies from 0 to 1.
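A minimal sketch of the similarity measure follows: the normalised cross-correlation coefficient (ρ) between two successive greyscale frames, computed directly with NumPy. Clamping negative correlations to zero so that ρ stays in [0, 1], as described above, is an implementation assumption; OpenCV's cv2.phaseCorrelate, whose peak response can serve a similar role, is noted for comparison.

```python
import numpy as np

def correlation_value(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Normalised cross-correlation coefficient (rho) between two frames."""
    a = prev_frame.astype(np.float64).ravel()
    b = curr_frame.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    # Pearson-style NCC lies in [-1, 1]; clamp to [0, 1] per the description above.
    return max(0.0, float(np.dot(a, b) / denom))

# Alternatively, phase correlation proper: cv2.phaseCorrelate(f32_prev, f32_curr)
# returns a (shift, response) pair whose peak response can stand in for rho
# (an assumption-level substitute, not prescribed by the invention).
```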
[0046] The decision making module (114) is configured to simultaneously cooperate with the first processing module (110) and the second processing module (112) and is configured to compare the estimated confidence value (α) with the estimated correlation value (ρ) received simultaneously. The decision making module (114) is further configured to track an object based on the comparison between the estimated confidence value (α) and the estimated correlation value (ρ). For example, the decision making module (114) chooses the better technique from the AI and non-AI techniques for object tracking based on the estimated α and ρ values. These values are compared in real time at the sensor frame rate, and a switch-over of technique can happen between frames (i.e., AI to non-AI or vice versa).
[0047] A video display unit (116) is configured to cooperate with the decision making module (114) to display the object based on the technique chosen by the decision making module (114).
[0048] Table 1 illustrates the estimated confidence values (α) and normalised cross-correlation coefficient values (ρ), along with the technique preferred by the decision making module (114) for better object tracking, according to an implementation of the present invention.
Table 1

| S. No. | Confidence/accuracy value of AI trained model (α) | Normalised cross-correlation coefficient (ρ) | Preferred technique selected for object tracking |
|---|---|---|---|
| 1 | 90% | 80% (0.8) | AI technique |
| 2 | 70% | 90% (0.9) | Phase correlation |
| 3 | 60% | 50% | AI technique |
| 4 | >50% (if α = ρ) | >50% (if α = ρ) | AI technique |
| 5 | Object is not trained | Rule-based, so no training is needed | Phase correlation |
| 6 | 50% | 50% | Phase correlation, since a confidence value ≤ 50% does not indicate a good AI model |
[0049] As depicted in Table 1, the confidence value is denoted by the symbol ‘α’ and the normalised cross-correlation coefficient value is denoted by the symbol ‘ρ’. The decision making module (114) selects an optimal value based on the comparison between α and ρ, and performs object tracking with the technique corresponding to the optimal value.
[0050] Six cases are listed in Table 1.
[0051] Case 1: α = 90%, ρ = 80%; the AI technique is preferred for object tracking.
[0052] Case 2: α = 70%, ρ = 90%; phase correlation is the preferred technique for object tracking.
[0053] Case 3: α = 60%, ρ = 50%; the AI technique is the better approach for object tracking.
[0054] Case 4: if α and ρ are equal and both greater than 50%, the AI technique is preferred for object tracking.
[0055] Case 5: if the object is neither trained with the AI model nor present in the database, phase correlation is preferred for object tracking, since it is a conventional, rule-based technique that requires no training.
[0056] Case 6: if α = 50% and ρ = 50%, phase correlation is preferred for object tracking, since a confidence value less than or equal to 50% does not indicate a good AI model.
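The six cases above can be condensed into a short decision rule. The sketch below is one plausible reading of Table 1; the behaviour outside the listed cases (e.g., exact tie-breaking) is an assumption.

```python
from typing import Optional

def choose_technique(alpha: Optional[float], rho: float) -> str:
    """Decision making module (114): pick the technique per Table 1.
    alpha is None when the object was never trained into the AI model (Case 5)."""
    if alpha is None:                 # Case 5: untrained object, fall back to rules
        return "phase_correlation"
    if alpha <= 0.5:                  # Case 6: alpha <= 50% is not a good AI model
        return "phase_correlation"
    if alpha >= rho:                  # Cases 1, 3, 4: AI wins on higher or equal alpha
        return "ai"
    return "phase_correlation"        # Case 2: rho exceeds alpha
```

For instance, choose_technique(0.7, 0.9) returns "phase_correlation", matching Case 2, while choose_technique(0.9, 0.8) returns "ai", matching Case 1.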
[0057] Figure 2 illustrates a flow diagram (200) depicting the steps involved in a method for object tracking using AI and non-AI techniques, according to an exemplary implementation of the present invention.
[0058] In Figure 2, the flow diagram (200) starts at step (202), where a capturing module (104) is configured to capture frames from a data stream received from a sensor, wherein the sensor is configured to capture a plurality of images and/or videos and generate the data stream. The captured frames are stored in a buffering module (106) for further processing, as shown at step (204). The buffering module (106) can be accessed by the edge detection module (108) and the second processing module (112). At step (206), the edge detection module (108) extracts one or more stored frames from the buffering module (106) and detects the edge contours of an object from the stored frames. At step (208), the first processing module (110) generates weight values of the detected edge contours and estimates a confidence value of the object based on the generated weight values. In an embodiment, the first processing module (110) is trained with the detected edge contours of each object in a frame by using a Single Shot Detection (SSD) model. Typically, the first processing module (110) is configured to generate weight values of the detected edge contours after the first processing module (110) is trained.
[0059] At step 210, the first processing module (110) is validated by testing, i.e., by estimating the confidence/accuracy values of each object in the frame based on the generated weight values. In an embodiment, the first processing module (110) is configured to detect the objects in the frames during testing and is further configured to label a confidence value across each detected object.
[0060] At step 212, the second processing module (112) performs the phase correlation technique and extracts one or more stored frames. The second processing module (112) is configured to measure the similarity between at least two or more successive frames to estimate a normalised cross-correlation coefficient value, as shown at step 214.
[0061] Finally, at step 216, the decision making module (114) is configured to compare the estimated confidence value with the normalised cross-correlation coefficient value, and an optimal value is selected based on the comparison to perform object tracking by the technique corresponding to the optimal value.
[0062] Thus, based on the above, as illustrated in Figure 2, the method for object tracking comprises the following steps: generating, by at least one sensor, a data stream from a plurality of images and/or videos captured by the sensor; capturing, by the capturing module (104), one or more frames from the generated data stream received from the sensor;
storing, by the buffering module (106), the captured frames received from the capturing module (104); extracting, by the edge detection module (108), one or more stored frames, and detecting the edge contours of an object from the stored frames; generating, by the first processing module (110), weight values of detected edge contours, and estimating a confidence value of the object based on the generated weight values; extracting, by the second processing module (112), one or more stored frames, and comparing at least two stored frames to estimate a correlation value; and comparing, by the decision making module (114), the estimated confidence value with the estimated correlation value received simultaneously from the first processing module (110) and the second processing module (112), and tracking an object based on the compared value.
[0063] The aforesaid method is typically performed by the apparatus for object tracking as illustrated in Figure 1. In exemplary embodiments, the apparatus may be implemented through at least a processor such as microcontroller, microprocessor, digital signal processor (DSP), field programmable gate array (FPGA), application specific integrated circuit (ASIC), or other like electronic processing devices. In other exemplary embodiments, the processor may be a computing device such as a computer, a tablet, etc.
[0064] In an exemplary embodiment, the processor is configured to cooperate with a sensor (102), and capture frames from a data stream received from the sensor (102), wherein the sensor is configured to capture a plurality of images and/or videos and generate the data stream. The processor is further configured to capture one or more frames from the generated data stream; store the captured frames; extract one or more stored frames, and detect the edge contours of an object from the stored frames; generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values; compare at least two stored frames to estimate a correlation value; and compare the estimated confidence value with the estimated correlation value, and track an object based on the compared value.
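Tying the modules together, a single-processor embodiment might run the loop sketched below; it reuses the illustrative helpers from the earlier sketches (edge_contours, confidence_values, correlation_value, choose_technique), all of which are assumptions made for illustration rather than prescribed implementations.

```python
import collections

import cv2
import torch

recent = collections.deque(maxlen=2)          # two most recent stored frames
cap = cv2.VideoCapture(0)                     # assumed EO sensor (102)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    recent.append(gray)
    if len(recent) < 2:
        continue
    prev, curr = recent
    edges = edge_contours(curr)               # edge detection module (108)
    # The patent trains the AI model on edge contours; here the raw RGB frame
    # is fed to the pretrained SSD sketch instead (an assumption).
    rgb = torch.from_numpy(frame[..., ::-1].copy()).permute(2, 0, 1).float() / 255.0
    scores = confidence_values(rgb)           # first processing module (110)
    alpha = float(scores.max()) if len(scores) else None
    rho = correlation_value(prev, curr)       # second processing module (112)
    technique = choose_technique(alpha, rho)  # decision making module (114)
    # ... track the object with the selected technique and display it (116) ...
cap.release()
```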
[0065] In another exemplary embodiment, the processor may comprise the above described modules (104 to 114), where each module would be configured to perform its respective operations.
[0066] In the present invention, two techniques are available for better object tracking, namely AI and phase correlation (non-AI). The AI technique mainly depends on building, training, and testing a model. The phase correlation technique measures the similarity between two successive frames and estimates the normalised cross-correlation coefficient. Thus, the present invention advantageously allows the use of either technique and the switching over from one technique to the other to perform better object tracking in real time.
[0067] The foregoing description has been presented merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within its scope.
CLAIMS:
1. An apparatus (100) for object tracking, the apparatus (100) comprising:
at least one sensor (102) configured to capture a plurality of images or videos and generate a data stream from the captured images or videos;
a capturing module (104) configured to cooperate with the sensor (102), the capturing module (104) configured to capture one or more frames from the generated data stream;
a buffering module (106) configured to cooperate with the capturing module (104), the buffering module (106) configured to store the captured frames;
an edge detection module (108) configured to cooperate with the buffering module (106), the edge detection module (108) configured to:
extract one or more stored frames, and
detect the edge contours of an object from the stored frames;
a first processing module (110) configured to cooperate with the edge detection module (108), the first processing module (110) configured to:
generate weight values of the detected edge contours, and
estimate a confidence value of the object based on the generated weight values;
a second processing module (112) configured to cooperate with the buffering module (106), the second processing module (112) configured to:
extract one or more stored frames, and
compare at least two stored frames to estimate a correlation value; and
a decision making module (114) configured to simultaneously cooperate with the first processing module (110) and the second processing module (112), the decision making module (114) configured to:
compare the estimated confidence value with the estimated correlation value received simultaneously, and track an object based on the compared value.
2. The apparatus as claimed in claim 1, wherein the edge detection module (108) is configured to:
detect the edge contours of the object by scanning at least one or more pixels in horizontal and vertical direction of each video frame.
3. The apparatus as claimed in claim 1, wherein the first processing module (110) is configured to perform an AI (Artificial Intelligence) technique on the detected edge contours of the object to generate the weighted values and estimate the confidence value.
4. The apparatus as claimed in claim 1, wherein the second processing module (112) is configured to:
perform a phase correlation technique on at least two or more successive frames; and
wherein the correlation value is a normalized cross-correlation coefficient value estimated
by measuring similarity between at least two or more successive frames.
5. The apparatus as claimed in any one of claims 1 to 4, wherein the decision making module (114) is configured to:
compare the estimated confidence value and the normalized cross-correlation value; and
select an optimal value based on the comparison to perform object tracking by the technique corresponding to the optimal value.
6. A method for object tracking, the method comprising:
generating, by at least one sensor, a data stream from a plurality of images or videos captured by the sensor;
capturing, by a capturing module (104), one or more frames from the generated data stream received from the sensor;
storing, by a buffering module (106), the captured frames received from the capturing module (104);
extracting, by an edge detection module (108), one or more stored frames, and detecting the edge contours of an object from the stored frames;
generating, by a first processing module (110), weight values of detected edge contours, and estimating a confidence value of the object based on the generated weight values;
extracting, by a second processing module (112), one or more stored frames, and comparing at least two stored frames to estimate a correlation value; and
comparing, by a decision making module (114), the estimated confidence value with the estimated correlation value received simultaneously from the first processing module (110) and the second processing module (112), and tracking an object based on the compared value.
7. The method as claimed in claim 6, wherein the step of detecting the edge contours of the object comprises scanning at least one or more pixels in horizontal and vertical direction of each video frame.
8. The method as claimed in claim 7, wherein the step of generating the weighted values and estimating the confidence values comprises performing, by the first processing module (110), an AI (Artificial Intelligence) technique on the detected edge contours of the object.
9. The method as claimed in claim 6, wherein the step of estimating the correlation value comprises:
performing, by the second processing module (112), a phase correlation technique on at least two or more successive frames; and
measuring similarity between the at least two or more successive frames to estimate a normalized cross-correlation coefficient value as the correlation value.
10. The method as claimed in any one of claims 6 to 9, further comprising:
comparing, by the decision making module (114), the estimated confidence value and the normalized cross-correlation value; and
selecting, by the decision making module (114), an optimal value based on the comparison to perform object tracking by the technique corresponding to the optimal value.
11. An apparatus (100) for object tracking, the apparatus (100) comprising:
at least one sensor (102) configured to capture a plurality of images or videos and generate a data stream from the captured images or videos;
a processor configured to cooperate with the sensor (102), the processor further configured to:
capture one or more frames from the generated data stream;
store the captured frames;
extract one or more stored frames, and detect the edge contours of an object from the stored frames;
generate weight values of the detected edge contours, and estimate a confidence value of the object based on the generated weight values;
compare at least two stored frames to estimate a correlation value; and
compare the estimated confidence value with the estimated correlation value, and track an object based on the compared value.