Abstract: ABSTRACT “METHOD FOR CLASSIFYING A TARGET FROM MICRO-DOPPLER SIGNATURES OF TARGET IN REAL-TIME AND SYSTEM THEREOF” The present invention relates to radar signal processing and the classification of radar targets in real-time. In one embodiment, the method for classifying a target from micro-Doppler signatures of the target in real-time comprises: determining a micro-Doppler signature of the target from echo signals received from the target (103), accumulating the determined micro-Doppler signatures over a pre-determined time period (T) (104), converting the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values (105), comparing the pixel image of the target with one or more pre-stored images in a catalogue (106), determining a class of the pixel image of the target based on the comparison (107) and, in the event of a successful determination, assigning the class to the target (109). Figure 1 (for publication)
Claims: We Claim:
1. A method for classifying a target from micro-Doppler signatures of the target in real-time, the method comprising:
determining a micro-Doppler signature of the target from echo signals received from the target (103);
accumulating the determined micro-Doppler signatures over a pre-determined time period (T) (104);
converting the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values (105);
comparing the pixel image of the target with one or more pre-stored images in a catalogue (106);
determining a class of the pixel image of the target based on the comparison (107); and
in the event of a successful determination, assigning the class to the target (109).
2. The method as claimed in claim 1, wherein, in the event of an unsuccessful determination, the method comprises creating a new class of the pixel image of the target and assigning the new class to the target (108).
3. The method as claimed in claim 1, wherein the method, prior to the step of determining the micro-Doppler signature, comprises:
receiving echo signals reflected from the target in response to electromagnetic waves transmitted by a radar towards the target; and
detecting the target based on a constant false alarm rate (CFAR) value derived from the echo signals.
4. The method as claimed in claim 1, wherein the micro-Doppler signature is obtained using a tool selected from a short-time Fourier transform (STFT), a bispectrum and a bicoherence, wherein a window length of the short-time Fourier transform (STFT) is variable.
5. The method as claimed in claim 1, wherein the method comprises creating a catalogue with a plurality of pre-stored images corresponding to the respective micro-Doppler signatures of respective targets.
6. The method as claimed in claim 1, wherein the new class is created using a deep neural architecture model.
7. The method as claimed in claim 6, wherein the method comprises:
training the deep neural architecture model;
testing the trained deep neural architecture model with one or more pre-stored images in the catalogue; and
creating the new class in the event of failure of the deep neural architecture model during testing and training the deep neural architecture model with the new class.
8. A system for classifying a target from micro-Doppler signatures of the target in real-time, the system comprising at least a processing device configured to:
determine a micro-Doppler signature of the target from echo signals received from the target;
accumulate the determined micro-Doppler signatures over a pre-determined time period (T);
convert the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values;
compare the pixel image of the target with one or more pre-stored images in a catalogue;
determine a class of the pixel image of the target based on the comparison; and
in the event of a successful determination, assign the class to the target.
9. The system as claimed in claim 8, wherein, in the event of an unsuccessful determination, the processing device is configured to create a new class of the pixel image of the target and assign the new class to the target.
10. The system as claimed in claim 8, wherein the processing device is configured to:
receive echo signals reflected from the target in response to electromagnetic waves transmitted by a radar towards the target; and
detect the target based on a constant false alarm rate (CFAR) value derived from the echo signals.
11. The system as claimed in claim 8, wherein the processing device is further configured to create the new class through a deep neural network.
12. The system as claimed in claim 11, wherein the processing device is further configured to:
train the deep neural network model;
test the trained deep neural network model with one or more pre-stored images in the catalogue; and
create the new class in the event of failure of the deep neural network model during testing and train the deep neural network model with the new class.
Dated this 31st day of March, 2022
Bharat Electronics Limited
(By their Agent)
(D. Manoj Kumar) (IN/PA 2110)
KRISHNA & SAURASTRI ASSOCIATES LLP
Description: FORM – 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(SEE SECTION 10, RULE 13)
METHOD FOR CLASSIFYING A TARGET FROM MICRO-DOPPLER SIGNATURES OF TARGET IN REAL-TIME AND SYSTEM THEREOF
BHARAT ELECTRONICS LIMITED
HAVING ITS ADDRESS AT
OUTER RING ROAD, NAGAVARA,
BANGALORE,
KARNATAKA-560045
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
TECHNICAL FIELD OF THE INVENTION
[0001] The present disclosure/invention relates generally to the field of signal processing and, more particularly, to radar signal processing and the classification of radar targets in real-time.
BACKGROUND OF THE INVENTION
[0002] Generally, electromagnetic waves transmitted by a radar towards a target are reflected from the target as echo signals, which are used to extract the target characteristics. In the case of a moving target, during the reception of the echo signals, the radar carrier frequency may be shifted due to the Doppler effect induced by the moving target. Additionally, if the target or any part of the target undergoes micro-motions such as mechanical vibrations or rotations, further frequency modulations or Doppler modulations are induced on the received echo signals, referred to as the micro-Doppler effect. These Doppler modulations become a distinctive signature of the target which typically provides the identity of the target and is used to classify the target.
[0003] The use of various micro-Doppler measurement tools such as the STFT, bispectrum and bicoherence has become popular for radar target classification in recent years. The use of various targets and waveforms with these tools has expanded the knowledge of how effective the tools are in classifying targets. However, the success of the classification process depends on the quality of the available catalogue of micro-Doppler signatures corresponding to various targets and waveforms. The operator of the radar must be well acquainted with the catalogue of targets, radar waveforms and micro-Doppler signatures. Based on this knowledge, the operator analyses the micro-Doppler signatures observed on the radar display and infers the types and classes of the targets. Such manual or visual analysis may not be practical in many situations where the classification must be done as quickly as possible, and the quality of the classification depends on the visual and analytical capability of the individual operator and may vary from person to person. Therefore, a method which can perform this task independently of the operator, with acceptable accuracy and in real-time, is needed.
[0004] Machine intelligence, a stream of information technology, studies how to build an intelligent system that can make decisions based on input parameters. Advances in machine intelligence over the last 20 years have made it feasible to create automatic decision-making systems. One example of such systems is an automated monitoring system that can identify objects in range, determine their class and find their parameters.
[0005] The micro-Doppler content appears in radar returns because of the Doppler effect produced by micro-motions. The time-varying response of the micro-motions forms a micro-Doppler signature in the form of multi-component frequency-modulated signals. The micro-Doppler effect arises from the movement of a non-rigid object, where the global motion consists of many local micro-motions.
[0006] Therefore, there is a need in the art for a method for classifying a target from micro-Doppler signatures of the target in real-time, and a system thereof, that does not involve excessive complexity.
SUMMARY OF THE INVENTION
[0007] An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
[0008] Accordingly, one aspect of the present invention relates to a method for classifying a target from micro-Doppler signatures of the target in real-time, the method comprising: determining a micro-Doppler signature of the target from echo signals received from the target (103), accumulating the determined micro-Doppler signatures over a pre-determined time period (T) (104), converting the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values (105), comparing the pixel image of the target with one or more pre-stored images in a catalogue (106), determining a class of the pixel image of the target based on the comparison (107), in the event of a successful determination, assigning the class to the target (109), and, in the event of an unsuccessful determination, creating a new class of the pixel image of the target and assigning the new class to the target (108).
[0009] Another aspect of the present invention relates to a system for classifying a target from micro-Doppler signatures of the target in real-time, the system comprising at least a processing device configured to: determine a micro-Doppler signature of the target from echo signals received from the target, accumulate the determined micro-Doppler signatures over a pre-determined time period (T), convert the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values, compare the pixel image of the target with one or more pre-stored images in a catalogue, determine a class of the pixel image of the target based on the comparison, in the event of a successful determination, assign the class to the target and, in the event of an unsuccessful determination, create a new class of the pixel image of the target and assign the new class to the target.
[0010] Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0011] The detailed description is described with reference to the accompanying figures.
[0012] Figure 1 shows a flowchart of the method involving the reception of the target echo by the radar and the processing of the echo with respect to the catalogue of the classifier according to an exemplary implementation of the present disclosure/ invention.
[0013] Figure 2 shows the image signatures for the different classes according to an exemplary implementation of the present disclosure/invention.
[0014] Figure 3 shows a flowchart of the method of comparing the test image obtained from the radar with the images available in the catalogue according to an exemplary implementation of the present disclosure/invention.
[0015] Figures 4 and 5 show the flowchart of testing the DCNN model according to an exemplary implementation of the present disclosure/invention.
[0016] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF THE INVENTION
[0017] The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
[0018] The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
[0019] It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
[0020] By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
[0021] Figures discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way that would limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged communications system. The terms used to describe various embodiments are exemplary. It should be understood that these are provided to merely aid the understanding of the description, and that their use and definitions in no way limit the scope of the invention. Terms first, second, and the like are used to differentiate between objects having the same terminology and are in no way intended to represent a chronological order, unless where explicitly stated otherwise. A set is defined as a non-empty set including at least one element.
[0022] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these details. One skilled in the art will recognize that embodiments of the present disclosure, some of which are described below, may be incorporated into a number of systems.
[0023] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present disclosure and are meant to avoid obscuring the present disclosure.
[0024] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0025] The various embodiments of the present invention describe a method for classifying a target from micro-Doppler signatures of the target and a system for the same. The micro-Doppler signatures of the target are analyzed without human intervention and classified in real-time, thereby contributing to the smartness of a radar.
[0026] A primary scheme for the micro-Doppler study is the time-frequency representation. The traditional spectrogram, defined as the squared modulus of the short-time Fourier transform (STFT), is a broadly developed tool utilized for the time-frequency study of micro-Doppler information. The spectrogram is computed using a sliding window that truncates the received signal into a sequence of short-time segments, followed by a Fourier transform of each short-time segment. A larger window length gives higher frequency resolution in the STFT but, as a result, poorer temporal resolution at the same point in time. Therefore, the window length used by the short-time Fourier transform (STFT) for computing the spectrogram in the present invention is variable (sliding).
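For illustration only, the following is a minimal sketch of how such a spectrogram may be computed with a variable window length, assuming a Python environment with NumPy and SciPy; the sample rate, the simulated echo and the window lengths shown are illustrative assumptions and not values prescribed by the present invention.

```python
# Minimal sketch of spectrogram computation via the short-time Fourier
# transform (STFT); all numerical values are illustrative placeholders.
import numpy as np
from scipy.signal import stft

fs = 1000.0                      # assumed sample / pulse-repetition rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
# toy echo: carrier Doppler plus a sinusoidal micro-Doppler modulation
echo = np.exp(1j * 2 * np.pi * (50 * t + 10 * np.sin(2 * np.pi * 3 * t)))

def spectrogram(x, fs, win_len):
    """Squared modulus of the STFT for a given (variable) window length."""
    f, tt, Z = stft(x, fs=fs, window="hann", nperseg=win_len,
                    noverlap=win_len // 2, return_onesided=False)
    return f, tt, np.abs(Z) ** 2

# A larger window trades temporal resolution for frequency resolution.
for win_len in (64, 128, 256):
    f, tt, S = spectrogram(echo, fs, win_len)
    print(win_len, S.shape)
```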
[0027] Deep Convolutional Neural Networks (DCNNs) are used for the classification of the micro-Doppler signature images estimated by the radar. Notwithstanding that machine learning techniques have been applied widely to radar signature image classification in the last decades, their performance could not be fully realized in the radar area due to the deficiency of training examples. Obtaining radar images is a critical task because of the very high labour and financial costs involved. However, due to recent advances in the radar area, it is possible to collect large amounts of training data. This allows an authentic classification model to be trained from authentic data instead of data generated using transfer learning and Generative Adversarial Networks (GANs).
[0028] Hence, the present invention provides a DCNN-based method to classify 8 (in general, N) micro-Doppler signatures. Short-time Fourier transform (STFT) signature images of 8 different classes are used for training. The presented method is used to classify the 8 different activities obtained from the Doppler radar. As per the results, the application of CNNs improves the accuracy of the classification task.
[0029] Deep learning techniques can be used to improve the recognition accuracy of different radar target signatures, as a deep neural network consists of multiple layers that help to detect features and form a classification boundary. In one embodiment, the deep neural network consists of 4 to 12 programmably selectable layers which help to detect features and form a classification boundary. In the case of class label classification, convolutional neural networks are widely used in the literature, and several deep learning frameworks have been developed around them. However, the performance of these models is low because of inefficient pre-processing and architecture.
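For illustration only, the following is a minimal sketch of a network whose convolutional depth is programmably selectable between 4 and 12 layers, assuming TensorFlow/Keras; the filter counts, kernel sizes, pooling schedule and the 8-class softmax output are assumptions made for this sketch and are not mandated by the present invention.

```python
# Minimal Keras sketch of a programmably selectable 4-12 layer classifier.
# Filter counts, kernel sizes and the 8-class output are illustrative only.
import tensorflow as tf

def build_classifier(num_conv_layers: int = 6, num_classes: int = 8):
    assert 4 <= num_conv_layers <= 12, "depth is selectable between 4 and 12"
    inputs = tf.keras.Input(shape=(299, 299, 3))
    x = inputs
    for i in range(num_conv_layers):
        x = tf.keras.layers.Conv2D(32 * min(2 ** (i // 2), 8), 3,
                                   padding="same", activation="relu")(x)
        if i % 2 == 1:                      # downsample every second layer
            x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_classifier(num_conv_layers=8)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```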
[0030] DCNNs have been proposed to classify distinct human actions using micro-Doppler signatures. A very similar application, the gait detection of multiple humans, has been carried out, and there has also been an attempt to discriminate a large number of classes of aided and unaided human motion for healthcare purposes. However, the proposed method shows how to obtain training data for the model creation without handling too much complexity.
[0031] In one embodiment, the present invention relates to a method which classifies radar targets by analyzing the micro-Doppler signatures generated by the targets. The shape and periodicity of the micro-Doppler signature corresponding to a radar target depends on the target’s RCS (Radar Cross Section), movement, the radar waveform etc. A visual inspection of the micro-Doppler signature may be enough to know the nature or class of the target based on the catalogue of the micro-Doppler signatures of various targets previously recorded. But in real-time applications visual inspection may not always be possible. Therefore, a method is necessary which will receive the micro-Doppler signature and the catalogue in real-time and analyze the signature in real-time. The method should finally point out the class of the radar target as the result of its analysis.
[0032] In one embodiment, the present invention provides a radar signal processing and methods for the intelligent classification of the radar targets in real-time.
[0033] In one embodiment, the radar signals received may be from a through-wall radar or a wall piercing radar.
[0034] In one embodiment, the present invention relates to a scalable and real-time method of classification of radar targets based on their micro-Doppler signatures, comprising the following modules:
a) Reception of the micro-Doppler signature in real-time
b) Conversion of the signature into images with valid pixel values
c) The creation of a catalogue of such images with respect to various targets and radar waveforms
d) The analysis of a test image with the help of the catalogue
e) Determination of the class of the test target based on the analysis
[0035] In one embodiment, the present invention provides a method for the reception of the micro-Doppler signature in real-time. The method of the present invention does not affect the performance of the radar, as the reception and recording (if necessary) of the signatures are done in hardware independent of the signal processing chain.
[0036] In one embodiment, the present invention provides a method for the conversion of the received and recorded micro-Doppler signatures into images with pixel values which may be processed subsequently.
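For illustration only, the following is a minimal sketch, assuming NumPy, of one way the accumulated signature may be mapped to valid 8-bit pixel values; the logarithmic scaling and the 60 dB dynamic range are illustrative assumptions and not requirements of the present invention.

```python
# Minimal sketch of converting an accumulated micro-Doppler spectrogram
# into a pixel image with valid (0-255) pixel values.
import numpy as np

def signature_to_pixels(spectrogram: np.ndarray) -> np.ndarray:
    """Map a non-negative time-frequency magnitude array to uint8 pixels."""
    s = 10.0 * np.log10(spectrogram + 1e-12)         # dB scale (assumed)
    s = np.clip(s, s.max() - 60.0, s.max())          # keep a 60 dB dynamic range
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalise to [0, 1]
    return (255.0 * s).astype(np.uint8)
```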
[0037] In one embodiment, the present invention provides a method for the creation of a catalogue for the images with respect to different types of targets and radar waveforms. This catalogue will be essential for the target classification method as a reference to be used for declaring the type or class of the radar target.
[0038] In one embodiment, the present invention provides a method for analyzing a test image with the help of the catalogue. The analysis involves finding the item in the catalogue having the maximum similarity with the test image. This similarity may be found using established classifier methods.
[0039] In one embodiment, the present invention provides a method to determine the class or type of the radar target based on the above analysis. The similarity found in the analysis between the test image and the catalogue item helps in estimating the class of the radar target under test. If the similarity with each one of the existing classes is below 50%, then a new class is created.
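For illustration only, the decision rule described above may be sketched as follows, assuming that the comparison step yields one similarity score per existing class; the representation of the scores as a NumPy array is an assumption of the sketch.

```python
# Minimal sketch of the class-assignment rule: if the similarity with every
# existing class is below 50%, a new class is created.
import numpy as np

NEW_CLASS = -1  # sentinel meaning "create a new class"

def assign_class(similarities: np.ndarray, threshold: float = 0.5) -> int:
    """Return the index of the most similar catalogue class, or NEW_CLASS."""
    best = int(np.argmax(similarities))
    return best if similarities[best] >= threshold else NEW_CLASS

print(assign_class(np.array([0.1, 0.7, 0.2])))   # -> 1 (existing class)
print(assign_class(np.array([0.3, 0.4, 0.3])))   # -> -1 (create new class)
```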
[0040] In one embodiment, the objective of the present invention is to analyse the micro-Doppler signatures of the radar targets without human intervention and to classify the targets in real-time.
[0041] The method of the present invention disclosed herein is based on some basic assumptions, which have been derived from the study of the existing literature on the methods used to obtain the micro-Doppler signatures of radar targets. These assumptions are as follows:
A1. Each of the methods used for obtaining micro-Doppler signatures of radar targets produces signatures which are periodic in nature with respect to time and the micro-motion in the targets.
A2. The micro-Doppler signatures are created and made available to the method in real-time while the operation of the radar is not hampered in terms of performance.
A3. The signatures corresponding to motions other than micro-motion or those corresponding to stationary targets are similar in nature and therefore are avoidable.
[0042] Figure 1 shows a flowchart of the method involving the reception of the target echo by the radar and the processing of the echo with respect to the catalogue of the classifier according to an exemplary implementation of the present disclosure/ invention.
[0043] In FIG. 1, the radar target echo is received in module 101. In module 102, the detection of the radar targets is performed using CFAR (Constant False Alarm Rate). In module 103, the relevant micro-Doppler signature measurement tools are computed; these tools include the STFT (Short-Time Fourier Transform) and the bispectrum and bicoherence. These tools generate signatures only when the outputs of the measurements are accumulated over time; this accumulation is performed in module 104, and the time period over which the accumulation is done is T. In module 105, the signature is transformed into a pixel image which may be used later in the method for classifying the signatures. The classification of the targets is achieved by comparing the test image with the images in the catalogue. This comparison is performed in module 106, where the neural network techniques elaborated in FIG. 3 are employed. Once a new test image is generated in the system, the issue of updating the existing catalogue arises, as the test image may come from a class or type of target which the radar has not encountered before. The comparison of the test image with the items in the catalogue performed in the previous steps helps in this updating, as module 107 illustrates: this module checks for the presence, in the catalogue, of the target class corresponding to the test image. If the answer is yes, then the target is assigned a class, as shown in module 109. If the answer is no, then the method passes to module 108, which, as elaborated in FIGS. 4 and 5, deals with the creation of a new target class corresponding to the test image in the catalogue and the declaration of that creation.
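For illustration only, the control flow of FIG. 1 may be sketched as follows. Every helper in this sketch is a simplified, hypothetical stand-in (simulated echoes, an energy test in place of a full CFAR, an FFT magnitude in place of the STFT/bicoherence tools, and a nearest-catalogue-entry comparison in place of the DCNN classifier); only the ordering of modules 101 to 109 reflects the flowchart.

```python
# Self-contained sketch of the control flow of FIG. 1 (modules 101-109).
# All helpers are simplified stand-ins assumed for illustration only.
import numpy as np

def receive_echo(n=256):                        # module 101: simulated target echo
    return np.random.randn(n) + 1j * np.random.randn(n)

def cfar_detect(echo, factor=1.0):              # module 102: crude energy test stand-in
    return np.mean(np.abs(echo) ** 2) > factor

def micro_doppler(echo):                        # module 103: magnitude spectrum stand-in
    return np.abs(np.fft.fft(echo))

def to_pixel_image(signatures):                 # module 105: stack and scale to 0-255
    s = np.stack(signatures).astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return (255 * s).astype(np.uint8)

def compare_with_catalogue(image, catalogue):   # modules 106-107: nearest catalogue entry
    scores = {name: 1.0 - np.abs(image.astype(float) - ref.astype(float)).mean() / 255.0
              for name, ref in catalogue.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def classify_target(catalogue, T=16):
    signatures = []
    while len(signatures) < T:                  # module 104: accumulate over period T
        echo = receive_echo()
        if cfar_detect(echo):
            signatures.append(micro_doppler(echo))
    image = to_pixel_image(signatures)
    label, similarity = compare_with_catalogue(image, catalogue)
    if similarity >= 0.5:
        return label                            # module 109: assign existing class
    catalogue["new-class"] = image              # module 108: create a new class
    return "new-class"

# usage (illustrative): a one-entry catalogue built from simulated echoes
catalogue = {"human-moving":
             to_pixel_image([micro_doppler(receive_echo()) for _ in range(16)])}
print(classify_target(catalogue))
```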
[0044] In one embodiment, the present invention relates to a method for classifying a target from micro-Doppler signatures of the target in real-time, the method comprising: determining a micro-Doppler signature of the target from echo signals received from the target (103), accumulating the determined micro-Doppler signatures over a pre-determined time period (T) (104), converting the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values (105), comparing the pixel image of the target with one or more pre-stored images in a catalogue (106), determining a class of the pixel image of the target based on the comparison (107) and, in the event of a successful determination, assigning the class to the target (109).
[0045] In the event of an unsuccessful determination, the method comprises creating a new class of the pixel image of the target and assigning the new class to the target (108).
[0046] The method, prior to the step of determining the micro-Doppler signature, comprises: receiving echo signals reflected from the target in response to electromagnetic waves transmitted by a radar towards the target and detecting the target based on a constant false alarm rate (CFAR) value derived from the echo signals.
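For illustration only, the following is a minimal sketch of a cell-averaging CFAR (CA-CFAR) detector, given as one common way of realising the CFAR-based detection step, assuming NumPy; the present invention does not prescribe a particular CFAR variant, and the training-cell count, guard-cell count and scaling factor below are illustrative assumptions.

```python
# Minimal sketch of cell-averaging CFAR detection over echo power samples.
import numpy as np

def ca_cfar(power: np.ndarray, num_train: int = 16, num_guard: int = 4,
            scale: float = 5.0) -> np.ndarray:
    """Return a boolean detection mask over the power samples."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[i - num_guard - num_train : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = np.mean(np.concatenate([lead, lag]))   # local noise estimate
        detections[i] = power[i] > scale * noise       # adaptive threshold
    return detections

# usage: power = |echo|^2 of the received samples (simulated here)
echo_power = np.abs(np.random.randn(512) + 1j * np.random.randn(512)) ** 2
echo_power[200] += 60.0                                # injected strong target
print(np.where(ca_cfar(echo_power))[0])
```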
[0047] In the method, the micro-Doppler signature is obtained using a tool selected from the short-time Fourier transform (STFT), bispectrum and bicoherence.
[0048] The method further comprises creating a catalogue with a plurality of pre-stored images corresponding to the respective micro-Doppler signatures of respective targets.
[0049] The new class is created using a deep neural architecture model. The method comprises: training the deep neural architecture model, testing the trained deep neural architecture model with one or more pre-stored images in the catalogue and creating the new class in the event of failure of the deep neural architecture model during testing and training the deep neural architecture model with the new class.
[0050] Another aspect of the present invention relates to a system for classifying a target from micro-Doppler signatures of the target in real-time, the system comprising at least a processing device configured to: determine a micro-Doppler signature of the target from echo signals received from the target, accumulate the determined micro-Doppler signatures over a pre-determined time period (T), convert the accumulated micro-Doppler signatures into a pixel image of the target with valid pixel values, compare the pixel image of the target with one or more pre-stored images in a catalogue, determine a class of the pixel image of the target based on the comparison, in the event of a successful determination, assign the class to the target and, in the event of an unsuccessful determination, create a new class of the pixel image of the target and assign the new class to the target.
[0051] The processing device is configured to: receive echo signals reflected from the target in response to electromagnetic waves transmitted by a radar towards the target and detect the target based on a constant false alarm rate (CFAR) value derived from the echo signals.
[0052] The processing device is further configured to create the new class through a deep neural network. The processing device is further configured to: train the deep neural network model, test the trained deep neural network model with one or more pre-stored images in the catalogue, and create the new class in the event of failure of the deep neural network model during testing and train the deep neural network model with the new class.
[0053] The method and system of the present invention use the Inception-v3 Deep Convolutional Neural Network (DCNN). The method and system incorporate 8 different classes of radar targets, as follows: (1) human stationary, (2) human moving, (3) human stationary with weapon, (4) human moving with weapon, (5) micro-drone static, (6) micro-drone towards, (7) micro-drone away and (8) empty room.
[0054] Figure 2 shows the STFT signatures of the different classes. As can be seen, the signatures within each class share a degree of similarity. These received target signatures are used to train the model of the present invention. In the testing phase, the method and system of the present invention test signature images and predict the class.
[0055] The experimental setup of the present invention is given below. The training model is implemented over Inception-v3, and a learning rate of 0.005 is used.
[0056] In the experiment of the present invention, the number of iterations (num_iterations) = Number_of_Train_Samples / batch_size, batch_size = 32, img_size = 299 x 299 x 3 (3 being the number of channels) and validation size = 0.25. There are 8 classes in total: the human stationary class consists of 17693 images, the human moving class of 9431 images, the human stationary with weapon class of 13496 images, the human moving with weapon class of 11909 images, the micro-drone static class of 3528 images, the micro-drone towards class of 1210 images, the micro-drone away class of 1292 images and the empty room class of 1047 images.
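For illustration only, the experimental configuration recited above (Inception-v3 backbone, learning rate 0.005, batch_size = 32, 299 x 299 x 3 inputs, validation size 0.25, 8 classes) may be sketched as follows, assuming TensorFlow/Keras. The catalogue directory layout, the choice of the SGD optimizer, the epoch count and the saved-model filename are assumptions of this sketch and not part of the specification; training with weights=None reflects the use of authentic data rather than pre-trained weights.

```python
# Sketch of the experimental setup; hyperparameters follow the text above,
# everything else (paths, optimizer, epochs) is an illustrative assumption.
import tensorflow as tf

IMG_SIZE, BATCH_SIZE, NUM_CLASSES = (299, 299), 32, 8
LEARNING_RATE, VALIDATION_SPLIT = 0.005, 0.25

# hypothetical folder with one sub-folder of signature images per class
train_ds = tf.keras.utils.image_dataset_from_directory(
    "catalogue/", image_size=IMG_SIZE, batch_size=BATCH_SIZE, seed=42,
    validation_split=VALIDATION_SPLIT, subset="training", label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "catalogue/", image_size=IMG_SIZE, batch_size=BATCH_SIZE, seed=42,
    validation_split=VALIDATION_SPLIT, subset="validation", label_mode="categorical")

# batches per epoch, approximately Number_of_Train_Samples / batch_size
num_iterations = train_ds.cardinality().numpy()

base = tf.keras.applications.InceptionV3(include_top=False, weights=None,
                                         input_shape=(299, 299, 3), pooling="avg")
model = tf.keras.Sequential([tf.keras.layers.Rescaling(1.0 / 255), base,
                             tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=LEARNING_RATE),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)   # modules 301-303 of FIG. 3
model.save("micro_doppler_classifier.keras")             # module 304: save for testing
```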
[0057] Figure 2 shows the image signatures for the different classes according to an exemplary implementation of the present disclosure/invention.
[0058] Figure 3 shows a flowchart of the method of comparing the test image obtained from the radar with the images available in the catalogue according to an exemplary implementation of the present disclosure/invention.
[0059] In the training phase shown in FIG. 3, the data are collected and stored as per their class, as shown in module 301. As discussed above, the different data pre-processing tasks are performed in module 302. Further, the classification model is prepared and the training data are fed to the model, as presented in module 303. In module 304, the trained model is saved for testing purposes.
[0060] Figures 4 and 5 show the flowchart of testing the DCNN model according to an exemplary implementation of the present disclosure/invention.
[0061] Figures 4 and 5 show the flowchart of the method of creating a new target class (and detecting existing classes) and, if necessary, an image corresponding to the present radar detection, and of adding them to the catalogue, thereby updating the catalogue in real-time. In FIG. 4, the flowchart represents the framework of the testing phase of the model of the present invention. In module 401, the method pre-processes the test data as needed.
[0062] Generally, in this module, the test data are prepared in the same way as the training data so that the model is able to read and process them. In the model testing block represented by module 402, the test data are fed to the saved model. The model then performs the classification task in module 403. Finally, a suitable class label is assigned to the test data in module 404. This label indicates the potential class to which the test data belong.
[0063] In FIG. 5, a case is presented in which the test target does not belong to the existing list of targets, as represented in module 501. In module 502, the model fails to recognize that target. The test images whose class the model is not able to identify are then saved, as shown in module 503. If possible, more data are also collected from that unknown target. The model is then retrained with the newly added class dataset, as shown in module 504, after which it is much better at recognizing that specific unknown object. Through this approach the present invention achieves high accuracy on the test data; in the case of a human moving with a weapon, for example, an accuracy of around 85 percent could be seen, which is good for this situation. The accuracy can be increased further by using a better dataset for the classification task.
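For illustration only, the testing and catalogue-update loop of FIGS. 4 and 5 (modules 401 to 404 and 501 to 504) may be sketched as follows, assuming TensorFlow/Keras and a saved model such as the one in the training sketch above; the confidence threshold, directory names and file format are assumptions, and the retraining step is indicated by a comment because the output layer must be rebuilt for the enlarged set of classes.

```python
# Minimal sketch of the catalogue-update loop: test images the saved model
# cannot classify with sufficient confidence are stored under a new class
# folder; paths, threshold and file format are illustrative assumptions.
from pathlib import Path

import numpy as np
import tensorflow as tf

def test_and_update(model_path, test_images, catalogue_dir="catalogue/",
                    new_class_name="unknown_target", threshold=0.5):
    model = tf.keras.models.load_model(model_path)            # module 402: load saved model
    probs = model.predict(test_images)                        # module 403: classification
    unknown = probs.max(axis=1) < threshold                   # modules 501-502: model fails
    if unknown.any():
        new_dir = Path(catalogue_dir) / new_class_name        # module 503: save unknown images
        new_dir.mkdir(parents=True, exist_ok=True)
        for i, img in enumerate(test_images[unknown]):
            tf.keras.utils.save_img(str(new_dir / f"unknown_{i}.png"), img)
        # module 504: retrain on the updated catalogue; the final Dense layer must be
        # rebuilt with one extra output before re-running the training sketch above.
    return probs.argmax(axis=1)                                # module 404: assign class labels

# usage (hypothetical): test_images has shape (N, 299, 299, 3)
# labels = test_and_update("micro_doppler_classifier.keras", test_images)
```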
[0064] In one embodiment, the present invention performs the classification of the radar targets automatically without any human intervention which makes the invention unique and also contributes to the “smartness” of the radar.
[0065] In one embodiment, the present invention is useful in any radar where target classification is necessary and the catalogue for all the relevant target classes is available.
[0066] The main contributions of the present invention method are listed as below:
1. Use of incoming Short-Time Fourier Transform (STFT) images of the radar targets. These returned signatures (images) are used for training the selected DCNN architecture, while at test time the identification task is carried out using only the received STFT signatures of the target. The method of the present invention uses 8 different classes of objects to train the classification model.
2. The method of the present invention uses the Inception-v3 deep neural architecture to train on the custom dataset and to perform the classification.
3. The method of the present invention uses a sufficient amount of training data which is authentically obtained from real-life experiments. Generative Adversarial Networks (GANs) could be used instead, but because this target classification task is crucial for security reasons, it is always better to use authentically generated data.
[0067] In one embodiment, the present invention method uses the micro-Doppler signatures of the radar targets and analyzes them with respect to the catalogue in order to classify the radar targets automatically without human intervention.
[0068] Figures are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated, while others may be minimized. Figures illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
[0069] In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
[0070] It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively.