
System And Method To Increase Accuracy In Gait Classifiers Using An Ensemble Technique

Abstract: The present disclosure relates to a method (100) for recognizing human biometric signatures using walking-style patterns from a sequence of image frames. The method classifies the person based on distance vector signals generated using the bounding boxes. The gait cycle of the person is extracted by applying a Fast Fourier Transform to four 1D signals, one for each of the four projections. The reclassify module compares the results from n different classification algorithms and initiates the reclassification process upon a mismatch of the classification results. The reclassification is monitored using a re-classify counter. A pre-trained result comparator is trained to learn weights corresponding to each algorithm for the specialized scenario. Hence, the accuracy of prediction is increased because of the positive contribution from all the algorithms.


Patent Information

Application #: 202341051679
Filing Date: 01 August 2023
Publication Number: 06/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

Bharat Electronics Limited
Corporate Office, Outer Ring Road, Nagavara, Bangalore - 560045, Karnataka, India.

Inventors

1. SRAVANI, Peram
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
2. S, Sreenivas
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
3. MALLIPEDDI, Ravi Prakash Reddy
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
4. KUMAR, Santosh
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.

Specification

TECHNICAL FIELD
[0001] The present disclosure relates, in general, to an artificial intelligence (AI) classifier, and more specifically, relates to a system and method for increasing the accuracy of the classification.

BACKGROUND
[0002] A significant field of research is person detection using visual cues. Fingerprint-, face-, and iris-based person detection have become very popular because of the vast array of uses for each, but the level of cooperation, input quality, and distance all affect how well they perform. Due to its uniqueness, robustness, and security, gait recognition has earned a great deal of appeal. A person can be identified even in low-resolution videos by extracting gait features.
[0003] It is challenging to replicate human gait, and it would be concerning if someone tried to mimic another person's walking style in public. An individual can be recognized from their unique stride characteristics even without their cooperation. In gait recognition, distinctive features that vary with a person's walking style are extracted from video frames. By extracting the individual features of each frame in temporal space, the movement of these features in spatial space is determined. To identify a person, the movement in the feature space is matched against the existing data for that person. Despite extensive research, analyzing cross-view datasets, and training with one set while testing with another, remain challenging.
[0004] An example of such a system is recited in US 10223582 B2, which discloses a gait recognition technology based on deep learning. In this invention, a matching model is used to compare the gait energy image (GEI) of the person whose identity is to be determined with the registered GEIs of each known person. During the training phase of the matching model, gait energy images are extracted from the training gait videos, and pairs of GEIs are repeatedly fed to the matching model until it converges. During recognition, the GEI of the video is compared with the registered GEIs to calculate a similarity index, and the person is identified based on that index.
[0005] Another example is recited in US 9633268 B1, which discloses a gait recognition technology in which a convolutional neural network is used to extract features from images. These features are compared with the gait features of each known person in a matching library, and the degree of similarity is calculated to match the person.
[0006] Another example is recited in US 9589365 B2, which explains a mechanism for extracting features that are not affected by camera angles, so the art is not susceptible to periodic variations in camera angle. First, the key point of the region is extracted. Second, the motion vectors surrounding the key point are extracted. Finally, a rotation-invariant feature of the key point is extracted. The motion vectors used to classify a person are also adjusted to be rotation-invariant. As a result, the classification results are not affected by camera angles. The motivating idea of this art is taken into consideration in the current invention.
[0007] Yet another example is recited in US 2004/0028503 A1, which explains the recognition of human walking style, referred to as gait, in three phases: pre-processing, feature extraction, and classification. The invention provides an idea of how to segment moving objects and track a person in the pre-processing stage. The feature extraction phase extracts periodicity, step length, cadence, and stride. The classification phase consists of a classifier algorithm that is pre-trained to recognize a person. The present invention works in similar phases: pre-processing with bounding boxes, feature extraction with distance vectors and stride length, and finally classification of the person using a 3D-CNN and the Mahalanobis distance measure.
[0008] It is desired to overcome the drawbacks, shortcomings, and limitations associated with existing solutions, and develop a unique classification result fusion technique for two different algorithms to predict the person based on their walking style.

OBJECTS OF THE PRESENT DISCLOSURE
[0009] An object of the present disclosure relates, in general, to an AI classifier, and more specifically, relates to a system and method for increasing the accuracy of the classification.
[0010] Another object of the present disclosure is to provide a system for accurately estimating the walking style of a person using existing classifiers and a unique technique to assign weights to the results from the classifiers.
[0011] Another object of the present disclosure is to provide a system that uses a fusion technique which learns based on the advantage of giving more weightage to the algorithm that performs better for the specialized scenario.
[0012] Another object of the present disclosure is to provide a system that uses a nonlinear result fusion learning algorithm which updates its weight for the classification algorithm based on positive and negative feedback from classifier algorithms.
[0013] Another object of the present disclosure is to provide a system that combines both distance vector-based estimation of gait properties and 3D-convolutional neural network (CNN)-based estimation of gait properties. Hence, recognition is not affected significantly by noise.
[0014] Another object of the present disclosure is to provide a system that uses a reclassification module with reclassification counters to avoid any classification errors.
[0015] Yet another object of the present disclosure is to provide a system that can be generalized with any number of classification algorithms where the results are a weighted average of those multiple classification algorithms.

SUMMARY
[0016] The present disclosure relates, in general, to an AI classifier, and more specifically, relates to a system and method for increasing the accuracy of the classification. The main objective of the present disclosure is to overcome the drawbacks, limitations, and shortcomings of the existing systems and solutions by providing a gait-based method of recognising people by their walking patterns.
[0017] Gait has biometric significance since a subject cannot trick recognition systems in the way possible with other biometric systems such as fingerprint, iris, or facial recognition. With only one classifier, present techniques for identifying walking styles have a significant error rate. So, in the proposed invention, the output of a 3D-CNN-based classifier is combined with a classifier based on a signal processing technique, utilising a pre-trained result comparator module and a reclassify module. Two rounds of training are used to further minimise error rates: the classifier algorithms are learned in the first phase, and the pre-trained result comparator module is trained in the second. The images utilised to train the result comparator module also resemble specific scenarios. As a result, the module is taught to determine which algorithm performs better in a specific case.
[0018] The present disclosure relates to a method for recognizing biometric signatures of a subject using walking style patterns. The method involves generating walking motion signals of the subject by employing bounding box outlines, which are then converted into grayscale images. The method includes extracting a set of features from these grayscale images, applying a first computation to derive distance projections from the bounding boxes, and comparing the distances from pre-trained distance vectors of the subject with the extracted distance vectors during classification. A second computation is utilized to classify the location of body parts of the subject in the spatial dimension and their corresponding movement in the temporal dimension. A fusion approach is implemented to perform reclassification if the results of the first and second computations do not match. Additionally, the results from both computations are trained and classified based on weighted probabilities to enhance accuracy by determining the effective computation for specific conditions.
[0019] The method further encompasses signal processing for the first computation, and a 3D convolutional neural network (3D-CNN) model for the second computation. The set of features extracted pertains to distance projections from bounding boxes, stride length, step angles, step width, step length, or any combination thereof, and serves to improve classification accuracy. The fusion approach comprises a reclassify module and a pretrained result comparator, where the reclassify module resolves discrepancies by reclassifying results when the outputs of the first and second computations are inconsistent. The pretrained result comparator employs a probability distribution algorithm to train based on learning algorithm accuracy, evaluating similar input images considering factors such as camera angles, clothing, and bag carrying, to determine the effective approach for specific conditions. The pretrained result comparator includes a two-phase training process, with the first phase training learning algorithms with dissimilar data and the second phase utilizing the results from the first phase to train the fusion approach.
[0020] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The following drawings form part of the present specification and are included to further illustrate aspects of the present disclosure. The disclosure may be better understood by reference to the drawings in combination with the detailed description of the specific embodiments presented herein.
[0022] FIG. 1 illustrates an exemplary flow chart of classifier algorithms with a fusion strategy, in accordance with an embodiment of the present disclosure.
[0023] FIG. 2 illustrates an exemplary flow chart showing training phase one of both the signal processing algorithm and 3D-CNN algorithm, in accordance with an embodiment of the present disclosure.
[0024] FIG. 3 illustrates an exemplary flow chart showing training phase two of the result fusion algorithm, in accordance with an embodiment of the present disclosure.
[0025] FIG. 4 illustrates an exemplary block diagram showing the working of a pretrained result comparator with trained algorithm weights, in accordance with an embodiment of the present disclosure.
[0026] FIG. 5 illustrates an exemplary flow chart illustrating phase two of the learning algorithm with positive and negative feedback, in accordance with an embodiment of the present disclosure.
[0027] FIG. 6 illustrates an exemplary NVIDIA Jetson-based processing element which uses data from multiple cameras to derive a classification result, in accordance with an embodiment of the present disclosure.
[0028] FIG. 7 illustrates an exemplary flow chart of a method for recognizing biometric signatures of a subject using walking style patterns, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[0029] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0030] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0031] The present disclosure relates, in general, to an AI classifier, and more specifically, relates to a system and method for increasing the accuracy of the classification.
[0032] The present disclosure is related to a gait-based method of recognising people by their walking patterns. Gait has biometric significance since a subject cannot trick recognition systems in the way possible with other biometric systems such as fingerprint, iris, or facial recognition. With only one classifier, present techniques for identifying walking styles have a significant error rate. So, in this invention, we combine the output of a 3D-CNN-based classifier with a classifier based on a signal processing technique, utilising a pre-trained result comparator module and a reclassify module. Two rounds of training are used to further minimise error rates; the classifier algorithms are learned in the first phase, and the pre-trained result comparator module is trained in the second. The images utilised to train the result comparator module also resemble specific scenarios. As a result, the module is taught to determine which algorithm performs better in a specific case. The present disclosure can be described in enabling detail in the following examples, which may represent more than one embodiment of the present disclosure.
[0033] The advantages achieved by the system of the present disclosure will be clear from the embodiments provided herein. The system for accurately estimating the walking style of a person uses existing classifiers and a unique technique to assign weights to the results from the classifiers. The system uses a fusion technique which learns by giving more weightage to the algorithm that performs better for the specialized scenario. The system uses a nonlinear result fusion learning algorithm which updates its weight for each classification algorithm based on positive and negative feedback from the classifier algorithms. The system combines both distance vector-based estimation of gait properties and 3D-CNN-based estimation of gait properties. Hence, recognition is not affected significantly by noise.
[0034] The system uses a reclassification module with reclassification counters to avoid any classification errors. Further, the system can be generalized with any number of classification algorithms where the results are a weighted average of those multiple classification algorithms. The description of terms and features related to the present disclosure shall be clear from the embodiments that are illustrated and described; however, the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents of the embodiments are possible within the scope of the present disclosure. Additionally, the invention can include other embodiments that are within the scope of the claims but are not described in detail with respect to the following description.
[0035] FIG. 1 illustrates an exemplary flow chart of classifier algorithms with fusion strategy, in accordance with an embodiment of the present disclosure.
[0036] Referring to FIG. 1, the overview flowchart of the entire system consists of both the signal processing algorithm and the 3D-CNN algorithm. The signal processing algorithm classifies the person based on distance vector signals generated using the bounding boxes.
[0037] At block 102, the method performs background estimation, motion segmentation, human tracking, silhouette extraction, and bounding-box placement.
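As a rough illustration of block 102, the following sketch segments a moving person and places a bounding box around the largest silhouette. It is a minimal Python/OpenCV example under our own assumptions: the MOG2 background subtractor, the threshold value, and the input file name are illustrative choices, not requirements of the disclosure.

import cv2

cap = cv2.VideoCapture("walk.mp4")            # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                              # motion segmentation
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        person = max(contours, key=cv2.contourArea)  # assume largest blob is the person
        x, y, w, h = cv2.boundingRect(person)        # place the bounding box
        silhouette = mask[y:y + h, x:x + w]          # binary silhouette inside the box
cap.release()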
[0038] At block 104, the gait cycle of the person is extracted by applying a Fast Fourier Transform (FFT) to four 1D signals, one for each of the four projections. The peak in the frequency spectrum is the gait cycle frequency of the corresponding person.
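A minimal NumPy sketch of this step, under our own assumptions (the 2 Hz synthetic signal and 25 fps frame rate are illustrative; in the system the four signals would come from the bounding-box projections):

import numpy as np

def gait_frequency(signal, fps):
    # Dominant frequency (Hz) of one 1D projection signal.
    signal = signal - np.mean(signal)          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

# Synthetic check: a 2 Hz periodic "projection" sampled at 25 fps.
t = np.arange(0, 10, 1.0 / 25)
top_projection = 40 + 5 * np.sin(2 * np.pi * 2.0 * t)
print(gait_frequency(top_projection, fps=25))  # ~2.0 Hz

Applying this to all four projection signals and combining the four estimates (for example, by their median) gives a robust gait cycle frequency.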
[0039] At block 106, the classification consists of Mahalanobis distance estimation between the pre-trained gait cycle of the corresponding person and the gait cycle estimated from the input image frames. At block 108, the signal processing algorithm uses an eigenspace transformation based on principal component analysis (PCA), which is applied to reduce the dimensionality of the input feature space. The second algorithm uses a 3D-CNN to classify the silhouette images, which gives better classification results under variations in a person's clothing, carrying conditions, and cross-view angles.
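The following sketch illustrates the eigenspace transformation and Mahalanobis matching of blocks 106 and 108. The random gallery, the choice of k = 8 components, and the single-template matching are our assumptions for illustration; the disclosure does not fix these values.

import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
gallery = rng.normal(size=(200, 40))    # stand-in for trained gait feature vectors

# Eigenspace transformation (PCA): project onto the top-k principal components.
mean = gallery.mean(axis=0)
_, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
components = vt[:8]                     # k = 8 components kept (an assumption)
reduced = (gallery - mean) @ components.T

cov_inv = np.linalg.inv(np.cov(reduced, rowvar=False))
template = reduced.mean(axis=0)         # one per-person template in practice

probe = rng.normal(size=40)             # gait cycle estimate from input frames
probe_reduced = (probe - mean) @ components.T
print(mahalanobis(probe_reduced, template, cov_inv))  # smaller = better match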
[0040] At block 110, the fusion algorithm consists of a reclassify module and a pre-trained result comparator module. The reclassify module compares the results from n different classification algorithms, where n is two, and initiates the reclassification process upon a mismatch of the classification results. The reclassification is monitored using the re-classify counter; if reclassification fails a delta number of times, control passes to the pre-trained result comparator module, which fuses the results from the algorithms using weights trained for the specialized scenario. At block 112, the pre-trained result comparator is trained to learn weights corresponding to each algorithm for the specialized scenario. Hence, the accuracy of prediction is increased because of the positive contribution from all the algorithms.
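A compact sketch of the control flow of blocks 110 and 112, with assumed function names and weight values (the disclosure specifies the behaviour, not this code):

import numpy as np

def fuse(predict_sp, predict_cnn, frames, weights, delta=3):
    # Reclassify on mismatch; after delta failed attempts, fall back to the
    # pre-trained result comparator, i.e. a weighted fusion of probabilities.
    for _ in range(delta):
        label_sp, prob_sp = predict_sp(frames)     # signal-processing classifier
        label_cnn, prob_cnn = predict_cnn(frames)  # 3D-CNN classifier
        if label_sp == label_cnn:                  # agreement: accept the label
            return label_sp
    fused = weights["sp"] * prob_sp + weights["cnn"] * prob_cnn
    return int(np.argmax(fused))

# Stand-in classifiers returning (label, per-class probabilities).
weights = {"sp": 0.4, "cnn": 0.6}       # learned in training phase two (assumed values)
sp = lambda f: (2, np.array([0.1, 0.2, 0.7]))
cnn = lambda f: (1, np.array([0.2, 0.5, 0.3]))
print(fuse(sp, cnn, frames=None, weights=weights))  # fused prediction: class 2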
[0041] The system generates walking motion signals using bounding box silhouettes and greyscale conversion. The system considers various features, including distance projections from bounding boxes, stride length, right and left step angles, left and right step widths, and step length. The accuracy of classification is improved relative to existing arts because of the increased number of features. The system includes two algorithms. The first algorithm extracts distance projections from the bounding boxes; during classification, the Mahalanobis distance between the pre-trained distance vectors of the persons and the extracted distance vectors is compared.
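One plausible way to compute the distance projections from a binary silhouette inside its bounding box (a sketch; the exact projection definition is our assumption):

import numpy as np

def projection_distances(silhouette):
    # Distances from each side of the bounding box to the silhouette boundary:
    # one 1D vector per side (top, bottom, left, right).
    fg = silhouette > 0
    rows, cols = fg.shape
    top, bottom, left, right = [], [], [], []
    for c in range(cols):
        idx = np.flatnonzero(fg[:, c])
        top.append(idx[0] if idx.size else rows)
        bottom.append(rows - 1 - idx[-1] if idx.size else rows)
    for r in range(rows):
        idx = np.flatnonzero(fg[r, :])
        left.append(idx[0] if idx.size else cols)
        right.append(cols - 1 - idx[-1] if idx.size else cols)
    return [np.asarray(v) for v in (top, bottom, left, right)]

demo = np.zeros((6, 4), dtype=np.uint8)
demo[2:5, 1:3] = 1                      # a small rectangular "silhouette"
print([v.tolist() for v in projection_distances(demo)])

Summarizing each vector per frame (for instance, by its mean) yields the four 1D time signals consumed by the FFT step of block 104.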
[0042] The method can include a 3D-CNN model as a classifier to capture the location of human body parts in the spatial dimension and the smooth movement of those parts in the temporal dimension. The method provides a fusion algorithm which consists of a reclassify module and a pre-trained result comparator, where reclassification is done when the results of the two learning algorithms do not match. The method provides a result comparator which is pre-trained to take the results of the n corresponding algorithms, where n here is two, and classify using the weighted probabilities of the classification results from the learning algorithms. A probability distribution algorithm trains the result comparator based on the accuracy of the learning algorithms: similar sets of input images corresponding to camera angles, clothing, and bag carrying are evaluated, which determines which algorithm classifies better in the specialized scenario. Further, the method uses two training phases: the first corresponding to training the learning algorithms with dissimilar data, and the second corresponding to training the fusion algorithm based on the learning algorithms trained in the first phase.
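A minimal PyTorch sketch of such a 3D-CNN classifier over stacked silhouette frames. The layer sizes and clip shape are our assumptions; the disclosure only requires that spatial location and temporal movement both be captured:

import torch
import torch.nn as nn

class Gait3DCNN(nn.Module):
    # Input shape (N, 1, T, H, W): T silhouette frames of H x W pixels.
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                    # halves time, height and width
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Gait3DCNN(num_classes=10)
clip = torch.randn(1, 1, 16, 64, 64)    # 16 silhouette frames of 64 x 64 pixels
print(model(clip).shape)                # torch.Size([1, 10]) per-person scores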
[0043] Thus, the present invention overcomes the drawbacks, shortcomings, and limitations associated with existing solutions, and provides a system for accurately estimating the walking style of a person using existing classifiers and a unique technique to assign weights to the results from the classifiers. The system uses a fusion technique which learns by giving more weightage to the algorithm that performs better for the specialized scenario. The system uses a nonlinear result fusion learning algorithm which updates its weight for each classification algorithm based on positive and negative feedback from the classifier algorithms. The system combines both distance vector-based estimation of gait properties and 3D-CNN-based estimation of gait properties. Hence, recognition is not affected significantly by noise.
[0044] The system uses a reclassification module with reclassification counters to avoid any classification errors. Further, the system can be generalized with any number of classification algorithms where the results are a weighted average of those multiple classification algorithms.
[0045] FIG. 2 illustrates an exemplary flow chart showing training phase one of both the signal processing algorithm and the 3D-CNN algorithm, in accordance with an embodiment of the present disclosure. The training of the learning algorithms using the sample dataset is disclosed. Dissimilar training frames can be utilized corresponding to the specialized scenario.
[0046] FIG. 3 illustrates an exemplary flow chart showing training phase two of the result fusion algorithm, in accordance with an embodiment of the present disclosure.
[0047] At block 302, the training of the result fusion algorithm with the pre-trained learning algorithms is disclosed. At block 304, the training of the result fusion algorithm uses a similar set of input images corresponding to the specialized scenario.
[0048] At block 306, if the prediction results from the signal processing algorithm are equal to the input data, training issues positive feedback for the signal processing algorithm, and negative feedback otherwise. Similarly, if the prediction results from the 3D-CNN algorithm are equal to the input data, training issues positive feedback for the 3D-CNN algorithm, and negative feedback otherwise.
[0049] At block 308, if the prediction results from both the signal processing algorithm and the 3D-CNN algorithm result in negative feedback, then the minimum of the weighted prediction values is considered to set positive feedback for the corresponding algorithm.
[0050] At block 310, if the prediction results from both the signal processing algorithm and the 3D-CNN algorithm result in positive feedback, then the maximum of the weighted prediction values is taken into account to set positive feedback for the corresponding algorithm.
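The decision logic of blocks 306-310 can be summarised in a few lines of Python (function and argument names are ours; the confidences stand in for the weighted prediction values):

def assign_feedback(true_label, pred_sp, pred_cnn, conf_sp, conf_cnn):
    # Decide which algorithm receives positive feedback in training phase two.
    ok_sp, ok_cnn = pred_sp == true_label, pred_cnn == true_label
    if ok_sp != ok_cnn:                        # exactly one correct (block 306)
        return "sp" if ok_sp else "cnn"
    weighted = {"sp": conf_sp, "cnn": conf_cnn}
    if not ok_sp:                              # both wrong: minimum weighted value (block 308)
        return min(weighted, key=weighted.get)
    return max(weighted, key=weighted.get)     # both right: maximum weighted value (block 310)

print(assign_feedback(3, 3, 1, conf_sp=0.8, conf_cnn=0.6))  # -> "sp"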
[0051] FIG. 4 illustrates an exemplary block diagram showing the working of the pre-trained result comparator with trained algorithm weights, in accordance with an embodiment of the present disclosure.
[0052] FIG. 4 shows the architecture of the pre-trained result comparator module, where the algorithm is generalized for n results. Each classifier algorithm provides its classification results to the pre-trained result comparator.
[0053] The classifier matrix is transposed using the equation

C = [R1 R2 R3 … Rn]^T

where Ri is the classification result vector of the i-th classifier.
[0054] The fusion matrix is obtained as the product of the transposed classifier matrix with the pre-trained weights vector W:

Fusion_matrix = C * W

[0055] The fusion result is obtained from the fusion matrix using the equation below:

Fusion_result = max{ Fusion_matrix }
[0056] The fusion result is a scalar value which predicts the person, with a confidence given by its fused weight.
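In code, the fusion of [0053]-[0056] reduces to a weighted matrix product followed by a maximum (the probability values and weights below are illustrative only):

import numpy as np

R = np.array([[0.10, 0.70, 0.20],    # classifier 1: per-class probabilities
              [0.25, 0.60, 0.15]])   # classifier 2 (one row per algorithm, n = 2)
W = np.array([0.45, 0.55])           # pre-trained per-algorithm weights (assumed)

C = R.T                              # C = [R1 R2 ... Rn]^T, transposed classifier matrix
fusion_matrix = C @ W                # Fusion_matrix = C * W: fused per-class scores
fusion_result = fusion_matrix.max()  # Fusion_result = max{ Fusion_matrix }
print(int(fusion_matrix.argmax()), float(fusion_result))  # predicted person, confidence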
[0057] FIG. 5 illustrates an exemplary flow chart illustrating phase two of the learning algorithm with positive and negative feedback, in accordance with an embodiment of the present disclosure. FIG. 5 shows the training phase of the module to learn the weights for both the algorithms. This module takes results from both the algorithms and analyses which algorithm gives accurate results and assigns gain = "1" to the most accurate algorithm and gain = "0" to the remaining algorithm.
[0058] The weights are updated using the following equation:

NewWeight = ExistingWeight + (Gain − ExistingWeight) / LearningRate
[0059] Different scenarios like different camera angles, different clothing conditions and object carrying conditions can be assigned with different weights. The method helps to give more weightage to the accurate algorithm in result prediction, which improves the accuracy of person identification. The algorithm trains in non-linear progression, where it gets saturated at peaks of probability.
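A sketch of the update rule and its saturating behaviour (the learning-rate value is an assumption; Gain is the 0/1 feedback from FIG. 5):

def update_weight(existing, gain, learning_rate=10.0):
    # Move the weight a fraction of the way toward the 0/1 gain; repeated
    # feedback saturates the weight near 0 or 1 (non-linear progression).
    return existing + (gain - existing) / learning_rate

w = 0.5
for _ in range(20):                  # twenty rounds of positive feedback
    w = update_weight(w, gain=1.0)
print(round(w, 3))                   # ~0.939: approaching its saturation peak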
[0060] FIG. 6 illustrates an exemplary NVIDIA Jetson-based processing element which uses data from multiple cameras to derive a classification result, in accordance with an embodiment of the present disclosure. As depicted in FIG. 6, the algorithm 600 uses an NVIDIA Jetson-based processing element, i.e., processor 602, to classify a person based on the sequence of image frames from all the surveillance cameras.
[0061] FIG. 7 illustrates an exemplary flow chart of a method for recognizing biometric signatures of a subject using walking style patterns, in accordance with an embodiment of the present disclosure.
[0062] The method 700 for recognizing biometric signatures of a subject using walking style patterns involves, at block 702, generating walking motion signals of the subject by employing bounding box outlines, which are then converted into grayscale images. At block 704, the method includes extracting a set of features from these grayscale images. At block 706, a first computation is applied to derive distance projections from the bounding boxes, and at block 708, the distances from pre-trained distance vectors of the subject are compared with the extracted distance vectors during classification. At block 710, a second computation is utilized to classify the location of body parts of the subject in the spatial dimension and their corresponding movement in the temporal dimension. At block 712, a fusion approach is implemented to perform reclassification if the results of the first and second computations do not match. At block 714, the results from both computations are trained and classified based on weighted probabilities to enhance accuracy by determining the effective computation for specific conditions.
[0063] The method further encompasses signal processing for the first computation, and a 3D convolutional neural network (3D-CNN) model for the second computation. The set of features extracted pertains to distance projections from bounding boxes, stride length, step angles, step width, step length, or any combination thereof, and serves to improve classification accuracy. The fusion approach comprises a reclassify module and a pretrained result comparator, where the reclassify module resolves discrepancies by reclassifying results when the outputs of the first and second computations are inconsistent. The pretrained result comparator employs a probability distribution algorithm to train based on learning algorithm accuracy, evaluating similar input images considering factors such as camera angles, clothing, and bag carrying, to determine the effective approach for specific conditions. The pretrained result comparator includes a two-phase training process, with the first phase training learning algorithms with dissimilar data and the second phase utilizing the results from the first phase to train the fusion approach.
[0064] It will be apparent to those skilled in the art that the system 100 of the disclosure may be provided using some or all of the mentioned features and components without departing from the scope of the present disclosure. While various embodiments of the present disclosure have been illustrated and described herein, it will be clear that the disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the disclosure, as described in the claims.

ADVANTAGES OF THE PRESENT INVENTION
[0065] The present invention provides a system for accurately estimating the walking style of a person using existing classifiers and a unique technique to assign weights to the results from the classifiers.
[0066] The present invention provides a system that uses a fusion technique which learns based on the advantage of giving more weightage to the algorithm that performs better for the specialized scenario.
[0067] The present invention provides a system that uses a nonlinear result fusion learning algorithm which updates its weight for the classification algorithm based on positive and negative feedback from classifier algorithms.
[0068] The present invention provides a system that combines both distance vector-based estimation of gait properties and 3D-CNN-based estimation of gait properties. Hence, recognition is not affected significantly by noise.
[0069] The present invention provides a system that uses a reclassification module with reclassification counters to avoid any classification errors.
[0070] The present invention provides a system that can be generalized with any number of classification algorithms where the results are a weighted average of those multiple classification algorithms.
CLAIMS:
1. A method (700) for recognizing biometric signatures of a subject using walking style patterns, the method comprising:
generating (702), at a processor, walking motion signals of the subject using bounding box outlines, the walking motion signals are converted to grayscale images;
extracting (704), at the processor, a set of features from the grayscale images;
applying (706), at the processor, a first computation to extract distance projections from the bounding boxes;
comparing (708), at the processor, distance from pre-trained distance vectors of the subject with extracted distance vectors during classification;
utilizing (710), at the processor, a second computation to classify location of body parts of the subject in spatial dimension and corresponding movement in temporal dimension;
implementing (712) a fusion approach, to perform reclassification if the classification results from the first computation and the second computation do not match; and
training (714), at the processor, the results from the first computation and the second computation and classify based on weighted probabilities, to enhance accuracy by determining effective computation for specific conditions.
2. The method as claimed in claim 1, wherein the first computation is a signal processing approach.
3. The method as claimed in claim 1, wherein the second computation is a 3D convolutional neural network (3D-CNN) model.
4. The method as claimed in claim 1, wherein the set of features pertains to distance projections from bounding boxes, stride length, step angles, step width, step length, or any combination thereof.
5. The method as claimed in claim 1, wherein the set of features enhances the accuracy of classification.
6. The method as claimed in claim 1, wherein the fusion approach comprises a reclassify module and a pre-trained result comparator.
7. The method as claimed in claim 1, wherein the reclassify module is configured to resolve discrepancies by reclassifying the results when outputs of the first computation and second computation do not align.
8. The method as claimed in claim 1, wherein the pretrained result comparator utilizes a probability distribution algorithm that trains the comparator based on the accuracy of learning algorithms, evaluating similar sets of input images corresponding to camera angles, clothing, and bag carrying, to determine the effective approach for specific conditions.
9. The method as claimed in claim 1, wherein the pretrained result comparator comprises training in two phases, a first phase corresponding to training learning algorithms with dissimilar data, and a second phase corresponding to training the fusion approach using the results from the first phase.
10. A system (600) for recognizing biometric signatures of a subject using walking style patterns, the system comprising:
a processor (602) operatively coupled to a memory, the memory storing instructions executable by the processor to:
generate walking motion signals of the subject using bounding box outlines, the walking motion signals are converted to grayscale images;
extract a set of features from the grayscale images;
apply a first computation to extract distance projections from the bounding boxes;
compare distance from pre-trained distance vectors of the subject with extracted distance vectors during classification;
utilize a second computation to classify location of body parts of the subject in spatial dimension and corresponding movement in temporal dimension;
implement a fusion approach, to perform reclassification if the classification results from the first computation and the second computation do not match; and
train the results from the first computation and the second computation and classify based on weighted probabilities, to enhance accuracy by determining effective computation for specific conditions.

Documents

Application Documents

# Name Date
1 202341051679-STATEMENT OF UNDERTAKING (FORM 3) [01-08-2023(online)].pdf 2023-08-01
2 202341051679-PROVISIONAL SPECIFICATION [01-08-2023(online)].pdf 2023-08-01
3 202341051679-POWER OF AUTHORITY [01-08-2023(online)].pdf 2023-08-01
4 202341051679-FORM 1 [01-08-2023(online)].pdf 2023-08-01
5 202341051679-DRAWINGS [01-08-2023(online)].pdf 2023-08-01
6 202341051679-DECLARATION OF INVENTORSHIP (FORM 5) [01-08-2023(online)].pdf 2023-08-01
7 202341051679-FORM-5 [01-08-2024(online)].pdf 2024-08-01
8 202341051679-DRAWING [01-08-2024(online)].pdf 2024-08-01
9 202341051679-CORRESPONDENCE-OTHERS [01-08-2024(online)].pdf 2024-08-01
10 202341051679-COMPLETE SPECIFICATION [01-08-2024(online)].pdf 2024-08-01
11 202341051679-POA [07-10-2024(online)].pdf 2024-10-07
12 202341051679-FORM 13 [07-10-2024(online)].pdf 2024-10-07
13 202341051679-AMENDED DOCUMENTS [07-10-2024(online)].pdf 2024-10-07
14 202341051679-Response to office action [01-11-2024(online)].pdf 2024-11-01