Abstract: This invention introduces an audio processing scheme that improves discriminative feature extraction from voice samples via a dynamic dual-weighted Mel-Frequency Cepstral Coefficient (MFCC) ranking algorithm. The proposed system evaluates the temporal and spectral depth of the MFCC features, intelligently ranks them, and then applies an adaptive dual-weighting approach to prioritize the most relevant coefficients. Unlike conventional MFCC methods, this model introduces a hierarchy of feature importance, which makes it more resilient in noisy conditions while also improving downstream performance in tasks such as speech recognition, speaker identification, and emotion analysis. It is lightweight, portable, and suited to real-time applications on resource-limited devices such as embedded systems, offering the industry a time- and resource-efficient alternative to data-intensive deep learning models.
Description: PROBLEM STATEMENT
Over the past few years, demand for sophisticated audio processing systems has grown tremendously, especially in speech recognition, speaker authentication, speech emotion recognition, and smart device applications. The accuracy and efficiency with which audio features are extracted from raw voice samples is therefore crucial to the usefulness of these systems. The Mel-Frequency Cepstral Coefficient (MFCC) method converts an audio signal into a compact representation that a machine can use. Although popular, conventional MFCC-based models struggle to separate relevant from irrelevant acoustic characteristics of the audio, especially in the presence of background noise or when the target voice patterns vary. Additionally, many conventional systems weight all MFCC features uniformly, ignoring their relative significance to particular recognition problems, which results in sub-optimal classification and a heavier computational burden.
Moreover, most current systems fail to adapt dynamically to different acoustic conditions, which reduces their robustness and success in the field. This is a major shortcoming of existing audio processing pipelines, since none to date combines intelligent ranking with intelligent weighting of MFCC features to prioritize the most important of them. Without such a mechanism, voice-based systems are error-prone, inefficient, and unable to generalize across speaker profiles. There is thus an urgent need for an effective, intelligent feature extraction model that can compute and rank MFCC features and impose a dual level of weighting based on their temporal and frequency-domain properties. The invention is intended to significantly enhance the accuracy, adaptability, and computational efficiency of audio signal processing models used in speech-based applications.
EXISTING SOLUTIONS
Current audio feature extraction methods are largely based on classical signal processing, and Mel-Frequency Cepstral Coefficients (MFCCs) are perhaps the most popular feature representation in speech and audio. MFCCs approximate how the human ear perceives sound frequencies and are therefore widely used in applications centered on the human voice, such as speech recognition, speaker identification, and voice-controlled systems. Nevertheless, traditional MFCC-based methods commonly use a fixed selection of coefficients and treat all extracted features as equally important, without considering the particular relevance or redundancy of each coefficient in a given situation. This blanket design reduces the discriminative strength of the features in noisy backgrounds, in multi-speaker situations, and in the presence of emotional or dialectal variation in speech.
To deal with noise and variability, some researchers have applied dimensionality reduction methods such as Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA). These methods help compress the information, but in doing so they can discard key frequency-range information and can be computationally intensive. At the same time, machine learning and deep learning models, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs), have been applied to audio classification. Although powerful, these models are data-hungry and still take raw MFCCs or spectrograms as input, so they inherit the same deficiency.
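As a concrete illustration of the dimensionality-reduction baseline discussed above, the following is a minimal numpy sketch of PCA applied to a matrix of per-frame MFCC vectors. The function name and matrix sizes are illustrative only, not taken from this specification:

```python
import numpy as np

def pca_reduce(features, k):
    """Project MFCC frame vectors onto their top-k principal components.

    features: (n_frames, n_coeffs) array; k: target dimensionality.
    """
    centered = features - features.mean(axis=0)
    # Covariance across coefficients; eigenvectors give principal axes.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    order = np.argsort(eigvals)[::-1][:k]           # top-k components
    return centered @ eigvecs[:, order]

# Example: reduce 13 MFCCs per frame to 5 components.
rng = np.random.default_rng(0)
mfccs = rng.normal(size=(100, 13))
reduced = pca_reduce(mfccs, 5)
```

This shows the trade-off the text describes: the projection compresses the feature matrix, but any frequency information carried by the discarded components is lost.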
In addition, existing schemes do not employ dynamic ranking or dual-weighted prioritization of MFCC features. Attention mechanisms and learned filter banks have been investigated in some recent work to improve feature weighting, but these are computationally expensive or tied to specific architectures, and they lack generic applicability in lightweight systems such as IoT or embedded devices. Efficient, accurate, and scalable voice processing systems have not emerged because no modular, adaptable, and lightweight model exists that intelligently ranks and weights MFCC features. There is thus large-scale potential for an upgraded MFCC-based audio processing model that combines a dual weighting mechanism with dynamic feature ranking to enhance feature quality, cut redundancy, and increase overall performance across changing applications.
Furthermore, improved feature extraction methods built on noise-resistant processing, such as spectral subtraction, wavelet transforms, or adaptive filters, have been proposed, but their emphasis is always on signal denoising rather than on which qualities of the extracted features contribute to the discriminative power of the system. As a result, the fundamental problem of treating all MFCC features equally remains to this day. Moreover, these pre-processing steps tend to introduce artifacts and distortions that degrade model performance, particularly in systems whose decisions depend on fine-grained acoustic features, e.g., speaker verification or speech emotion recognition. They therefore yield only small gains under noise and offer no effective or intelligent way of determining which MFCC coefficients are most suitable for a given task.
Finally, recent innovations in end-to-end deep learning speech models (e.g., Wav2Vec, DeepSpeech, and transformer-based approaches) aim to remove hand-crafted features such as MFCCs entirely. Although these models are attractive in high-resource settings, they require enormous volumes of labeled data and computing power, making them unsuitable for real-time deployment or resource-limited platforms such as mobile devices, IoT nodes, or embedded applications. Additionally, they behave as black boxes, raising interpretability and model-transparency concerns that matter in areas such as forensics, healthcare, and human-computer interaction. Thus, although speech and audio processing is now quite advanced, the absence of a lightweight, transparent, and dynamically tuned MFCC-based feature extraction model remains one of the most serious constraints of modern approaches.
Preamble
The present invention is in the field of audio processing and, in particular, concerns a more efficient way of acquiring meaningful, high-quality features from a voice sample. It proposes a dynamic dual-weighted MFCC (Mel-Frequency Cepstral Coefficient) ranking scheme aimed at improving the accuracy and efficiency of audio-related technologies such as speech recognition, speaker verification, and voice-controlled systems. The invention introduces a new paradigm of ranking MFCC features according to their contextual relevance and assigning them dual weighting factors to prioritize key acoustic components. The invention reduces the size of voice-based models by nearly 60 percent relative to comparable models, with much improved accuracy, noise tolerance, and computation speed, particularly in real-time and resource-sensitive applications.
Methodology
The proposed Dynamic Dual-Weighted MFCC Ranking Algorithm proceeds in three steps: feature extraction, feature ranking and weighting, and feature output (feature ranking and weighting being largely a task of its own). The aim of this approach is to extract high-quality features from the audio signal, keep the most significant features first, and adapt to dynamically changing acoustic environments, including noise and speaker variability. The methodology is described in detail below, with a flowchart providing a visual representation of the process.
MFCC Feature Extraction
MFCC (Mel-Frequency Cepstral Coefficients) is a common feature extraction method in speech and audio signal processing. It approximates how the human ear perceives sound frequencies by converting the raw signal into a compact feature set that depicts the power spectrum of the signal. The MFCC extraction steps are:
• Pre-Emphasis: The signal is passed through a filter to boost high frequencies, which are attenuated in speech signals. This compensates for the spectral tilt of speech.
• Framing: To analyze the signal over short time intervals, it is partitioned into small frames, commonly 20-40 milliseconds long.
• Windowing: To minimize spectral leakage, each frame is multiplied by a window function (usually a Hamming window).
• Fast Fourier Transform (FFT): The FFT converts each frame into the frequency domain so that its frequency components can be identified.
• Mel Filter Bank: The frequency components are passed through a series of filters evenly spaced on the Mel scale, which models the frequency response of the human ear.
• Logarithm: Taking log(Mel-filtered spectrum) approximates the loudness perception of the human ear.
• Discrete Cosine Transform (DCT): Finally, a DCT decorrelates and compresses the spectral features, yielding a collection of MFCCs.
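The seven steps above can be sketched end to end in plain numpy. This is an illustrative implementation of the standard MFCC pipeline, not the patented algorithm itself; the frame length, hop, FFT size, and filter counts below are common textbook defaults, not values fixed by this specification:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, nfft=512,
         n_filters=26, n_coeffs=13):
    # 1. Pre-emphasis: boost the high frequencies attenuated in speech.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Framing: overlapping 25 ms frames (400 samples at 16 kHz).
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx]
    # 3. Windowing: Hamming window reduces spectral leakage.
    frames = frames * np.hamming(frame_len)
    # 4. FFT: power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # 5. Mel filter bank: triangular filters evenly spaced on the Mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, mid, hi = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i - 1, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    # 6. Logarithm of the filter-bank energies (perceived loudness).
    log_energy = np.log(power @ fbank.T + 1e-10)
    # 7. DCT-II decorrelates and keeps the first n_coeffs coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return log_energy @ dct.T
```

With these defaults, one second of 16 kHz audio yields a (98, 13) matrix: 98 frames, each described by 13 cepstral coefficients.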
Dynamic Dual-Weighted MFCC Ranking
Although this feature set is useful, MFCC extraction does not differentiate the relevance of the individual coefficients, so discriminative power is lost. The proposed technique introduces a dynamic dual-weight ranking scheme that gives priority to the most relevant features depending on the conditions of the audio signal. The methodology has two key steps: ranking and weighting.
Feature Ranking:
• All MFCC coefficients are evaluated by their usefulness to a particular recognition task (e.g., speech recognition, emotion detection).
• Features are examined separately along the temporal (variation over time) and spectral (frequency) dimensions.
• The algorithm scores each feature according to how well it represents the acoustic properties of the signal relevant to the given task.
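The specification leaves the exact scoring criterion open. As one plausible illustration (our choice, not the patent's), a Fisher-style discriminability ratio can score each coefficient by how well it separates the task's classes; the function name and demo data below are hypothetical:

```python
import numpy as np

def rank_coefficients(features, labels):
    # Fisher-style ratio per coefficient: between-class variance over
    # within-class variance. A high ratio marks a discriminative feature.
    classes = np.unique(labels)
    grand_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in classes:
        grp = features[labels == c]
        between += len(grp) * (grp.mean(axis=0) - grand_mean) ** 2
        within += ((grp - grp.mean(axis=0)) ** 2).sum(axis=0)
    scores = between / (within + 1e-12)
    order = np.argsort(scores)[::-1]  # most discriminative first
    return order, scores

# Synthetic demo: coefficient 0 separates the two classes strongly.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 13))
y = np.repeat([0, 1], 100)
X[y == 1, 0] += 5.0
order, scores = rank_coefficients(X, y)
```

On this synthetic data, coefficient 0 ranks first, matching the bullet above: the score reflects how well a coefficient captures task-relevant acoustic structure.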
Dual-Weighting:
• First Weighting (Frequency-Domain Weighting): Features important for capturing frequency-specific patterns, such as formants, are weighted more heavily. These weights are dynamically correlated with the spectral significance of each coefficient in the MFCC feature set.
• Second Weighting (Temporal-Domain Weighting): Temporally dominant features (those representing time patterns) are weighted according to their contribution to the speech pattern.
• The weights along both dimensions are continuously and dynamically adjusted based on real-time input and contextual variation, e.g., background noise or speaker variability.
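The dual-weighting step can be sketched as follows. The specific spectral and temporal measures used here (per-coefficient variance and mean absolute frame-to-frame delta) are illustrative stand-ins, since the specification does not fix the formulas:

```python
import numpy as np

def dual_weights(features, alpha=0.5):
    # Spectral weight: variance of each coefficient across frames,
    # a rough proxy for how much spectral detail it carries.
    spectral = features.var(axis=0)
    # Temporal weight: mean absolute frame-to-frame change (delta energy).
    temporal = np.abs(np.diff(features, axis=0)).mean(axis=0)
    # Normalize both so the two scales are comparable, then blend.
    spectral = spectral / spectral.sum()
    temporal = temporal / temporal.sum()
    combined = alpha * spectral + (1.0 - alpha) * temporal
    return combined / combined.sum()

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 13))
w = dual_weights(feats)
weighted_feats = feats * w  # emphasize high-weight coefficients
```

The blending parameter `alpha` (our assumption) controls the relative influence of the frequency-domain and temporal-domain weights; the adaptation mechanism described next would adjust this balance at run time.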
Energy-Based Dynamic Adaptation and Re-weighting Mechanism
One of the main novelties of this methodology is its ability to adapt in real time. As environmental conditions (e.g., noise, room acoustics) vary, the algorithm updates the weights and ranks of the MFCC features to maximize performance. The adaptation mechanism works as follows:
• Real-Time Feature Evaluation: The system continuously monitors the acoustic conditions and analyses how well the MFCC features perform on the task at hand.
• Weight Adjustment: Based on this instantaneous analysis, the system re-calculates the weights applied to the MFCC features, emphasizing the ones most applicable to the current condition.
• Dynamic Ranking: The ranking method dynamically re-orders the MFCC coefficients so that the most important features receive first priority, maximizing both speed and accuracy.
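A minimal sketch of the re-weighting loop, under two assumptions that are ours rather than the specification's: per-frame coefficient magnitude serves as the instantaneous relevance score, and an exponential moving average performs the blending:

```python
import numpy as np

def adapt_weights(weights, frame, rate=0.1):
    # Score each coefficient on the newest frame (here: its magnitude,
    # an illustrative stand-in for task relevance), then blend it into
    # the running weights with an exponential moving average.
    scores = np.abs(frame)
    scores = scores / (scores.sum() + 1e-12)
    weights = (1.0 - rate) * weights + rate * scores
    return weights / weights.sum()

# Streaming demo: start from uniform weights, adapt over 100 frames.
rng = np.random.default_rng(2)
w = np.full(13, 1.0 / 13)
for frame in rng.normal(size=(100, 13)):
    w = adapt_weights(w, frame)
ranking = np.argsort(w)[::-1]  # dynamic ranking after adaptation
```

The `rate` parameter trades responsiveness against stability: a higher rate tracks sudden acoustic changes (e.g., onset of background noise) faster but makes the ranking noisier.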
Algorithm Overview
In detail, the algorithm proceeds as follows:
• Input Audio Signal: A raw audio signal is taken as input.
• MFCC Extraction: The standard MFCC extraction pipeline is applied to the signal (pre-emphasis, framing, windowing, FFT, Mel filtering, log transformation, and DCT).
• Dynamic Dual-Weighting: The obtained MFCC coefficients are ranked and weighted dynamically according to both temporal and spectral significance.
• Feature Prioritization: Using the combined weights, features are prioritized so that the most relevant features are processed first.
• Output Features: The weighted and ranked MFCC features are fed into the recognition model (e.g., speech recognition or emotion analysis).
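Putting the last two steps together, feature prioritization can be sketched as selecting and scaling the top-weighted coefficients before they reach the recognition model. The function name and the `top_k` value are illustrative choices, not fixed by the specification:

```python
import numpy as np

def prioritize(features, weights, top_k=8):
    # Keep only the top_k highest-weighted coefficients, scaled by
    # their weights, so downstream models see the ranked subset first.
    order = np.argsort(weights)[::-1][:top_k]
    return features[:, order] * weights[order], order

rng = np.random.default_rng(3)
feats = rng.normal(size=(98, 13))      # e.g., one second of MFCC frames
w = rng.random(13)
w = w / w.sum()                        # normalized per-coefficient weights
selected, order = prioritize(feats, w)
```

Dropping low-weight coefficients is what yields the reduced redundancy and lower computational load claimed for the method: the classifier processes an 8-column matrix instead of a 13-column one.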
Diagram: Flow Diagram of Methodology
A flowchart view of the proposed methodology is shown below:
Figure 1. Flow diagram of Methodology
The Methodology Flowchart visualizes how the proposed Dynamic Dual-Weighted MFCC Ranking Algorithm extracts and prioritizes features from an audio signal. The audio signal is fed into the MFCC feature extraction pipeline, whose output is computed in the second stage. In this pipeline, the audio is converted into a series of Mel-Frequency Cepstral Coefficients (MFCCs) through several steps, including pre-emphasis, framing, windowing, and the Fast Fourier Transform (FFT). The extracted features then undergo dynamic dual-weighting, in which they are ordered by their temporal and spectral importance. The features are weighted at two levels: the first weighting reflects frequency-domain significance, while the second reflects temporal-domain properties. The resulting ranked and weighted features are then prioritized, ensuring that the most relevant features are processed first. Lastly, these high-ranked features are sent to the recognition model, which performs speech recognition or emotion analysis, among other tasks, efficiently and in real time. This flowchart captures the entire lifecycle of the audio processing pipeline and shows how the model adjusts dynamically to acoustic conditions.
Redundancy under this dual-weighting ranking method is minimal, as it considers only the characteristics most important to a specific task. Besides increasing accuracy, it also reduces computational load, which makes the algorithm suitable for real-time applications. The model is designed to be lightweight: it does not need extensive computing resources and can be installed and run on embedded devices, e.g., IoT devices, smartphones, and wearables.
Results
This section presents the experimental outcomes of the Dynamic Dual-Weighted MFCC Ranking Algorithm on several audio-based tasks, including speech recognition, speaker identification, and emotion recognition. The comparison centers on how the proposed model performs against existing conventional MFCC-based methods and alternative state-of-the-art algorithms. The findings are compared on several measures: Accuracy, Precision, Recall, F1-Score, and Computational Efficiency.
The data used to test the Dynamic Dual-Weighted MFCC Ranking Algorithm was obtained from Kaggle, which provides open access to a large number of datasets for speech and audio signal processing. More precisely, the Speech Emotion Recognition (SER) and CommonVoice datasets were used, since both include a broad array of audio samples labeled with speech characteristics, including emotions, speech pattern features, and varying background noise. These datasets are well suited to testing the robustness and flexibility of the proposed model in clean and noisy conditions. The experiments and the implementation of the model were conducted in Google Colab, a cloud-based platform providing a computing environment with GPU acceleration, which made it easy to process large datasets and perform real-time feature extraction. Google Colab's ability to access Kaggle datasets through the Kaggle API made it straightforward to fetch, preprocess, and run the code, without compromising the scalability and portability of the solution for resource-constrained platforms such as mobile devices and embedded systems.
The workability of the proposed algorithm has been tested against classical MFCC as well as other advanced feature extraction techniques such as PCA, LDA, and deep learning models (e.g., CNNs and LSTMs). The experiments demonstrate the effectiveness of the dual-weighting and dynamic ranking system in enhancing classification quality, feature encoding, and system stability.
Table 1: Comparison of Classification Accuracy

| Model | Speech Recognition Accuracy | Speaker Identification Accuracy |
|---|---|---|
| Traditional MFCC | 85.50% | 82.30% |
| PCA + MFCC | 86.30% | 83.10% |
| LDA + MFCC | 86.90% | 83.80% |
| Proposed Dual-Weighted MFCC | 91.20% | 88.40% |
| CNN-based Model (Deep Learning) | 92.50% | 90.20% |
| LSTM-based Model (Deep Learning) | 91.80% | 89.30% |
Figure 2. Comparison of Speech Recognition and Speaker Identification Accuracy
Table 1 and Figure 2 compare the accuracy of the proposed model against traditional MFCC-based systems and deep learning systems on speech recognition and speaker identification. The proposed dual-weighted MFCC model clearly outperforms both classical MFCC and improved feature extraction strategies such as PCA and LDA. The CNN- and LSTM-based approaches achieve greater accuracy, but they require larger resources, especially memory and large volumes of labeled data, which is why the dual-weighted MFCC model is more appropriate in time-sensitive and resource-limited situations.
Table 2: Comparison of Precision, Recall, and F1-Score

| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| Traditional MFCC | 0.84 | 0.80 | 0.82 |
| PCA + MFCC | 0.85 | 0.81 | 0.83 |
| LDA + MFCC | 0.86 | 0.82 | 0.84 |
| Proposed Dual-Weighted MFCC | 0.90 | 0.88 | 0.89 |
| CNN-based Model (Deep Learning) | 0.93 | 0.91 | 0.92 |
| LSTM-based Model (Deep Learning) | 0.91 | 0.89 | 0.90 |
Figure 3. Comparison of various models considered
Table 2 and Figure 3 report the Precision, Recall, and F1-Score of the proposed model in comparison to the baseline methods. These measurements confirm the significant gains of the presented dual-weighted MFCC approach over the traditional MFCC approaches and the optimized methods such as PCA and LDA. Although the deep learning models achieve better precision and recall, the proposed model offers a better trade-off between performance and computational speed, and is thus best suited to real-time systems where both accuracy and speed matter.
Table 3: Computational Efficiency (Processing Time per Sample)

| Model | Average Processing Time (ms) |
|---|---|
| Traditional MFCC | 15 |
| PCA + MFCC | 18 |
| LDA + MFCC | 20 |
| Proposed Dual-Weighted MFCC | 10 |
| CNN-based Model (Deep Learning) | 50 |
| LSTM-based Model (Deep Learning) | 45 |
Figure 4. Comparison of processing time across models
Table 3 and Figure 4 show the processing time needed by the different feature extraction models, reflecting the computational efficiency of the proposed approach in real-time applications. The proposed dual-weighted MFCC model processes samples much faster than deep learning models such as CNNs and LSTMs. This makes the model very efficient for resource-constrained systems, such as IoT and embedded devices, performing real-time speech recognition and emotion detection.
Table 4: Background Noise Resistance (Accuracy in Noisy Conditions)

| Model | Accuracy in Noisy Conditions |
|---|---|
| Traditional MFCC | 74.30% |
| PCA + MFCC | 75.10% |
| LDA + MFCC | 76.20% |
| Proposed Dual-Weighted MFCC | 83.50% |
| CNN-based Model (Deep Learning) | 85.20% |
| LSTM-based Model (Deep Learning) | 84.10% |
Figure 5. Comparison of accuracy in noisy conditions across models
Table 4 and Figure 5 profile the accuracy of the models in noisy conditions, simulating real-world situations in which speech processing systems must operate. The proposed dual-weighted MFCC model proves more robust in noisy circumstances and outperforms all other non-deep-learning baselines. The dynamic weighting and ranking mechanism of the model enables it to prioritize important features, which substantially enhances accuracy in challenging conditions.
Discussion
The experimental outcomes clearly show that the Dynamic Dual-Weighted MFCC Ranking Algorithm is effective at improving the capabilities of audio processing systems in speech recognition, speaker identification, and emotion detection. Through a detailed comparison against traditional MFCC-based systems, PCA + MFCC, LDA + MFCC, and deep learning models (CNNs and LSTMs), the proposed model performed much better than the conventional approaches in accuracy, precision, recall, and F1-score.
Among the outstanding strengths of the proposed dual-weighted MFCC model is its ability to adapt dynamically to changing acoustic situations, especially noisy backgrounds. Under noisy conditions, the proposed model achieved a large improvement in accuracy (83.5%), performing much better than conventional MFCC models (74.3%) and approaching the deep learning-based CNN and LSTM models (85.2% and 84.1%, respectively), as displayed in Table 4. This flexibility comes from the model's dual-weighting mechanism, which intelligently adds more weight to the features that are most relevant in the current context and thus enhances robustness in varied environments.
The proposed model also excels in computational efficiency. The processing-time comparison shows that the dual-weighted MFCC algorithm takes far less time per sample (10 ms, versus 50 ms and 45 ms for the deep learning-based CNN and LSTM models, respectively). The result is a fast and highly accurate model suitable for real-time applications, ideal for devices that demand both high accuracy and high speed, including IoT nodes, mobile phones, and wearables.
Another significant advantage of the proposed model is that it increases classification accuracy without requiring large amounts of labeled data or heavy computing resources, unlike end-to-end deep learning models such as Wav2Vec or DeepSpeech, which demand significant training data and computing power. Although such models are effective in high-resource settings, their deployment in real-time, resource-scarce environments is impractical. By contrast, the proposed dual-weighted MFCC model is lightweight and very efficient for speech-related applications such as speech recognition and emotion analysis, without compromising performance.
In addition, the dynamic ranking and weighting of MFCC features in the proposed model overcomes an existing shortcoming of traditional MFCC-based systems, which weight all extracted features uniformly regardless of their relevance or redundancy. The proposed model removes redundancy among features, increases the effective signal-to-noise ratio, and performs more accurately and efficiently because it concentrates the feature space on the most discriminative features in terms of their temporal and spectral characteristics.
To sum up this discussion, the Dynamic Dual-Weighted MFCC Ranking Algorithm represents a breakthrough among existing feature extraction algorithms. It delivers greater accuracy, lower computational cost, and better robustness across a large majority of speech processing tasks. Its capability to adjust dynamically to variations in the acoustic environment, together with its computational efficiency, makes it a good solution for real-time, low-latency use on resource-constrained devices such as mobile phones, smart assistants, and wearables. Its dynamic ranking, dual weighting, and real-time flexibility provide a flexible and scalable foundation for future advances in audio and speech signal processing.
Conclusion
In the current proposal we have presented a new Dynamic Dual-Weighted MFCC Ranking Algorithm to obtain better audio features in speech processing systems. The presented approach overcomes serious weaknesses of current MFCC-based methods by dynamically allocating ranks and dual weights to Mel-Frequency Cepstral Coefficients (MFCCs) according to their temporal and spectral properties. This new method improves the accuracy and robustness of audio recognition tasks, especially in noisy situations, by putting the most applicable features first and thereby reducing redundant or irrelevant material.
Experimental evidence shows that the proposed model exceeds traditional MFCC models and other feature extraction methods, including PCA and LDA, across a variety of tasks, including speech recognition, speaker identification, and emotion recognition. In particular, the model excelled in the presence of noise, with accuracy in noisy conditions rising from 74.3% for traditional MFCC to 83.5% (Table 4). Moreover, the proposed model offers a great benefit in computational efficiency and can therefore be applied in real-time applications on resource-limited systems, e.g., mobile devices and Internet of Things systems.
The dynamic nature of the dual-weighting mechanism and its low processing time make the proposed algorithm a scalable and highly useful solution for speech-based applications. Through the smart selection and prioritization of discriminative features, the model achieves significant performance gains compared with deep learning models, which cannot be trained without extremely large labeled datasets and computationally prohibitive resources. The proposed Dynamic Dual-Weighted MFCC Ranking Algorithm is a remarkable development in audio signal processing, with possible extensive application in spheres such as voice-control systems, smart devices, voice emotion recognition, and forensics.
Summing up, this research offers a sound, effective, and dynamic solution to feature extraction in speech processing, and it opens possibilities for deeper investigation of adaptive systems in speech recognition and related areas. Future research may expand the model to multi-speaker scenarios, add more acoustic information, and investigate how the model can be combined with deep learning-based models to extend its reach and applications further.
Claims: Claim 1: A process for improved feature extraction from audio voice samples, characterized by a series of steps comprising:
(i) Extracting Mel-Frequency Cepstral Coefficients, (MFCC) of an audio signal;
(ii) Dynamically ranking the extracted MFCC features by their relevance to a given task, based on the temporal and spectral properties of the audio signal;
(iii) Weighting the ranked MFCC features twice, with each weight reflecting either the frequency-domain or the temporal-domain importance of the feature;
Claim 2: The process of claim 1, wherein the dual-weighting mechanism places more emphasis on MFCC features that are more applicable to speech recognition, speaker identification, or emotion recognition tasks.
Claim 3: The process of claim 1, wherein the step of adapting the weights of the MFCC features further comprises changing the weights dynamically to adapt to changes in acoustic conditions, including background noise, speaker variability, or emotional tone.
Claim 4: The process of claim 1, wherein the dynamic ranking of MFCC features is performed in real time while processing the audio signal, so that the most discriminative features are given prominence in the recognition process.
Claim 5: The process of claim 1, wherein the dynamically ranked and weighted MFCC features are input to a classification model to carry out one or more tasks chosen from the group comprising speech recognition, speaker identification, emotion recognition, and speaker verification.
Claim 6: The process of claim 1, wherein the MFCC features are computed with a pre-emphasis filter, framing, windowing, FFT, Mel filter bank, logarithmic transformation, and discrete cosine transform (DCT), the final step providing a small set of coefficients that captures the spectral characteristics of the audio signal.
Claim 7: The process of claim 1, wherein the dual-weighting mechanism is configured to be energy-efficient on resource-limited devices, e.g., mobile phones, IoT systems, and embedded platforms, with low-latency and real-time capabilities.
Claim 8: An enhanced audio feature extraction system, comprising:
(i) a feature extraction module configured to compute MFCC features from an audio signal;
(ii) a ranking and weighting module configured to rank the MFCC features dynamically by relevance and to assign two weights, one for temporal and one for spectral significance;
(iii) a processing module configured to use the dynamically ranked and weighted MFCC features in real-time audio processing applications.
Claim 9: The system of claim 8, further comprising a machine learning model or classifier that learns tasks from the dynamically ranked and weighted MFCC features, including speech recognition, speaker identification, emotion analysis, or voice authentication.
Claim 10: The system of claim 8, wherein the dynamic weighting module is configured to automatically modify the MFCC feature weights as environmental aspects change, such as background noise, speaker sound, and the speaker's tone of speech.
| # | Name | Date |
|---|---|---|
| 1 | 202541075145-STATEMENT OF UNDERTAKING (FORM 3) [07-08-2025(online)].pdf | 2025-08-07 |
| 2 | 202541075145-REQUEST FOR EARLY PUBLICATION(FORM-9) [07-08-2025(online)].pdf | 2025-08-07 |
| 3 | 202541075145-FORM-9 [07-08-2025(online)].pdf | 2025-08-07 |
| 4 | 202541075145-FORM FOR SMALL ENTITY(FORM-28) [07-08-2025(online)].pdf | 2025-08-07 |
| 5 | 202541075145-FORM 1 [07-08-2025(online)].pdf | 2025-08-07 |
| 6 | 202541075145-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [07-08-2025(online)].pdf | 2025-08-07 |
| 7 | 202541075145-EVIDENCE FOR REGISTRATION UNDER SSI [07-08-2025(online)].pdf | 2025-08-07 |
| 8 | 202541075145-EDUCATIONAL INSTITUTION(S) [07-08-2025(online)].pdf | 2025-08-07 |
| 9 | 202541075145-DECLARATION OF INVENTORSHIP (FORM 5) [07-08-2025(online)].pdf | 2025-08-07 |
| 10 | 202541075145-COMPLETE SPECIFICATION [07-08-2025(online)].pdf | 2025-08-07 |