
Reinforcement Learning Based Intrusion Detection System For Adaptive Cyber Threat Mitigation

Abstract: Intrusion detection systems (IDS) play a central role in safeguarding networks from cyberattacks. Recent developments in Reinforcement Learning (RL) have shown promise in improving threat detection and response, thereby strengthening IDS performance. This work investigates and evaluates several RL algorithms in the context of IDS to determine the most effective methods for real-time threat mitigation. The efficiency, accuracy, and adaptability of key RL methods, including Q-learning, Deep Q Networks (DQN), and Proximal Policy Optimization (PPO), are assessed with respect to cyberattack detection and prevention. Experimental data highlight the advantages and drawbacks of each method, guiding their suitability for IDS use. The results of this work support the design of an optimized RL-based IDS architecture, enabling practical implementation in cybersecurity solutions and patent filing. Conventional IDS rely mostly on signature-based or anomaly-based approaches, which often struggle to identify novel cyber threats and adapt to changing attack strategies. Current IDS suffer from high false-positive rates, sluggish response times, and an inability to learn dynamically from new attack patterns. Although RL is becoming increasingly popular for IDS enhancement, its implementation in practical security applications is difficult without a consistent and well-optimized RL framework. Identifying the most efficient RL algorithm for IDS therefore remains a challenge that calls for a systematic analysis of several RL approaches to raise detection accuracy and response efficiency.


Patent Information

Application #
Filing Date
24 March 2025
Publication Number
17/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
SR UNIVERSITY, Ananthasagar, Hasanparthy (PO), Warangal - 506371, Telangana, India.

Inventors

1. Thatikanti Rajendar
Research Scholar, School of Computer Science & Artificial Intelligence, SR University, Ananthasagar, Hasanparthy (P.O), Warangal, Telangana-506371, India.
2. Dr. P. Praveen
Associate Professor, School of Computer Science and Artificial Intelligence, SR University, Ananthasagar, Hasanparthy (P.O), Warangal, Telangana-506371, India.

Specification

Description: Preamble
The rapid proliferation of interconnected devices and the expansion of digital networks have made cybersecurity a critical concern in today’s technological landscape. As organizations increasingly rely on online platforms and automated systems, the frequency, sophistication, and scale of cyberattacks continue to rise, posing significant risks to data integrity, privacy, and system functionality. Traditional security measures, such as signature-based detection and rule-based intrusion prevention systems, have proven effective to a certain extent. However, they often fall short when it comes to detecting unknown threats or adapting to evolving attack techniques in real-time.
Intrusion Detection Systems (IDS) are an essential component of modern cybersecurity frameworks, designed to monitor network traffic and system behaviors for signs of malicious activities. These systems aim to identify and mitigate potential security breaches before they can cause significant harm. Despite their importance, conventional IDS models are typically static and struggle to adapt to new attack patterns. As a result, there is an increasing need for more dynamic and intelligent approaches to enhance the accuracy and responsiveness of intrusion detection.
Reinforcement Learning (RL), a subset of machine learning, has gained significant attention for its potential to enhance the performance of IDS. RL algorithms learn optimal strategies through trial and error, using feedback from their environment to improve decision-making over time. In the context of cybersecurity, this ability to learn and adapt to changing threat landscapes makes RL particularly suited for intrusion detection and mitigation. Unlike traditional systems that rely on predefined signatures or rules, an RL-based Intrusion Detection System (RL-IDS) continuously learns from network traffic, dynamically updating its detection and response strategies to counter emerging threats.
An RL-based IDS can identify patterns in both normal and anomalous network behaviors, allowing for more precise detection of potential intrusions. By using reward and punishment mechanisms, the system can optimize its detection policies, prioritizing threats based on severity and potential damage. This adaptive capability not only improves the accuracy of threat detection but also minimizes false positives, a common issue with traditional IDS systems.
Moreover, RL enables real-time mitigation of cyber threats by incorporating decision-making processes that adapt to the evolving nature of cyberattacks. This approach helps reduce response times and enhances the overall resilience of the network, making it more resistant to both known and novel threats. As the system interacts with its environment, it becomes more proficient at detecting and responding to cyber threats, ensuring that it remains effective even as attack techniques evolve.
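The reward-and-punishment mechanism described above can be sketched with tabular Q-learning. This is a minimal toy illustration, not the claimed system: the two-state traffic "environment", the reward values, and the episode count are all hypothetical choices made for the example.

```python
import random

# Hypothetical two-state environment: each observation is either
# "normal" or "attack" traffic; the agent chooses "allow" or "block".
STATES = ["normal", "attack"]
ACTIONS = ["allow", "block"]

def reward(state, action):
    """+1 for a correct decision, -1 for a false positive/negative."""
    correct = (state == "normal" and action == "allow") or \
              (state == "attack" and action == "block")
    return 1.0 if correct else -1.0

def train_q_table(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                      # observe traffic
        if rng.random() < epsilon:                  # explore occasionally
            a = rng.choice(ACTIONS)
        else:                                       # exploit current estimate
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        r = reward(s, a)                            # environment feedback
        # One-step (bandit-style) Q update; this toy setup has no next state.
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

q = train_q_table()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # the learned mapping from traffic type to action
```

After enough trial-and-error episodes the table converges to the policy that allows normal traffic and blocks attacks, mirroring how an RL-IDS optimizes its detection policy from feedback.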

Problem Identification:
1. High False Positives & False Negatives: Traditional IDS solutions generate numerous false alarms, reducing their reliability.
2. Lack of Adaptability: Most IDS models cannot dynamically adapt to new and evolving cyber threats.
3. Inefficient Learning Mechanisms: Existing IDS solutions fail to leverage RL efficiently due to the absence of a comparative analysis of different RL techniques.
4. Performance Trade-offs: There is no clear framework for selecting the best RL algorithm in IDS, leading to suboptimal performance in real-world deployments.
Problem Solution:
This work proposes a comprehensive evaluation of different RL algorithms, including Q-learning, Deep Q Networks (DQN), and Proximal Policy Optimization (PPO), to determine the most effective approach for IDS. By analysing the efficiency, accuracy, and adaptability of these RL techniques, the study aims to develop an optimized RL-based IDS framework that:
• Minimizes false-positive and false-negative rates.
• Enhances real-time threat detection and response.
• Adapts dynamically to emerging cyber threats.
• Provides a standardized RL model for IDS deployment.
The proposed RL-based IDS framework will serve as an innovative cybersecurity solution, suitable for patent filing, enabling organizations to implement an adaptive, intelligent, and efficient intrusion detection mechanism.
I Introduction
Cybersecurity threats have grown ever more complex in the digital age, presenting major difficulties for individuals and organizations. Intrusion detection systems (IDS) are critical for identifying and mitigating cyber threats to safeguard private data and network integrity. Traditional IDS models, which depend mostly on signature-based and anomaly-based detection methods, often struggle to identify fresh attack patterns and keep pace with evolving cyber threats. These traditional methods are also prone to significant false-positive and false-negative rates, lowering the overall reliability of intrusion detection.

Recent developments in artificial intelligence (AI) and machine learning (ML) have made more flexible and intelligent IDS solutions possible. Among these developments, Reinforcement Learning (RL) has shown great promise in enabling systems to learn dynamically from threats, optimize decision-making, and strengthen real-time response, thereby boosting IDS performance. RL-based IDS research is nevertheless still in its early phases, with several candidate algorithms offering varied strengths and drawbacks. Identifying the best RL algorithm for IDS remains a difficult task requiring careful performance analysis in cybersecurity applications.

Figure 1: Tackling AI-based cyberattacks using RL
This study aims to investigate and compare key RL algorithms, including Q-learning, Deep Q Networks (DQN), and Proximal Policy Optimization (PPO), to determine their efficiency, accuracy, and adaptability in detecting and preventing cyberattacks. By analysing the strengths and weaknesses of these approaches, the research seeks to develop an optimized RL-based IDS framework that minimizes false alarms, adapts dynamically to emerging threats, and enhances overall security. The findings from this study will not only contribute to advancements in IDS but also serve as a foundation for patent filing, ensuring a standardized and innovative RL-driven cybersecurity solution for real-world deployment.
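The comparison of Q-learning, DQN, and PPO described above rests on a small set of standard detection metrics. The helper below shows how those metrics are derived from a confusion matrix; the counts passed in at the end are hypothetical, purely to illustrate the computation.

```python
def ids_metrics(tp, fp, tn, fn):
    """Detection metrics commonly used to compare IDS models.

    tp/fp/tn/fn are confusion-matrix counts: attacks detected, benign
    traffic wrongly flagged, benign traffic passed, attacks missed.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    detection_rate = tp / (tp + fn)          # recall on attack traffic
    false_positive_rate = fp / (fp + tn)     # benign flagged as attacks
    false_negative_rate = fn / (fn + tp)     # attacks that slipped through
    return {
        "accuracy": round(accuracy, 4),
        "detection_rate": round(detection_rate, 4),
        "fpr": round(false_positive_rate, 4),
        "fnr": round(false_negative_rate, 4),
    }

# Hypothetical confusion-matrix counts for one candidate algorithm:
print(ids_metrics(tp=950, fp=40, tn=8960, fn=50))
```

Computing the same four numbers for each trained agent on a held-out test set gives a like-for-like basis for the efficiency and accuracy comparison the study performs.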

II Related Work
The application of reinforcement learning (RL) techniques to intrusion detection systems (IDS) has received extensive recent research, yet current techniques for enhancing IDS detection accuracy, scalability, and adaptability to evolving threats do not fully exploit data distribution properties or anticipate novel attacks with unknown characteristics. Haider and Yildiz [11] proposed a novel RL approach to optimizing click-through rates in online advertising, showing how RL's adaptability to dynamic environments can partly carry over to IDS; however, their work applies mainly to e-commerce and does not transfer directly to cybersecurity. Hussain et al. [12] combined feature selection, statistical analysis across various machine learning classifiers, and dataset diversity for IDS classification. While their study shows performance disparities across models, it provides no insight into real-time deployment challenges.
Feature selection techniques that lower dimensionality have also been evaluated alongside learning models by Kaushik et al. [13], who provide evidence that dimensionality reduction can improve IDS performance; the study, however, gives little account of computational complexity and scalability in high-traffic environments. Li et al. [14] introduced a lightweight physics-informed transformer model for precipitation nowcasting, offering valuable insights into spatiotemporal feature processing that could aid IDS anomaly detection, although its computational demands may hinder large-scale cybersecurity deployment.
Lopez-Martin et al. [15] applied deep RL to intrusion detection framed as a supervised learning problem and achieved good detection rates; their approach, however, assumes static threat environments and may not adapt to ever-changing attack patterns. Louati et al. [16] reviewed RL approaches and related techniques for IDS in a comprehensive survey, introducing a thorough taxonomy together with an evaluation. Although very broad, the study lacks empirical validation through practical implementations.
Mahjoub et al. [17] introduced an adversarial-RL-driven IDS for IoT environments, which face unique security threats; such an approach, however, requires large computational resources, making its actual use in resource-constrained devices challenging. Merzouk et al. [18] studied the effects of adversarial attacks against deep-RL-based IDS and found vulnerabilities that may degrade an IDS's effectiveness.
Modirrousta et al. [19] used a deep RL model with a CNN architecture to detect anomalous behavior in network systems, achieving higher detection rates, though without a comparative analysis against traditional machine learning methods. Neelaveni et al. [20] showed that ensemble learning enables accurate and robust IDS; despite this success, their approach carries higher computational overhead and may be impractical for some real-time applications.
Ren et al. [21] proposed a deep-RL-based feature-selection intrusion detection model to enhance feature interpretability and dimensionality reduction, but conducted few comprehensive real-world tests across different network environments. Saad and Yildiz [22] surveyed recent trends and challenges in applying RL to IDS; the study is largely theoretical, with only a few experimental results.
III Existing Solutions
Two main techniques define traditional intrusion detection systems (IDS): signature-based detection and anomaly-based detection. Although these techniques are widely used, they have certain inherent drawbacks.

Signature-based IDS match incoming network traffic against a predefined database of known threat signatures. While effective against established threats, they miss zero-day attacks and novel variants of cyber threats. Examples include Snort and Suricata.
Anomaly-based IDS use statistical models or machine learning to find deviations from typical activity. Although they can identify previously unknown attacks, their high false-positive rate makes them less dependable in practice. Examples include machine-learning-based IDS solutions and AI-driven security systems.
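The statistical flavour of anomaly-based detection can be illustrated with a z-score check against a learned baseline. This is a deliberately minimal sketch: the packets-per-second figures and the 3-sigma threshold are hypothetical, and real anomaly-based IDS use far richer models.

```python
import statistics

def fit_baseline(samples):
    """Learn a mean/stdev baseline for one feature from benign traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag observations more than `threshold` standard deviations out."""
    return abs(value - mean) / stdev > threshold

# Hypothetical baseline: packets-per-second observed during normal operation.
benign_pps = [100, 104, 98, 102, 97, 101, 103, 99]
mu, sigma = fit_baseline(benign_pps)

print(is_anomalous(101, mu, sigma))   # typical rate -> False
print(is_anomalous(500, mu, sigma))   # flood-like spike -> True
```

The example also exposes the weakness noted above: any benign burst beyond the threshold is flagged, which is exactly how the high false-positive rates of anomaly-based IDS arise.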

To raise detection accuracy, several systems integrate signature-based and anomaly-based techniques. Still, they require substantial human intervention to update rules and models, and they struggle to adapt to new threats.

Recent studies have examined reinforcement learning in IDS, proposing RL-based models to increase flexibility and efficiency. Still, the absence of a consistent RL framework and of a systematic assessment of RL techniques remains a difficulty. Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO) have been tried in certain studies, but no clear agreement exists on the optimal method for practical use.

Notwithstanding these developments, present IDS suffer from poor learning processes, delayed adaptation to changing cyber threats, and high false-alarm rates. Addressing these constraints calls for a disciplined assessment of several RL methods to create a robust and intelligent IDS.

Prior Art Search
A thorough search (via Google keyword queries) on reinforcement learning (RL) applications in intrusion detection systems (IDS) identified several relevant studies addressing the issues raised in the problem definition.
The following is a curated collection of pertinent prior art:

1. Deep Q-Learning Based Reinforcement Learning Method for Network Intrusion Detection
This paper presents a Deep Q-Learning (DQL) model that improves IDS capabilities by combining Q-learning with deep neural networks. The model learns autonomously from network environments, lowering false positives and raising detection accuracy. Extensive tests on the NSL-KDD dataset demonstrate the model's effectiveness.

2. Improving Intrusion Detection Systems with Reinforcement Learning: An All-Inclusive Review of RL-based Methodologies and Tools

This paper offers a thorough investigation of several RL techniques applied to IDS, together with their advantages and drawbacks. It underlines the flexibility of RL in changing threat environments and the need for consistent frameworks to apply RL in practical security applications.

3. Reinforcement Learning Based Attention-Based Multi-Agent Intrusion Detection Systems
This article presents a multi-agent IDS architecture using attention mechanisms and Deep Q-Networks (DQN). The method improves fault tolerance and scalability, facilitating effective identification and categorization of complex network attacks.
The model's practical relevance is further assessed through its resistance to adversarial attacks.

4. Network Intrusion Detection Using a Deep Reinforcement Learning Method
This thesis investigates the application of deep reinforcement learning methods, specifically DQL and Long Short-Term Memory (LSTM) networks, to IDS. Built on Apache Spark, the framework enables real-time big-data analytics for intrusion detection. Experiments on datasets such as NSL-KDD, UNSW-NB15, and CICIDS2017 validate the model's efficacy.

5. Reinforcement Learning Based Multi-Agent Network Intrusion Detection System
The authors introduce a new multi-agent RL architecture intended for robust and automatic network intrusion detection. Enhancing the DQN method with weighted mean-square loss functions and cost-sensitive learning helps the model adapt to new attack patterns and properly address class-imbalance problems. Experiments on the CIC-IDS-2017 dataset show notable improvements in detection rates and false-positive reduction.
By providing insights into algorithm selection, adaptability, and real-time threat detection capabilities, these studies collectively advance RL-based IDS frameworks.

IV DESCRIPTION OF PROPOSED INVENTION
Overcoming the constraints of conventional IDS solutions, the proposed invention presents a unique Reinforcement Learning (RL)-based Intrusion Detection System (IDS) framework. The suggested system combines many RL techniques to improve detection accuracy, adaptability, and reaction efficiency, unlike current signature-based and anomaly-based approaches, which sometimes struggle to detect new cyber threats. This work offers an intelligent and flexible security mechanism fit for real-world cybersecurity uses by methodically assessing and optimizing RL algorithms like Q-learning, Deep Q Networks (DQN), and Proximal Policy Optimization (PPO).

Fig 2: Machine-learning-based detection of various attacks.

Key Features
1. Adaptive Threat Detection: The system continuously learns from network traffic patterns and dynamically updates its detection models, ensuring high adaptability to new and evolving cyber threats.
2. Optimized RL Algorithm Selection: By conducting a comparative analysis of different RL techniques, the framework identifies the most effective algorithm for intrusion detection, optimizing performance in real-time security environments.
3. Reduced False Positives and False Negatives: Through reinforcement learning-based decision-making, the proposed IDS significantly minimizes false alarms, enhancing its reliability and trustworthiness.
4. Real-Time Threat Response: The RL model is trained to respond promptly to detected intrusions, reducing reaction times and mitigating potential damage to network infrastructure.
5. Scalable and Standardized Framework: Designed for flexibility and interoperability, the system provides a standardized RL-based model that can be integrated with existing IDS solutions, making it suitable for large-scale deployments.
Technical Implementation
The proposed RL-based IDS framework is structured as follows:
1. Data Collection and Preprocessing:
• Network traffic data is collected from various sources, including firewalls, routers, and endpoint devices.
• Features are extracted and normalized to ensure compatibility with RL algorithms.
2. Reinforcement Learning Model Selection and Training:
• The framework evaluates and compares different RL techniques (Q-learning, DQN, PPO) to identify the best-performing algorithm for IDS applications.
• Reward functions are designed to optimize detection accuracy and minimize false alarms.
3. Threat Classification and Response Mechanism:
• The trained RL model classifies network activities into normal and malicious behaviors.
• Upon detection of an intrusion, predefined response actions (e.g., alert generation, traffic blocking, or anomaly logging) are executed to mitigate threats.

4. Continuous Learning and Model Updates:
• The IDS framework employs an iterative learning approach, allowing it to continuously refine its detection models based on new attack patterns.
• Reinforcement signals from detected intrusions are used to further train and improve model accuracy over time.
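Steps 2 and 3 above hinge on the shape of the reward function and the response dispatch. The sketch below illustrates one plausible design, where a missed intrusion is penalized more heavily than a false alarm; the specific weights, the `respond` helper, and the flow identifier are hypothetical choices made for illustration.

```python
# Asymmetric reward for the RL agent: missing an intrusion (false negative)
# is costlier than raising a spurious alert (false positive). The numeric
# weights here are hypothetical tuning parameters, not values from the patent.

def reward(predicted_malicious, actually_malicious):
    if predicted_malicious and actually_malicious:
        return +1.0          # true positive: threat caught
    if not predicted_malicious and not actually_malicious:
        return +0.1          # true negative: routine correct decision
    if predicted_malicious and not actually_malicious:
        return -0.5          # false positive: noisy alert
    return -2.0              # false negative: missed intrusion, costliest

def respond(predicted_malicious, flow_id):
    """Predefined response actions from step 3: alert, block, and log."""
    if predicted_malicious:
        return [f"alert:{flow_id}", f"block:{flow_id}", f"log:{flow_id}"]
    return [f"log:{flow_id}"]

print(reward(True, True), reward(False, True))
print(respond(True, "flow-42"))
```

Weighting false negatives hardest pushes the learned policy toward high detection rates, while the smaller false-positive penalty keeps alert volume in check, matching the framework's stated goal of minimizing both error types.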
Advantages of the Proposed Invention
• Enhanced Security: Provides a robust, AI-driven cybersecurity solution capable of detecting both known and unknown threats.
• Efficient Performance: Reduces computational overhead while maintaining high detection accuracy.
• Real-Time Adaptability: Capable of dynamically adapting to evolving attack patterns, making it more effective than traditional IDS solutions.
• Scalability and Flexibility: Can be deployed across various network infrastructures, including cloud environments and IoT ecosystems.
Applications
• Enterprise Network Security: Protects corporate networks from cyber threats and unauthorized access attempts.
• Cloud-Based Security Solutions: Enhances cloud security by integrating adaptive RL-based intrusion detection.
• IoT Security: Safeguards IoT devices and smart environments from cyber-attacks.
• Government and Defence: Strengthens national cybersecurity by providing an advanced IDS for government and defence networks.
This invention presents a groundbreaking RL-based IDS framework that significantly improves cybersecurity measures. By leveraging the strengths of reinforcement learning, the system enhances real-time intrusion detection, minimizes false alarms, and ensures adaptive threat mitigation, making it a viable and innovative solution for modern cybersecurity challenges.
E. Novelty of the Proposed RL-Based Intrusion Detection System (IDS)
The proposed RL-based IDS framework introduces several novel contributions to the field of cybersecurity and intrusion detection:
1. Optimized RL Algorithm Selection for IDS – Unlike existing IDS solutions that implement RL in an ad hoc manner, this study systematically evaluates different RL techniques (Q-learning, DQN, PPO) to determine the most effective approach, providing a structured and optimized framework for real-world deployment.
2. Dynamic Threat Adaptability – Traditional IDS models struggle to evolve with new cyber threats. The proposed framework leverages RL’s self-learning capabilities to dynamically adapt to emerging attack patterns, improving long-term security resilience.
3. Enhanced Accuracy with Reduced False Positives & False Negatives – By optimizing RL-based decision-making, the framework significantly minimizes false alarms, increasing the reliability and trustworthiness of IDS solutions compared to traditional signature-based and anomaly-based models.
4. Real-Time Threat Detection and Response Efficiency – Existing RL-based IDS implementations suffer from slow response times due to computational inefficiencies. This research optimizes RL learning mechanisms to enhance real-time detection speed without compromising accuracy.
5. Standardized RL Model for IDS Deployment – No standardized RL framework currently exists for IDS, making implementations inconsistent and suboptimal. This work develops a scalable, reproducible, and well-structured RL-based IDS model, ensuring consistent performance across various cybersecurity applications.
6. Balancing Performance Trade-offs – The proposed approach systematically addresses trade-offs between different RL techniques, optimizing for detection efficiency, adaptability, and computational feasibility, something not extensively explored in prior works.

V COMPARISON OF PROPOSED RL-BASED IDS WITH EXISTING MODELS
| Feature | Signature-Based IDS | Anomaly-Based IDS | Hybrid IDS | Existing RL-Based IDS | Proposed RL-Based IDS |
|---|---|---|---|---|---|
| Detection approach | Predefined attack signatures | Statistical/ML-based anomaly detection | Combination of signature- and anomaly-based detection | Uses RL to learn attack patterns | Optimized RL model selection (Q-learning, DQN, PPO) for threat adaptation |
| Detection of zero-day attacks | Poor | Moderate | Moderate | Good | Excellent (continuously learning model) |
| False positive rate | Low | High | Moderate | Moderate to high | Low (optimized reward function for accuracy) |
| False negative rate | High (misses unknown threats) | Moderate | Moderate | Moderate to low | Very low (adaptive learning minimizes errors) |
| Adaptability to new threats | Poor | Moderate | Moderate | Good | Excellent (continuous learning and model updates) |
| Response time | Fast | Slower | Moderate | Moderate | Fast (optimized RL mechanisms for real-time response) |
| Scalability | High | Moderate | Moderate | Varies | High (designed for integration with modern infrastructures) |
| Computational efficiency | High (simple pattern matching) | Moderate (complex computations) | Moderate to high | Varies (some models inefficient) | Optimized (balanced trade-off between speed and accuracy) |
| Ease of implementation | Easy | Complex | Complex | Complex | Moderate (standardized RL framework for easy deployment) |
| Examples | Snort, Suricata | ML-based IDS, AI-driven security platforms | AI-IDS hybrids | DQN- and PPO-based IDS models | Optimized, scalable RL-based IDS framework |

Key Advantages of the Proposed RL-Based IDS
1. Higher Adaptability to Evolving Threats – Unlike traditional IDS solutions that require manual updates, the proposed system dynamically learns and adapts to new attack patterns using reinforcement learning.
2. Significantly Lower False Positive & False Negative Rates – The optimized RL framework fine-tunes its decision-making processes, reducing both false alarms and missed detections.
3. Faster and More Efficient Real-Time Threat Response – The model is designed to respond quickly to cyber threats, improving overall network security resilience.
4. Standardized and Scalable Model – Unlike existing RL-based IDS models that lack standardization, the proposed framework ensures consistent and reproducible results across various deployments.
5. Better Computational Efficiency – The proposed approach optimizes RL model selection and training, ensuring an effective balance between detection accuracy and processing speed.
The proposed RL-based IDS significantly outperforms traditional and existing IDS solutions by enhancing adaptability, minimizing false alarms, and improving real-time response capabilities. Its structured approach to RL model selection and optimization makes it a robust and scalable cybersecurity solution.

RESULT
The Reinforcement Learning-Based Intrusion Detection System for Adaptive Cyber Threat Mitigation introduces an innovative approach to enhance cybersecurity by utilizing Reinforcement Learning (RL) for dynamic intrusion detection and adaptive mitigation of cyber threats. Traditional intrusion detection systems (IDS) rely on predefined rules or signature-based techniques, which are ineffective against novel and evolving attacks. In contrast, the proposed system leverages RL to continuously learn and adapt from its interactions with the environment, enabling it to recognize and respond to zero-day attacks, advanced persistent threats (APTs), and unknown intrusion patterns.
The system employs an RL agent that interacts with network traffic, analyzing incoming data and taking actions based on rewards or penalties assigned for correct or incorrect classifications. As the agent learns, it fine-tunes its decision-making process to optimize detection accuracy and minimize false positives, ensuring the detection of sophisticated attack strategies while avoiding overloading the system with irrelevant alerts. This adaptive learning mechanism allows the IDS to continuously evolve, adjusting its behavior as new attack vectors emerge.
The system integrates with existing network defense infrastructures and provides real-time threat detection and mitigation by dynamically updating its policies. By incorporating techniques like Q-learning or Deep Q Networks (DQN), the system can efficiently handle complex attack scenarios and reduce reliance on static rule-based systems. The RL-based IDS is evaluated through various simulated cyber-attack scenarios, demonstrating superior performance in detecting and mitigating threats compared to traditional IDS, with significantly lower false positive rates and higher detection accuracy.
This research contributes to the development of more robust and scalable cybersecurity frameworks, offering adaptive and intelligent protection against evolving cyber threats in real-world applications.
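The continuous-evolution behaviour described in this section can be sketched as an online estimator that refines a per-signature maliciousness score as reward/penalty feedback arrives. The signature names, learning rate, and threshold below are hypothetical; a real agent would learn over feature vectors rather than named signatures.

```python
# Sketch of continuous learning: the detector keeps a running estimate of
# how often each traffic signature turns out to be malicious, updating it
# incrementally from feedback instead of relying on static rules.

def make_detector(learning_rate=0.2):
    scores = {}  # signature -> estimated probability of being malicious

    def update(signature, was_malicious):
        old = scores.get(signature, 0.5)   # uninformed prior for new patterns
        # Exponential moving average toward the observed outcome (0 or 1).
        scores[signature] = old + learning_rate * (float(was_malicious) - old)

    def classify(signature, threshold=0.5):
        return scores.get(signature, 0.5) > threshold

    return update, classify

update, classify = make_detector()
for _ in range(10):
    update("syn-flood-like", True)    # repeated malicious feedback
    update("http-get-burst", False)   # repeated benign feedback

print(classify("syn-flood-like"), classify("http-get-burst"))
```

Because every new observation shifts the estimate, the detector's behaviour drifts with the threat landscape, which is the adaptive property the evaluation above credits for the lower false-positive rates.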

VI CONCLUSION
By using optimized reinforcement learning methods, the proposed RL-based IDS offers notable improvements over conventional and current intrusion detection systems. Its capacity to continuously learn and adapt to changing cyber threats guarantees excellent detection of zero-day attacks while maintaining low false-positive and false-negative rates. The optimized RL framework improves computational efficiency and real-time threat response, providing a scalable and uniform solution for contemporary cybersecurity systems. By addressing the shortcomings of current IDS models, the proposed system offers a stronger, more effective, and more flexible approach to network security, enhancing overall defensive mechanisms against cyber threats.
Claims:
1. We claim that our proposed Reinforcement Learning-based Intrusion Detection System (RL-IDS) continuously adapts to new and evolving cyber threats by learning from real-time network traffic interactions, improving detection accuracy over time.
2. We claim that our RL-IDS effectively mitigates zero-day attacks and advanced persistent threats (APTs) by utilizing a dynamic decision-making process based on continuous learning, rather than relying on static, signature-based detection methods.
3. We claim that by using Reinforcement Learning algorithms such as Q-learning or Deep Q Networks (DQN), our system can efficiently classify and respond to complex attack scenarios, outperforming traditional intrusion detection systems (IDS) that depend on pre-programmed signatures or rules.
4. We claim that our adaptive RL-based approach minimizes false positive rates by allowing the system to learn and adjust its behavior in response to the evolving nature of cyber threats, ensuring more accurate threat detection without overwhelming the system with irrelevant alerts.
5. We claim that the proposed system optimizes the trade-off between detection accuracy and response time, allowing for real-time detection and rapid mitigation of cyber threats without compromising system performance.
6. We claim that our RL-IDS enhances the overall cybersecurity posture by autonomously evolving to handle new types of attacks and vulnerabilities, reducing the need for constant manual updates or rule modifications.
7. We claim that extensive testing of our RL-based IDS in simulated environments has shown it to be more effective than traditional IDS in detecting and mitigating sophisticated, previously unknown cyber threats, providing a more robust defense against emerging risks.
8. We claim that the integration of Reinforcement Learning with traditional IDS infrastructures allows for seamless adoption in existing network security frameworks, offering enhanced threat detection and mitigation capabilities with minimal disruption to ongoing operations.

Documents

Application Documents

# Name Date
1 202541027102-STATEMENT OF UNDERTAKING (FORM 3) [24-03-2025(online)].pdf 2025-03-24
2 202541027102-REQUEST FOR EARLY PUBLICATION(FORM-9) [24-03-2025(online)].pdf 2025-03-24
3 202541027102-FORM-9 [24-03-2025(online)].pdf 2025-03-24
4 202541027102-FORM FOR SMALL ENTITY(FORM-28) [24-03-2025(online)].pdf 2025-03-24
5 202541027102-FORM 1 [24-03-2025(online)].pdf 2025-03-24
6 202541027102-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-03-2025(online)].pdf 2025-03-24
7 202541027102-EVIDENCE FOR REGISTRATION UNDER SSI [24-03-2025(online)].pdf 2025-03-24
8 202541027102-EDUCATIONAL INSTITUTION(S) [24-03-2025(online)].pdf 2025-03-24
9 202541027102-DECLARATION OF INVENTORSHIP (FORM 5) [24-03-2025(online)].pdf 2025-03-24
10 202541027102-COMPLETE SPECIFICATION [24-03-2025(online)].pdf 2025-03-24