
System And Method For Generating Dynamic Synthetic Payloads For Security Testing Using Multi Source Contextual Intelligence And Adaptive Fuzzing

Abstract: A system and method for generating dynamic synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing; wherein the system (10) comprises an ingestion layer (1) to collect internal and external artifacts, and a processing unit (2) including a document parser and classifier (21), SBOM analyzer (22), knowledge graph constructor (23), relevance scorer LLM (24), synthetic payload generator (25), adaptive fuzzing engine (26), orchestration layer (27), reporting and visualization module (28), and a snapshot engine (29). The method involves parsing and analyzing artifacts, constructing a contextual property graph, prioritizing components using LLM-based scoring, generating attack payloads, and refining them using reinforcement learning based on execution feedback. The snapshot engine enables version-aware delta testing. Key advantages include cross-domain data fusion, LLM-driven test generation, adaptive fuzzing using RL (PPO/DQN), and systemic risk understanding via graph models, enabling scalable, intelligent, and efficient security testing.


Patent Information

Application Number: 202521068261
Filing Date: 17 July 2025
Publication Number: 40/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

Persistent Systems
Bhageerath, 402, Senapati Bapat Rd, Shivaji Cooperative Housing Society, Gokhale Nagar, Pune - 411016, Maharashtra, India.

Inventors

1. Mr. Nitish Shrivastava
10764 Farallone Dr, Cupertino, CA 95014-4453, United States

Specification

Description:
FIELD OF INVENTION
The invention relates to the fields of computer science, software testing, cyber security and machine learning. More specifically, it pertains to a system and method for generating synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing.

BACKGROUND OF THE INVENTION
Modern software systems increasingly rely on third-party libraries, open-source components, and external services, which significantly expand the system’s attack surface. As a result, validating the security of such systems has become more complex and requires test strategies that are both scalable and context-aware. Existing security testing tools typically use static payload generators and fixed input sets that do not account for the system’s actual design, software composition, or runtime behavior. This often results in poor vulnerability coverage and an inability to detect issues specific to the architecture or implementation of the application under test.
Organizations generally maintain several internal artifacts including test plans, test cases, architecture diagrams, and design documents that offer valuable insights into the system’s structure and expected behavior. However, current testing workflows rarely incorporate these artifacts when generating test inputs. In parallel, public vulnerability databases such as NIST CVEs, MITRE advisories, and GitHub security feeds provide external intelligence, but they are often disconnected from internal testing pipelines. Even when Software Bill of Materials (SBOM) data is generated using tools like Syft or Trivy, the information is not effectively correlated with known CVEs or linked to component-level testing. Moreover, most tools lack mechanisms for learning from test outcomes, leading to repeated test patterns and stagnant payload generation strategies.
Therefore, the present invention addresses this need by providing a system that unifies internal artifacts and external vulnerability intelligence to drive context-specific and adaptive security testing. It integrates SBOM data, component-level architecture, and public CVE sources into a unified structure that reflects the real-world design and exposure of the system under test. The invention enables identification of coverage gaps and evolving vulnerabilities through graph-based reasoning and relevance scoring. It further incorporates large language model-based payload generation strategies that adapt over time using execution feedback. A snapshot technique is also introduced to capture and compare version-specific system states, thereby supporting regression and delta testing in a time-aware and resource-efficient manner.
Prior Art
US10872157B2 discloses a reinforcement-learning-based system for detecting system vulnerabilities by training a machine learning agent to generate payloads and observe system behavior to optimize attack strategies. While it introduces reinforcement learning for input selection, the system lacks support for multi-source ingestion of organizational artifacts, SBOM analysis, or integration of public vulnerability databases. It does not incorporate graph-based contextual modeling or prompt-based payload generation using large language models, nor does it support snapshot-based version testing.
US11501234B2 presents a pervasive risk management system that leverages domain and situational awareness to assess business and operational risk. Although it uses graph-based metrics and contextual signals across IT and OT systems, its main focus is on enterprise-level policy control and security posture analysis. It fails to provide a framework for test payload generation, adaptive validation, or correlation of CVEs with SBOM components and test artifacts.
None of the prior art provides a comprehensive system that combines internal design documents, architecture diagrams, and test plans with external CVE feeds and SBOM data to generate synthetic payloads. The prior art also lacks mechanisms for LLM-driven scoring, adaptive payload refinement through reinforcement learning, or a unified graph linking vulnerabilities, components, and test cases.
While the above prior art lacks a mechanism to align security testing with the actual structure, vulnerability context, and testing gaps of the system under evaluation, the present invention introduces a system that addresses this disconnect through a dynamic, learning-based approach. By grounding payload generation in both an internal and external understanding of the system, and by adapting its strategies based on observed outcomes, the present invention ensures that security validation is not only responsive to known risks but is also capable of evolving with the system itself. This allows for more precise, context-aware identification of vulnerabilities that traditional methods often overlook.

DEFINITIONS
The expression “system” used hereinafter in this specification refers to an ecosystem comprising, but not limited to, a system with input and output devices, a processing unit, a plurality of mobile devices, and a mobile device-based application. It extends to computing systems such as mobile phones, laptops, computers, PCs, and other digital computing devices.
The expression “input unit” used hereinafter in this specification refers to, but is not limited to, mobile phones, laptops, computers, PCs, keyboards, mice, pen drives or other drives.
The expression “Large Language Models” or “LLMs” used hereinafter in this specification refers to systems that use natural language understanding to interpret and generate text. In this system, they help extract features and suggest metrics.
The expression “Common Vulnerabilities and Exposures” or “CVEs” used hereinafter in this specification refers to a dictionary or list of publicly known cybersecurity vulnerabilities and exposures. These vulnerabilities are assigned unique, standardized identifiers (CVE IDs) to facilitate communication and coordination among different security teams and organizations.
The expression “OCR” or “Optical Character Recognition” used hereinafter in this specification refers to the process of converting images of text (like scanned documents or photos) into editable and searchable text data.
The expression “Natural Language Processing” or “NLP” used hereinafter in this specification refers to computational techniques used to understand, interpret, and generate human language.
The expression “Software Bill of Materials” or “SBOM” used hereinafter in this specification refers to a structured inventory of the components, dependencies, and potential vulnerabilities within a software application; in this system, an SBOM analyzer reads and interprets this inventory to provide component-level insights.
The expression “reinforcement learning models” or “RL models” used hereinafter in this specification include models such as “proximal policy optimization (PPO)” and “deep Q-networks (DQN)”, that refer to pre-defined instructions used to enable intelligent agents to learn optimal strategies through trial and error in dynamic environments. In the context of this system, these models adaptively refine payload generation strategies based on real-world test feedback (e.g., crashes, errors, anomalies).
The expression “deep Q-networks (DQN)” used hereinafter in this specification refers to a value-based method that uses deep neural networks to approximate Q-values, helping the agent choose actions that maximize long-term rewards.
The expression “proximal policy optimization (PPO)” used hereinafter in this specification refers to a policy-based method that improves upon previous strategies by making small, stable updates to the policy using a clipped objective function, which prevents drastic changes and ensures efficient learning.

OBJECTS OF THE INVENTION
The primary object of the present invention is to provide a system and method for generating dynamic synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing.
Another object of the invention is to provide cross-domain data fusion that integrates test plans, design documents, software bills of materials (SBOMs) and common vulnerabilities and exposures (CVE) databases into a single intelligence pipeline.
Yet another object of the invention is to provide a fine-tuned relevance scoring that uses a domain-adapted LLM to calculate component and test priority for synthetic generation.
Yet another object of the invention is to provide an LLM-driven payload design that uses prompting strategies with LLMs to generate realistic, targeted payloads.
Yet another object of the invention is to provide a reinforcement learning-driven adaptation that improves payload generation over time based on live feedback from test execution.
Yet another object of the present invention is to provide a graph-based contextual understanding of components, functions, vulnerabilities and test cases to understand systemic risk.

SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The invention discloses a system and method for generating dynamic synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing; wherein the system comprises the input unit, called the ingestion layer, that collects data from diverse internal and external sources including test plans, architecture diagrams, SBOMs, and CVE databases. The processing unit includes a document parser and classifier, SBOM analyzer, knowledge graph constructor, relevance scorer LLM, synthetic payload generator, adaptive fuzzing engine, and an orchestration layer responsible for feedback-based test execution. It also comprises reporting and risk visualization, and a snapshot engine for capturing system state. The output unit presents results such as attack vectors, annotated SBOMs, and reports.
The system follows a structured multi-step method. Initially, the ingestion layer (1) collects and normalizes structured and unstructured artifacts. The parser and classifier (21) categorizes these documents using NLP and OCR techniques. The SBOM analyzer (22) identifies and maps components to known CVEs. The knowledge graph constructor (23) models relationships across components, CVEs, and test cases. Using fine-tuned LLMs, the relevance scorer (24) ranks high-risk targets and gaps in coverage. Synthetic payloads are generated by the synthetic payload generator (25) using prompt-based LLMs for various attack types. The adaptive fuzzing engine (26) applies reinforcement learning (e.g., PPO/DQN) based on execution outcomes such as crashes or anomalies. The orchestration layer (27) coordinates this pipeline via DAG-based execution and distributes tests. The snapshot engine (29) stores versioned system states for delta analysis, regression testing, and contextual awareness. Results are visualized and reported through the reporting and risk visualization module (28).
The invention offers significant advantages for intelligent and scalable security testing: cross-domain data fusion integrates varied technical and security artifacts into a single analysis pipeline; fine-tuned relevance scoring enables accurate prioritization of components and test scenarios; LLM-driven payload generation creates realistic, context-sensitive attack vectors beyond static rule-based methods; reinforcement learning-driven adaptation improves payload effectiveness over time based on live test feedback; graph-based contextual understanding connects system elements via a knowledge graph; and the snapshot engine provides version-aware, time-specific testing support, enabling regression analysis and efficient test reuse, making the solution robust, intelligent, and adaptive.

BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be made by reference to the following detailed description which is to be taken in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated into and constitute a part of the specification, illustrate one or more embodiments of the present invention and, together with the detailed description, serve to explain the principles and implementations of the invention.
FIG. 1 illustrates a system architecture overview and a stepwise method;
FIG. 2 illustrates a snapshot engine flow diagram; and
FIG. 3 illustrates a payload generation and feedback loop.

DETAILED DESCRIPTION OF INVENTION
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention describes a system and method for generating dynamic synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing. The system (10) comprises an input unit referring to an ingestion layer (1); a processing unit (2) further comprising a document parser and classifier (21), software bill of materials (SBOM) analyzer (22), knowledge graph constructor (23), relevance scorer LLM (24), synthetic payload generator (25), adaptive fuzzing engine (26), a test execution and feedback loop referring to the orchestration layer (27), reporting and risk visualization (28) and a snapshot engine (29); and an output unit (3).
In an embodiment of the invention, the input layer, here the ingestion layer (1), ingests multiple types of data, such as internal artifacts including test plans, test cases, architecture diagrams and design documents; external sources including CVE feeds from NIST, MITRE and GitHub security advisories; and SBOM data generated from tools such as Syft or Trivy. Each artifact is parsed and semantically classified using OCR (Optical Character Recognition), NLP (Natural Language Processing) and language models, wherein the document parser and classifier (21) analyzes and categorizes documents into predefined classes or types based on their content.
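A minimal sketch of the document classification step is shown below, assuming the artifacts have already been reduced to plain text by an upstream OCR step; the class names, keywords and keyword-scoring heuristic are illustrative stand-ins for the NLP and language-model classification described above.

```python
# Minimal sketch of the document parser and classifier (21).
# Assumes artifacts have already been reduced to plain text (e.g., by an
# upstream OCR step); class names and keywords are illustrative only.
from dataclasses import dataclass

CLASS_KEYWORDS = {
    "test_plan": ["test plan", "test strategy", "entry criteria"],
    "architecture": ["architecture", "component diagram", "deployment"],
    "design_doc": ["design", "sequence diagram", "interface"],
    "sbom": ["cyclonedx", "spdx", "package", "purl"],
}

@dataclass
class ParsedArtifact:
    name: str
    text: str
    category: str

def classify(name: str, text: str) -> ParsedArtifact:
    """Assign each ingested artifact to a predefined class by keyword score."""
    lowered = text.lower()
    scores = {
        cls: sum(lowered.count(kw) for kw in kws)
        for cls, kws in CLASS_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    category = best if scores[best] > 0 else "unclassified"
    return ParsedArtifact(name=name, text=text, category=category)

if __name__ == "__main__":
    doc = classify("checkout_design.txt",
                   "Design document with sequence diagram for the checkout interface")
    print(doc.category)  # -> design_doc
```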
In a next embodiment of the invention, the software bill of materials (SBOM) analyzer (22) generates or consumes existing SBOM data to identify constituent libraries, packages and components; it further maps components to known common vulnerabilities and exposures (CVEs) and marks them for prioritization based on exploitability and frequency.
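The following sketch illustrates this component-to-CVE mapping and prioritization, assuming a local CVE index keyed by package name and version; the index contents and the exploitability-times-frequency heuristic are hypothetical and stand in for feeds such as the NIST NVD.

```python
# Illustrative sketch of the SBOM analyzer (22): map components to known
# CVEs and mark them for prioritization. The CVE index and scoring fields
# are hypothetical stand-ins for external vulnerability feeds.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    version: str
    cves: list = field(default_factory=list)
    priority: float = 0.0

# Hypothetical index: (package, version) -> [(cve_id, exploitability, frequency)]
CVE_INDEX = {
    ("libxml2", "2.9.10"): [("CVE-2021-3517", 8.6, 0.7)],
    ("log4j-core", "2.14.1"): [("CVE-2021-44228", 10.0, 0.95)],
}

def analyze_sbom(components):
    for comp in components:
        matches = CVE_INDEX.get((comp.name, comp.version), [])
        comp.cves = [cve_id for cve_id, _, _ in matches]
        # Prioritize by a simple exploitability x frequency heuristic.
        comp.priority = max((expl * freq for _, expl, freq in matches), default=0.0)
    return sorted(components, key=lambda c: c.priority, reverse=True)

if __name__ == "__main__":
    sbom = [Component("log4j-core", "2.14.1"), Component("requests", "2.31.0")]
    for c in analyze_sbom(sbom):
        print(c.name, c.cves, round(c.priority, 2))
```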
In the next embodiment of the invention, the knowledge graph builder or constructor (23) is configured to link all parsed data into a unified property graph, wherein nodes represent components, endpoints, CVEs, test cases and architecture blocks, and edges define relationships such as dependency, exposure, coverage or CVE linkage. The constructed graph is enriched using open-source data and analyzed with graph neural networks (GNNs). Further, the relevance scorer (24) uses a fine-tuned LLM to assess parameters such as which components are high risk, which test cases lack coverage for exposed components, and which CVEs are relevant to specific SBOM items, and assigns relevance scores to prioritize testing.
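A small property-graph sketch follows, using networkx purely for illustration; the nodes, edges and coverage-gap query are hypothetical examples of the structure the knowledge graph constructor (23) and relevance scorer (24) operate on (GNN-based enrichment and LLM scoring are not modelled here).

```python
# Minimal sketch of the knowledge graph constructor (23) as a property graph.
# Node/edge types follow the description: components, endpoints, CVEs,
# test cases; dependency/exposure/coverage/CVE-linkage edges.
import networkx as nx

g = nx.MultiDiGraph()

# Nodes carry a "kind" property distinguishing their role in the graph.
g.add_node("auth-service", kind="component")
g.add_node("/login", kind="endpoint")
g.add_node("CVE-2021-44228", kind="cve")
g.add_node("TC-101", kind="test_case")

# Edges encode relationships such as exposure, CVE linkage and coverage.
g.add_edge("auth-service", "/login", relation="exposes")
g.add_edge("auth-service", "CVE-2021-44228", relation="affected_by")
g.add_edge("TC-101", "/login", relation="covers")

# A simple coverage-gap query: exposed endpoints with no covering test case.
exposed = {v for u, v, d in g.edges(data=True) if d["relation"] == "exposes"}
covered = {v for u, v, d in g.edges(data=True) if d["relation"] == "covers"}
print("uncovered endpoints:", exposed - covered)
```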
In yet a next embodiment of the invention, the synthetic payload generator (25) uses prompt-based LLMs and machine learning (ML) models to generate various types of payloads including, but not limited to, SQL injection, XSS, path traversal, RCE, malformed JSON/XML or large file inputs. Payloads are parameterized and stored with metadata.
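The sketch below shows one way a context-aware prompt could be assembled and the returned payloads stored with metadata; the prompt template, the faked LLM reply and the metadata fields are illustrative assumptions, and no particular LLM provider or API is implied.

```python
# Illustrative sketch of the synthetic payload generator (25): build a
# context-aware prompt for an LLM and store returned payloads with metadata.
# The LLM call itself is not shown; the reply below is faked for illustration.
import datetime
import json
import uuid

PROMPT_TEMPLATE = (
    "You are generating security test inputs.\n"
    "Target component: {component}\n"
    "Known weaknesses: {cves}\n"
    "Produce {n} {attack_type} payloads as a JSON list of strings."
)

def build_prompt(component, cves, attack_type="SQL injection", n=5):
    return PROMPT_TEMPLATE.format(component=component, cves=", ".join(cves),
                                  attack_type=attack_type, n=n)

def store_payloads(payloads, component, attack_type):
    """Parameterize payloads and attach metadata for the payload corpus."""
    return [
        {
            "id": str(uuid.uuid4()),
            "component": component,
            "attack_type": attack_type,
            "payload": p,
            "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        for p in payloads
    ]

if __name__ == "__main__":
    prompt = build_prompt("auth-service", ["CVE-2021-44228"], "SQL injection", 3)
    # In the real system the prompt would be sent to an LLM backend.
    fake_llm_reply = ["' OR '1'='1", "admin'--", "1; DROP TABLE users--"]
    print(json.dumps(store_payloads(fake_llm_reply, "auth-service",
                                    "SQL injection"), indent=2))
```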
In yet a next embodiment of the invention, the adaptive fuzzing engine (26) implements reinforcement learning with reward signals from test outcomes such as crashes, memory leaks, HTTP 500s, stack traces and unexpected behaviors. Further, the engine (26) uses RL models (e.g., PPO, DQN) to evolve payload generation strategies and tracks effectiveness over time to avoid redundant testing. Both approaches, viz. PPO and DQN, help the fuzzing engine evolve over time by learning which types of payloads are more likely to uncover security vulnerabilities, leading to smarter, non-repetitive, and increasingly effective testing.
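To make the feedback idea concrete, the following highly simplified sketch derives a reward from observed test outcomes and updates the value of each payload strategy with a tabular, epsilon-greedy rule; in the described system a PPO or DQN agent would play this role, and the reward values and strategy names shown are assumptions.

```python
# Simplified sketch of the adaptive fuzzing feedback loop: rewards are derived
# from observed outcomes and used to update each payload strategy's value.
# A PPO/DQN agent would replace this tabular update in the described system.
import random

REWARDS = {"crash": 10.0, "http_500": 5.0, "memory_leak": 7.0,
           "stack_trace": 4.0, "anomaly": 2.0, "none": -1.0}

strategies = {"sql_injection": 0.0, "xss": 0.0, "path_traversal": 0.0}
ALPHA, EPSILON = 0.2, 0.1  # learning rate and exploration rate

def choose_strategy():
    """Epsilon-greedy choice among payload strategies."""
    if random.random() < EPSILON:
        return random.choice(list(strategies))
    return max(strategies, key=strategies.get)

def update(strategy, outcome):
    """Move the strategy's value toward the reward observed for this outcome."""
    reward = REWARDS.get(outcome, 0.0)
    strategies[strategy] += ALPHA * (reward - strategies[strategy])

if __name__ == "__main__":
    # Simulated feedback: outcomes would really come from the test runner.
    for outcome in ["none", "http_500", "crash", "none", "anomaly"]:
        update(choose_strategy(), outcome)
    print(strategies)
```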
In yet another embodiment of the invention, the test execution and feedback loop, or the orchestration layer (27), uses DAG-based systems to sequence ingestion, scoring, generation, execution and feedback, and also allows distributed execution across multiple nodes (Kubernetes/Ray). The orchestration layer (27), configured to drive the payload generation and feedback loop (a minimal sketch of this loop follows the list below), further comprises:
a component / CVE selection (27.1) that identifies the device-based application component or known vulnerability to target,
a prompted LLM generation (27.2) that uses a prompt-driven LLM to generate attack payloads,
a payload corpus (27.3) that stores the generated payloads for testing,
a test runner (27.4) that executes payloads against the target system,
a test outcome (27.5) that includes 500s, leaks and anomalies, configured to collect test results indicating server errors or anomalies,
a reward scoring engine (27.6) that evaluates test outcomes to assign reward scores based on effectiveness, and
an RL model (PPO/DQN) (27.7) that uses reinforcement learning to improve payload generation based on reward feedback. Further, the reporting and risk visualization module (28) reports and visualizes the execution results and feedback.
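A minimal end-to-end sketch of this loop is given below; each function is a plain stand-in for the corresponding sub-module (27.1-27.7), and all names, signatures and simulated test outcomes are illustrative rather than the actual implementation.

```python
# End-to-end sketch of the payload generation and feedback loop driven by the
# orchestration layer (27). Stage functions stand in for sub-modules 27.1-27.7.
import random

def select_target(graph_scores):                      # 27.1 component / CVE selection
    return max(graph_scores, key=graph_scores.get)

def generate_payloads(target, n=3):                   # 27.2 prompted LLM generation
    return [f"payload-{target}-{i}" for i in range(n)]

def run_tests(payloads):                              # 27.4 test runner
    # 27.5 test outcomes (simulated): ok, server error, or crash
    return [random.choice(["ok", "http_500", "crash"]) for _ in payloads]

def score(outcomes):                                  # 27.6 reward scoring engine
    return sum({"crash": 10, "http_500": 5, "ok": -1}[o] for o in outcomes)

def feedback_loop(graph_scores, iterations=3):
    corpus = []                                       # 27.3 payload corpus
    for _ in range(iterations):
        target = select_target(graph_scores)
        payloads = generate_payloads(target)
        corpus.extend(payloads)
        reward = score(run_tests(payloads))
        # 27.7 RL-style update: deprioritize targets that stop paying off.
        graph_scores[target] += 0.1 * (reward - graph_scores[target])
    return corpus, graph_scores

if __name__ == "__main__":
    print(feedback_loop({"auth-service": 1.0, "payment-api": 0.5}))
```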
In yet another embodiment of the invention, the snapshot engine (29) employs a method that freezes the state of the target system, including the architecture configuration, SBOM snapshot, design logic context and active test coverage. This snapshot is stored as a composite vector and graph embedding that is used to compare the historical versus the current state; applied to delta testing, it generates new payloads only for new or modified components and serves as a versioned reference point for recurring testing or regression testing. Each snapshot is uniquely identified and can be used to rewind the fuzzing engine to test previous system versions, analyze which CVEs were applicable then versus now, and guide LLM prompts with “time-specific” system context, allowing the system to be time-aware, version-aware, and cost-efficient in testing. Further, the output unit (3) displays final outputs including a synthetic payload corpus with coverage maps, an annotated SBOM with attack vectors, risk visualization and a summary report.
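A conceptual sketch of snapshot capture and delta computation follows; the snapshot fields, hashing scheme and version comparison are assumptions meant only to illustrate how delta testing can restrict payload generation to new or modified components (the composite vector and graph embedding are not modelled here).

```python
# Conceptual sketch of the snapshot engine (29): freeze a version-specific view
# of the system (SBOM + coverage) and compute the delta against a previous
# snapshot so new payloads are generated only for new/modified components.
import hashlib
import json

def take_snapshot(version, sbom, coverage):
    """Capture a uniquely identified, versioned snapshot of system state."""
    state = {"version": version, "sbom": sbom, "coverage": sorted(coverage)}
    digest = hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
    return {"id": digest[:12], **state}

def delta(old, new):
    """Return components that are new or whose version changed."""
    old_sbom = dict(old["sbom"])
    return {name: ver for name, ver in new["sbom"] if old_sbom.get(name) != ver}

if __name__ == "__main__":
    s1 = take_snapshot("v1.0", [("log4j-core", "2.14.1"), ("requests", "2.31.0")],
                       ["TC-101"])
    s2 = take_snapshot("v1.1", [("log4j-core", "2.17.1"), ("requests", "2.31.0"),
                                ("flask", "3.0.0")], ["TC-101"])
    print("changed/new components needing fresh payloads:", delta(s1, s2))
```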
In a preferred embodiment of the invention, the system employs a stepwise method for generating dynamic synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing, comprising the following steps, an illustrative sequencing of which is sketched after the list:
- receiving and channeling input data from multiple sources by the ingestion layer (1),
- extracting and categorizing information from documents by the document parser and classifier (21),
- examining software bills of materials for component insights by the SBOM analyzer (22),
- constructing a structured representation of the extracted knowledge by the knowledge graph builder (23),
- evaluating the importance of information using a language model by the relevance scorer LLM (24),
- creating artificial test payloads based on relevance using the synthetic payload generator (25),
- dynamically adjusting testing based on system responses using the adaptive fuzzing engine (26),
- running test cases and refining them using feedback by the orchestration layer (27),
- presenting findings and risk levels in an interpretable format by the reporting and risk visualization module (28), and
- capturing and storing system states for future reference and tracking by the snapshot engine (29).
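The sketch below orders these steps as a small DAG using the Python standard library's TopologicalSorter; the stage names and dependencies are an illustrative reading of the method above and stand in for a full DAG orchestrator such as the Kubernetes/Ray-based execution mentioned earlier.

```python
# Sketch of the stepwise pipeline as a DAG, matching the order of the method
# steps above (1 -> 21/22 -> 23 -> 24 -> 25 -> 26 -> 27 -> 28/29).
from graphlib import TopologicalSorter

# Each key maps a stage to the set of stages it depends on.
PIPELINE = {
    "parse_documents (21)":   {"ingest (1)"},
    "analyze_sbom (22)":      {"ingest (1)"},
    "build_graph (23)":       {"parse_documents (21)", "analyze_sbom (22)"},
    "score_relevance (24)":   {"build_graph (23)"},
    "generate_payloads (25)": {"score_relevance (24)"},
    "adaptive_fuzzing (26)":  {"generate_payloads (25)"},
    "orchestrate_tests (27)": {"adaptive_fuzzing (26)"},
    "report (28)":            {"orchestrate_tests (27)"},
    "snapshot (29)":          {"orchestrate_tests (27)"},
}

if __name__ == "__main__":
    for stage in TopologicalSorter(PIPELINE).static_order():
        print("run:", stage)
```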
In yet another preferred embodiment, the snapshot engine, as illustrated in FIG. 2, employs a workflow comprising the following steps:
- initiating the target System, which is the source environment where the software or system under test resides,
- capturing a versioned snapshot, including the SBOM, architecture, coverage and test plans, using the snapshot engine,
- updating the knowledge graph with contextual information from the versioned snapshot LLM,
- receiving the delta context from the knowledge graph to prepare inputs by the generator,
- executing synthetic payloads by the tester to evaluate the system and gather results,
- storing test outcomes by the snapshot engine and linking them with the corresponding snapshot,
- integrating the results into the knowledge graph to refine the contextual understanding of the LLM, and using the refined context in the generator to generate new test payloads for improved coverage,
- running the newly generated test payloads by the tester on the target system for continuous validation.
According to an embodiment of the invention, the system and method of the present invention offer several key advantages that enhance automated testing and security assessment workflows: cross-domain data fusion allows seamless integration of diverse inputs such as test plans, design artifacts, SBOMs (Software Bills of Materials), and CVE databases, creating a unified intelligence pipeline; fine-tuned relevance scoring via a domain-adapted LLM enables precise prioritization of components and test scenarios for synthetic generation; LLM-driven payload design moves beyond traditional rule-based fuzzing to generate realistic and targeted payloads through advanced prompting techniques; reinforcement learning-driven adaptation allows the payload generation process to continuously improve, learning from real-time test feedback to enhance effectiveness; and a graph-based contextual understanding model maps relationships across components, functions, vulnerabilities, and test cases using a property graph, enabling deep insights into systemic risk and test coverage gaps.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations and modifications can be made to the preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
Claims:
CLAIMS:
We claim,
1. A system and method for generating dynamic synthetic payloads for security testing using multi-source contextual intelligence and adaptive fuzzing;
wherein the system (10) comprises an ingestion layer (1), a processing unit (2) further comprising a document parser and classifier (21), software bill of materials (SBOM) analyzer (22), knowledge graph constructor (23), relevance scorer LLM (24), synthetic payload generator (25), adaptive fuzzing engine (26), a test execution and feedback loop referring to the orchestration layer (27), reporting and risk visualization (28) and a snapshot engine (29), and an output unit (3), working in co-ordination to employ a stepwise method for generating dynamic synthetic payloads for security testing;

characterized in that:
the system employs a stepwise method for generating dynamic synthetic payloads for security testing comprising the steps of;
- receiving and channeling input data from multiple sources by the ingestion layer (1),
- extracting and categorizing information from documents by the document parser and classifier (21),
- examining software bills of materials for component insights by the SBOM analyzer (22),
- constructing a structured representation of the extracted knowledge by the knowledge graph builder (23),
- evaluating the importance of information using a language model by the relevance scorer LLM (24),
- creating artificial test payloads based on relevance using the synthetic payload generator (25),
- dynamically adjusting testing based on system responses using the adaptive fuzzing engine (26),
- running test cases and refining them using feedback by the orchestration layer (27),
- presenting findings and risk levels in an interpretable format by the reporting and risk visualization module (28), and
- capturing and storing system states for future reference and tracking by the snapshot engine (29).

2. The system and method as claimed in claim 1, wherein the ingestion layer (1) ingests a plurality of data types including internal artifacts such as test plans, test cases, architecture diagrams and design documents; external sources such as CVE feeds from NIST, MITRE and GitHub security advisories; and SBOM data generated from tools such as Syft or Trivy; wherein each artifact is parsed and semantically classified using OCR (Optical Character Recognition), NLP (Natural Language Processing) and language models, using a document parser and classifier (21) that analyzes and categorizes documents into predefined classes or types based on their content.

3. The system and method as claimed in claim 1, wherein the software bill of materials (SBOM) analyzer (22) generates or consumes existing SBOM data to identify constituent libraries, packages and components; it further maps components to known common vulnerabilities and exposures (CVEs) and marks them for prioritization based on exploitability and frequency.

4. The system and method as claimed in claim 1, wherein the knowledge graph builder or constructor (23) links all parsed data into a unified property graph including nodes that represent components, endpoints, CVEs, test cases, architecture blocks; and the edges define relationships such as dependency, exposure, coverage or CVE linkage; enriching the constructed graph using open-source data, analyzed with graph neural networks (GNNs).

5. The system and method as claimed in claim 1, wherein the relevance scorer (24) uses a fine-tuned LLM to assess parameters such as which components are high risk, which test cases lack coverage for exposed components, and which CVEs are relevant to specific SBOM items; and assigns relevance scores to prioritize testing.

6. The system as claimed in claim 1, wherein the synthetic payload generator (25) uses prompt-based LLMs and machine learning (ML) models to generate various types of payloads including, but not limited to, SQL injection, XSS, path traversal, RCE, malformed JSON/XML or large file inputs, which are parameterized and stored with metadata.

7. The system and method as claimed in claim 1, wherein the adaptive fuzzing engine (26) implements reinforcement learning with reward signals from test outcomes such as crashes, memory leaks, HTTP 500s, stack traces, unexpected behaviors; uses RL models to evolve payload generation strategies; and tracks effectiveness over time to avoid redundant testing.

8. The system and method as claimed in claim 1, wherein the test execution and feedback loop, or the orchestration layer (27), uses DAG-based systems to sequence ingestion, scoring, generation, execution and feedback, and also allows distributed execution across multiple nodes; wherein the orchestration layer (27) is configured to drive the payload generation and feedback loop using;
- a component / CVE selection (27.1) that identifies the device-based application component or known vulnerability to target,
- a prompted LLM generation (27.2) that uses a prompt-driven LLM to generate attack payloads,
- a payload corpus (27.3) that stores the generated payloads for testing,
- a test runner (27.4) that executes payloads against the target system,
- a test outcome (27.5) that includes 500s, leaks and anomalies, configured to collect test results indicating server errors or anomalies,
- a reward scoring engine (27.6) that evaluates test outcomes to assign reward scores based on effectiveness, and
- an RL model (PPO/DQN) (27.7) that uses reinforcement learning to improve payload generation based on reward feedback.

9. The system and method as claimed in claim 1, wherein the snapshot engine (29) employs a workflow comprising the steps of;
- initiating the target system, which is the source environment where the software or system under test resides,
- capturing a versioned snapshot, including the SBOM, architecture, coverage and test plans, using the snapshot engine,
- updating the knowledge graph with contextual information from the versioned snapshot LLM,
- receiving the delta context from the knowledge graph to prepare inputs by the generator,
- executing synthetic payloads by the tester to evaluate the system and gather results,
- storing test outcomes by the snapshot engine and linking them with the corresponding snapshot,
- integrating the results into the knowledge graph to refine the contextual understanding of the LLM, and using the refined context in the generator to generate new test payloads for improved coverage,
- running the newly generated test payloads by the tester on the target system for continuous validation.

10. The system and method as claimed in claim 1, wherein the snapshot engine (29) freezes the state of the target system including architecture configuration, SBOM snapshot, design logic context and active test coverage; stores it as a composite vector; uses graph embedding to compare historical versus current state, applied for delta testing; generates new payloads for new/modified components; serves as a versioned reference point for recurring testing or regression testing; wherein each snapshot is uniquely identified and used to rewind the fuzzing engine to test previous system versions, analyze which CVEs were applicable then vs now, guide LLM prompts with “time-specific” system context, allowing the system to be time-aware, version-aware, and cost-efficient in testing.
Dated this 17th day of July, 2025.

Documents

Application Documents

# Name Date
1 202521068261-STATEMENT OF UNDERTAKING (FORM 3) [17-07-2025(online)].pdf 2025-07-17
2 202521068261-POWER OF AUTHORITY [17-07-2025(online)].pdf 2025-07-17
3 202521068261-FORM 1 [17-07-2025(online)].pdf 2025-07-17
4 202521068261-FIGURE OF ABSTRACT [17-07-2025(online)].pdf 2025-07-17
5 202521068261-DRAWINGS [17-07-2025(online)].pdf 2025-07-17
6 202521068261-DECLARATION OF INVENTORSHIP (FORM 5) [17-07-2025(online)].pdf 2025-07-17
7 202521068261-COMPLETE SPECIFICATION [17-07-2025(online)].pdf 2025-07-17
8 Abstract.jpg 2025-08-04
9 202521068261-FORM-9 [26-09-2025(online)].pdf 2025-09-26
10 202521068261-FORM 18 [01-10-2025(online)].pdf 2025-10-01