Abstract: The invention provides a system and method for incrementally fine-tuning large language models using curated datasets managed in a lineage-aware very large database. Data from heterogeneous enterprise sources is ingested, transformed, and stored with lineage, embeddings, and graph indexes. A change capture mechanism detects deltas in upstream systems, and an impact analyzer expands the affected set using lineage joins, graph closure, and semantic similarity. A Model Consumption Pack – Delta (MCP-DELTA) is constructed containing updated training examples, invalidated counter-examples, contextual neighbors, and a manifest including schema hashes, lineage slices, and policy decisions. Parameter-efficient fine-tuning updates adapters while keeping base parameters frozen, with replay sampling and stability mechanisms to mitigate catastrophic forgetting. Evaluation and gating modules apply regression, delta-sensitive, and safety tests to govern promotion or rollback. Periodic roll-up consolidates multiple MCP-DELTA artifacts into refreshed MCP-Full baselines. The invention enables transactional, lineage-aware updates to large language models, ensuring efficiency, freshness, auditability, and compliance.
Description:
FIELD OF THE INVENTION
The present invention relates generally to artificial intelligence and database systems. More specifically, it relates to a system and method for incremental fine-tuning of large language models using curated datasets managed within very large databases (VLDBs).
BACKGROUND OF THE INVENTION
In most enterprises today, data is constantly changing: new records are added, existing information is modified, and old entries are deleted across multiple systems ranging from legacy relational databases to modern APIs and document stores. Large language models (LLMs), however, are typically trained on large but static datasets, so they quickly become outdated as enterprise data evolves.
Traditional fine-tuning approaches require retraining the model on the entire dataset whenever new data becomes available. This process is extremely resource-intensive, slow, and impractical at enterprise scale. It also introduces risks of catastrophic forgetting, where important past knowledge is lost, and lacks the traceability required for compliance and audits. Existing solutions for continual learning often fall short because they cannot handle the scale of enterprise data, provide limited mechanisms for tracking lineage of training data, or fail to support transactional updates. As a result, enterprises are unable to maintain models that are both current and reliable without incurring significant cost and complexity.
Therefore, there is a need for a system that can incrementally update large language models using only the changes or deltas detected in curated enterprise datasets. Such a system should be able to track lineage, capture semantic and relational dependencies, generate targeted update packs, and apply parameter-efficient fine-tuning methods. This would allow organizations to keep their LLMs fresh, efficient, auditable, and aligned with evolving enterprise knowledge at scale.
Prior Art:
For instance, WO2024215729A1 discloses adapter-based parameter-efficient fine-tuning techniques in which small modules (adapters) are trained while the base model remains frozen. The disclosure provides valuable approaches for improving efficiency in updating large models and supports modular deployment of adapter components. However, the focus is limited to the adapter mechanism itself. It does not describe a lineage-aware framework for mapping enterprise data changes to training updates, nor does it propose a system for constructing structured delta packs (MCP-DELTA) containing updated and invalidated examples tied to source provenance. Further, it lacks any disclosure of impact analysis through graph closure and semantic neighbor expansion, evaluation and gating workflows for transactional promotion or rollback, or roll-up compaction to maintain long-term model stability.
US20230061341A1 discloses techniques for combining data lineage tracking with vector search in order to identify semantically similar or related records. While it advances methods for provenance-aware retrieval and embedding-driven similarity, its focus remains on improving record search and discovery. It does not extend these mechanisms into a model fine-tuning pipeline, nor does it disclose packaging training updates as delta artifacts for incremental Large Language Model (LLM) adaptation. Moreover, it lacks parameter-efficient fine-tuning methods, gating and rollback semantics, or the governance-by-design policies for PII masking and auditability that are central to the present invention.
Although the above prior arts contribute important elements in the areas of adapter-based training and lineage-aware vector retrieval, neither provides a comprehensive, end-to-end framework for incremental fine-tuning of large language models driven by transactional deltas in enterprise data.
DEFINITIONS
The expression “system” used hereinafter in this specification refers to an ecosystem comprising, but not limited to, a user, input and output devices, a processing unit, a plurality of mobile devices, a mobile device-based application to identify dependencies and relationships between diverse businesses, a visualization platform, and an output; and is extended to computing systems such as mobiles, laptops, computers, PCs, etc.
The expression “input unit” used hereinafter in this specification refers to, but is not limited to, mobile, laptops, computers, PCs, keyboards, mouse, pen drives or drives.
The expression “output unit” used hereinafter in this specification refers to, but is not limited to, an onboard output device, a user interface (UI), a display kit, a local display, a screen, a dashboard, or a visualization platform enabling the user to visualize, observe or analyze any data or scores provided by the system.
The expression “processing unit” refers to, but is not limited to, a processor of at least one computing device that optimizes the system.
The expression “Very Large Database (VLDB)” used hereinafter in this specification refers to a database containing such a massive amount of data, often in the terabyte range or billions of rows, that it requires specialized techniques and management methodologies for storage, processing, and maintenance beyond what traditional systems can handle efficiently.
The expression “incremental learning system” used hereinafter in this specification refers to a method, particularly in machine learning and education, where knowledge is acquired gradually over time, building upon existing information rather than discarding it.
The expression "Transactional deltas" refers to, but is not limited to, a mechanism for capturing and processing changes (deltas) made to data or systems within a transactional framework, ensuring data integrity and enabling efficient updates.
The expression “Lineage graph” used hereinafter in this specification refers to a graph that represents the flow of data through a data stack, displaying upstream entities to the left and downstream entities to the right.
The expression “Model Consumption Pack (MCP)” used hereinafter in this specification refers to, but is not limited to, a versioned, auditable artifact that packages curated training examples together with a manifest describing their provenance, schema hashes, hyperparameters, and policy decisions for consumption by a model training pipeline; an MCP-Full contains the complete baseline training set, while an MCP-DELTA contains only the incremental updates derived from detected changes.
OBJECTS OF THE INVENTION
The primary object of the present invention is to provide a system and method for incremental fine-tuning of large language models using curated datasets stored in a lineage-aware very large database (VLDB).
Another object of the invention is to detect changes (deltas) in enterprise data through change data capture, and to map those deltas to curated training records using lineage metadata.
Yet another object of the invention is to expand the impacted training set through graph closure over entity relationships and semantic similarity searches, ensuring that both direct and related updates are included.
A further object of the invention is to generate a Model Consumption Pack – Delta (MCP-DELTA) that contains updated examples, invalidated counter-examples, contextual neighbors, and associated policy and lineage metadata in a versioned, auditable form.
An additional object of the invention is to apply parameter-efficient fine-tuning methods, such as adapters or LoRA, to update only lightweight modules of the model incrementally, thereby reducing cost and computation.
Another object of the invention is to evaluate fine-tuned models using regression tests, delta-sensitive checks, and safety validations, and to allow transactional promotion or rollback of updates based on gating policies.
A still further object of the invention is to periodically consolidate multiple MCP-DELTA artifacts into a refreshed MCP-Full baseline, preventing fragmentation and ensuring long-term stability and reproducibility of model updates.
SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The present invention describes a system and method for incrementally fine-tuning large language models using curated datasets maintained in a lineage-aware very large database. The system enables transactional, lineage-aware updates to large language models through delta-based fine-tuning, thereby improving freshness, reducing cost, and ensuring auditability. The system comprises an input unit, a processing unit, and an output unit, wherein the processing unit comprises a curation and ingestion module, a lineage-aware storage module, a change detection and impact analyzer module, a Model Consumption Pack – Delta (MCP-DELTA) builder module, a parameter-efficient fine-tuning module, an evaluation and gating module, and a roll-up and re-baseline module.
According to an aspect of the present invention, the method for lineage-aware incremental fine-tuning of large language models comprises the steps of: ingesting heterogeneous enterprise data into a curated very large database; curating the data into canonical training records with lineage metadata, embeddings, and graph relationships; detecting changes in the data via change data capture; and performing impact analysis to determine affected training records using lineage joins, graph closure, and semantic expansion.
According to an aspect of the present invention, the method further includes generating a Model Consumption Pack – Delta (MCP-DELTA) containing updated training examples, invalidated counter-examples, and contextual neighbors, along with a manifest including schema hashes, lineage slices, and policy decisions; applying parameter-efficient fine-tuning to update the model incrementally using the MCP-DELTA, with base parameters frozen and only adapters trained; mixing delta examples with replay buffers to prevent catastrophic forgetting; and storing training artifacts including metrics, checkpoints, and adapter binaries with registry pointers for auditable rollback.
According to an aspect of the present invention, the method further includes evaluating the updated model using regression suites, delta-sensitive tests, and safety checks; promoting or rolling back the updated model based on gating policies; and periodically consolidating multiple MCP-DELTA artifacts into a refreshed MCP-Full baseline to prevent fragmentation and ensure reproducibility.
BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be gained by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated into and constitute a part of the specification, illustrate one or more embodiments of the present invention and, together with the detailed description, serve to explain the principles and implementations of the invention.
Fig. 1 illustrates the system architecture of the lineage-aware VLDB and ingestion pipeline.
Fig. 2 illustrates the delta detection and impact analysis flow.
Fig. 3 illustrates the MCP-DELTA generation and packaging process.
Fig. 4 illustrates the fine-tuning loop with parameter-efficient adapters.
Fig. 5 illustrates the evaluation, gating, and rollback cycle.
Fig. 6 illustrates the roll-up and re-baselining mechanism.
DETAILED DESCRIPTION OF THE INVENTION
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention describes a system and method for incrementally fine-tuning large language models (LLMs) using curated datasets maintained in a lineage-aware very large database (VLDB). The system provides transactional updates to model knowledge through delta-based fine-tuning, thereby ensuring efficiency, freshness, auditability, and enterprise compliance.
According to the embodiment of the present invention, as described in FIG. 1, the system comprises an input unit, a processing unit, and an output unit, wherein the processing unit further comprises a curation and ingestion module, a lineage-aware storage module, a change detection and impact analyzer module, a Model Consumption Pack – Delta (MCP-DELTA) builder module, a parameter-efficient fine-tuning module, an evaluation and gating module, and a roll-up and re-baseline module.
According to the embodiment of the present invention, the ingestion and curation module performs ingestion and curation through connectors for legacy and modern sources. Connectors include JDBC/ODBC for relational databases, flat-file parsers for COBOL dumps, and API or Kafka-based ingestion for modern sources. The connectors standardize timestamps, encodings, and source identifiers. Transform and clean operations normalize units, resolve codes to canonical vocabularies, handle nullability, and enforce schema contracts. Schema alignment and entity resolution use deterministic keys such as customer_id and probabilistic matching such as name and address similarity, resulting in a canonical curated.records table. Quality scoring is applied using rule-based and machine learning anomaly checks, with below-threshold rows quarantined. An embedding service produces dense vectors per training unit, which are ingested into a vector index. A graphify service emits entity and relation graph edges for dependency-aware reachability. A lineage writer records provenance edges from source_key to record_id, enabling SCD2 and as-of traversal.
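In an illustrative, non-limiting example, the curation step that produces a canonical record, its SCD2 lineage edge, and its embedding may be sketched in Python as follows; the helper callables (normalize_units, resolve_entity, embed_text) and field names are hypothetical placeholders, not a prescribed API:

import hashlib
import json
from datetime import datetime, timezone

def curate_row(source_key, raw_row, transform_id, normalize_units, resolve_entity, embed_text):
    # Canonical record shaped like curated.records
    payload = normalize_units(raw_row)                          # unit/code normalization, nullability handling
    entity_id = resolve_entity(payload)                         # deterministic key or probabilistic match
    record_id = hashlib.sha256((source_key + json.dumps(payload, sort_keys=True)).encode("utf-8")).hexdigest()
    asof_ts = datetime.now(timezone.utc).isoformat()
    record = {"record_id": record_id, "entity_id": entity_id, "asof_ts": asof_ts, "payload": payload}
    # Provenance edge shaped like lineage.graph, opened with SCD2 semantics (valid_to left open)
    lineage_edge = {"source_key": source_key, "record_id": record_id,
                    "valid_from": asof_ts, "valid_to": None, "transform_id": transform_id}
    # Dense vector for the vector index
    embedding = embed_text(json.dumps(payload, sort_keys=True))
    return record, lineage_edge, embedding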
According to the embodiment of the present invention, the curated dataset is maintained in a very large database optimized for mixed workloads including high-throughput ingestion, analytical scans, approximate nearest neighbor queries, and graph traversals. The lineage graph is queryable for precise determination of which curated units came from which sources. Policy and catalog stores manage RBAC, PII tags, schema hashes, and training policy identifiers, thereby ensuring compliance and auditability.
According to the embodiment of the present invention, as illustrated in FIG. 2, the change detection and impact analysis module is designed to identify and expand the set of training records affected by upstream data changes. A CDC normalizer flattens heterogeneous change logs into a unified schema including source_key, change_type, change_ts, before, and after values. The change detector operates either on a time-windowed selection or through stream triggers. A lineage join maps source_key to record_id with as-of semantics. A graph closure expands the impacted set to k-hop neighbors to capture induced changes such as thresholds, aggregates, and rollups. A semantic expansion retrieves near-duplicates or paraphrases above similarity thresholds using vector search. The impacted set is deduplicated and classified into add, modify, or invalidate, with change magnitude features computed for weighting. Counter-examples are created for invalidated records to supersede obsolete knowledge.
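In an illustrative, non-limiting example, the CDC normalizer may be sketched as follows; the source-specific field names (for a log-based connector and a time-windowed batch diff) are assumptions for illustration, not a prescribed connector API:

def normalize_change(raw_event, source_system):
    # Flatten a source-specific change event into the unified cdc.changes schema.
    if source_system == "log_based":                  # assumed layout of a log-based CDC event
        op_map = {"c": "I", "u": "U", "d": "D"}
        return {"source_key": raw_event["key"],
                "change_type": op_map[raw_event["op"]],
                "change_ts": raw_event["ts"],
                "before": raw_event.get("before"),
                "after": raw_event.get("after")}
    if source_system == "batch_diff":                 # assumed layout of a time-windowed table diff
        return {"source_key": raw_event["pk"],
                "change_type": raw_event["diff_kind"],
                "change_ts": raw_event["snapshot_ts"],
                "before": raw_event.get("old_row"),
                "after": raw_event.get("new_row")}
    raise ValueError("unknown source system: " + source_system)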
According to the embodiment of the present invention, as illustrated in FIG. 3, the MCP-DELTA builder module constructs a delta training pack as a first-class, auditable artifact. Selection and weighting are applied to impacted examples. Each example is weighted using the formula:
w_i = α · r_i + β · m_i
where r_i represents recency, m_i represents change magnitude, and α and β are coefficients that determine their relative importance. This ensures that more recent and more impactful changes receive higher emphasis in fine-tuning. Privacy and policy enforcement are applied, including PII masking, redaction, or exclusion by policy. A replay selector samples 1–5% of data from the MCP-Full baseline to stabilize adapter training. A manifest generator produces a self-describing JSON manifest including source versions, schema hash, hyperparameters, evaluation suites, policy decisions, and lineage slice. The MCP-DELTA is packaged as a versioned artifact containing the manifest, examples, and context blobs, and stored in the MCP registry.
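In an illustrative, non-limiting example, the weighting w_i = α · r_i + β · m_i may be computed as follows; the coefficient values, the linear recency decay, and the normalization bounds are assumptions chosen only for illustration:

def example_weight(age_days, change_magnitude, alpha=0.6, beta=0.4,
                   max_age_days=365.0, max_magnitude=1.0):
    # Recency r_i: 1.0 for a change made today, decaying linearly to 0.0 at max_age_days
    r_i = max(0.0, 1.0 - age_days / max_age_days)
    # Change magnitude m_i: clipped to [0, 1] relative to an assumed maximum
    m_i = min(change_magnitude / max_magnitude, 1.0)
    return alpha * r_i + beta * m_i

# e.g. a 10-day-old change with magnitude 0.8 receives weight 0.6 * 0.97 + 0.4 * 0.8 ≈ 0.90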
According to the embodiment of the present invention, as illustrated in FIG. 4, the parameter-efficient fine-tuning module applies incremental updates to the large language model using only the MCP-DELTA. Base parameters remain frozen, and adapters such as LoRA or IA3 are trained. Replay mixing is performed by combining delta examples with a small replay buffer in each batch. Adapter banks are maintained per domain or time slice and may be merged through low-rank summation or routed by metadata. The incremental fine-tuning loss is defined as:
L = (1/N) ∑_i w_i · ℓ(f_{θ,ϕ}(x_i), y_i) + μ · ‖ϕ − ϕ*‖₂² (+ ν · Ω_EWC(θ))
where w_i represents the weight of each training example combining recency and change magnitude, ℓ(f_{θ,ϕ}(x_i), y_i) represents the prediction loss on input x_i with ground truth y_i, θ denotes the frozen base model parameters, and ϕ denotes the trainable adapter parameters. The term μ · ‖ϕ − ϕ*‖₂² penalizes deviation from the prior adapter parameters ϕ* to maintain stability across updates. The optional term ν · Ω_EWC(θ) represents an elastic weight consolidation penalty applied if any subset of the base parameters is unfrozen, thereby reducing catastrophic forgetting.
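In an illustrative, non-limiting example, and assuming a PyTorch implementation in which the adapter parameters are exposed separately from the frozen base (all names here are placeholders), the loss above, without the optional EWC term, may be sketched as follows:

import torch
import torch.nn.functional as F

def delta_finetune_loss(model, batch, weights, adapter_params, prev_adapter_params, mu=0.01):
    # Weighted prediction loss over a batch mixing MCP-DELTA examples with replay samples
    logits = model(batch["inputs"])                              # frozen base, trainable adapters
    per_example = F.cross_entropy(logits, batch["labels"], reduction="none")
    task_loss = (weights * per_example).sum() / weights.numel()  # (1/N) * sum_i w_i * loss_i
    # L2 penalty keeping the new adapter parameters close to the previously promoted ones
    drift = sum(((p - p_star.detach()) ** 2).sum()
                for p, p_star in zip(adapter_params, prev_adapter_params))
    return task_loss + mu * drift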
According to the embodiment of the present invention, as illustrated in FIG. 5, the evaluation and gating module governs the promotion or rollback of fine-tuned adapters. Regression suites include domain tasks such as exact match, F1, and ROUGE, as well as latency and throughput. Delta-sensitive tests assert the correctness of changed facts. Safety checks include PII protection, toxicity filtering, and jailbreak resistance. Promotion occurs only if delta accuracy meets or exceeds thresholds and critical metrics remain above baseline. If evaluation fails, rollback is performed instantaneously by switching the pointer in the model registry to the prior adapter.
According to the embodiment of the present invention, as illustrated in FIG. 6, the roll-up and re-baselining mechanism consolidates multiple MCP-DELTA artifacts into a refreshed MCP-Full. This compaction is analogous to log-structured merge compaction, preventing delta fragmentation and preserving reproducibility. Re-baselining may also be applied to rebuild a clean adapter or refresh base plus adapter models to stabilize long-term incremental training.
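In an illustrative, non-limiting example, the compaction of an ordered sequence of MCP-DELTA packs into a refreshed MCP-Full baseline may be sketched as follows; the dictionary field names and the ADD/MODIFY/INVALIDATE labels are illustrative assumptions consistent with the classification described above:

def rollup_deltas(full_examples, delta_packs):
    # Fold MCP-DELTA packs, oldest first, into a refreshed MCP-Full baseline.
    consolidated = {ex["record_id"]: ex for ex in full_examples}
    for pack in delta_packs:
        for ex in pack["examples"]:
            if ex["change_type"] == "INVALIDATE":      # drop superseded knowledge from the baseline
                consolidated.pop(ex["record_id"], None)
            else:                                      # ADD or MODIFY replaces any prior version
                consolidated[ex["record_id"]] = ex
    return list(consolidated.values())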
According to the embodiment of the present invention, the method for delta update incremental fine-tuning of large language models comprises the steps of:
● ingesting data from heterogeneous enterprise sources into a very large database;
● curating the data into canonical training records with lineage metadata, vector embeddings, and graph relationships;
● detecting changes in the data via change data capture;
● performing impact analysis to determine affected training records, including lineage joins, graph closure, and semantic expansion;
● generating a Model Consumption Pack – Delta (MCP-DELTA) containing updated training examples, counter-examples for invalidated data, and contextual neighbors, along with a manifest including schema hashes, policy decisions, and lineage slices;
● applying parameter-efficient fine-tuning to update the model incrementally using the MCP-DELTA, including replay mixing and adapter management;
● evaluating the updated model using regression tests, delta-sensitive tests, and safety checks;
● promoting or rolling back the updated model based on gating policies; and
● periodically consolidating multiple MCP-DELTA artifacts into a refreshed MCP-Full baseline to prevent fragmentation and ensure reproducibility.
According to the embodiment of the present invention, illustrative data structures and schemas are defined as follows:
curated.records(record_id PK, entity_id, asof_ts, payload JSONB, labels JSONB, split ENUM, quality_score FLOAT)
lineage.graph(source_key, record_id, valid_from, valid_to, transform_id)
index.vector(record_id, embedding VECTOR, shard_id, ts)
index.graph(node_id, edge_type, neighbor_id, weight, ts)
mcp.registry(id PK, type ENUM{FULL,DELTA}, manifest JSONB, location URI, created_ts)
cdc.changes(source_key, change_type ENUM{I,U,D}, change_ts, before JSONB, after JSONB)
These schemas illustrate how curated training records, lineage information, vector embeddings, graph edges, MCP-DELTA registry entries, and change capture logs are represented in the very large database.
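In an illustrative, non-limiting example, the as-of lineage join that maps a cdc.changes row to the curated record identifiers valid at the change timestamp may be sketched over in-memory rows shaped like the schemas above; at scale this would be an indexed SQL join, and the row dictionaries here are illustrative stand-ins:

def asof_lineage_join(changes, lineage_edges):
    # Map each CDC change to curated record_ids whose lineage edge was valid at change_ts.
    # Timestamps are assumed to be mutually comparable (e.g., ISO-8601 strings or datetimes).
    impacted = []
    for ch in changes:                                # rows shaped like cdc.changes
        for edge in lineage_edges:                    # rows shaped like lineage.graph
            open_ended = edge["valid_to"] is None
            if (edge["source_key"] == ch["source_key"]
                    and edge["valid_from"] <= ch["change_ts"]
                    and (open_ended or ch["change_ts"] < edge["valid_to"])):
                impacted.append({"record_id": edge["record_id"],
                                 "change_type": ch["change_type"],
                                 "change_ts": ch["change_ts"]})
    return impacted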
This architecture enables large language models to be fine-tuned in a lineage-aware, incremental, and auditable manner, ensuring that models remain aligned with evolving enterprise data while preserving efficiency, safety, and rollback capability.
According to the embodiment of the present invention, the said invention provides a lineage-aware incremental learning system that maintains a curated and semantically indexed dataset in a very large database; detects changes (deltas) in upstream data sources via change data capture (CDC); maps deltas to curated training records using lineage; expands the impacted set using graph closure (relational dependencies) and semantic similarity (embedding neighbors); generates a Model Consumption Pack – Delta (MCP-DELTA) containing updated examples, invalidated examples (counter-examples), and contextual neighbors; applies parameter-efficient fine-tuning (e.g., LoRA, adapters) to update the model incrementally; evaluates the updated model using regression tests, safety checks, and delta-sensitive tests; promotes or rolls back the update based on gating policies; and periodically consolidates multiple MCP-DELTA artifacts into a refreshed MCP-Full baseline. This approach enables transactional updates to models analogous to database updates, thereby providing efficiency, freshness, auditability, and compliance.
Algorithms (Operational Details)
A. Impacted Set Construction
• Lineage join: index on (source_key, valid_from, valid_to) for as-of join.
• Graph closure: bounded BFS with type constraints; stop when hop limit K or confidence drops below θ.
• Semantic neighbors: ANN search (HNSW/IVF) with min similarity τ and max top-k; optional class-balance filter.
• Classification:
1. ADD: new curated row;
2. MODIFY: diff payload fields > ε or label change;
3. INVALIDATE: downstream deletes/supersedes → create counter-examples.
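In an illustrative, non-limiting example, the bounded graph closure of step A above may be sketched as follows, assuming a hypothetical adjacency map built from index.graph edges; edge-type constraints and confidence cut-offs are omitted for brevity:

from collections import deque

def graph_closure(seed_record_ids, neighbors, k_hops=2):
    # Bounded breadth-first expansion from the directly impacted records.
    seen = set(seed_record_ids)
    frontier = deque((rid, 0) for rid in seed_record_ids)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= k_hops:
            continue
        for nbr in neighbors.get(node, []):           # adjacency map: node_id -> neighbor_ids
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen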
B. MCP-DELTA Selection & Packaging
• Weights: normalize to [0, 1]; cap extremes; skew towards high business impact and recent changes.
• Counter-examples: generate with templates (e.g., “As of [date], [superseded statement] is superseded by [updated statement]”).
• Replay sampling: stratified by domain/task; ratio ρ (e.g., 0.02).
• Manifest: cryptographic hash of schema, policy decision log, lineage slice ID, hyperparameters, eval suite IDs.
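In an illustrative, non-limiting example, the manifest generation of step B may be sketched as follows; the field names are assumptions mirroring the manifest contents described above, and the SHA-256 digest stands in for any suitable cryptographic hash:

import hashlib
import json
from datetime import datetime, timezone

def build_manifest(schema_ddl, policy_decisions, lineage_slice_id, hyperparameters, eval_suite_ids):
    # Assemble a self-describing MCP-DELTA manifest as a JSON document.
    manifest = {
        "schema_hash": hashlib.sha256(schema_ddl.encode("utf-8")).hexdigest(),
        "policy_decisions": policy_decisions,
        "lineage_slice": lineage_slice_id,
        "hyperparameters": hyperparameters,
        "eval_suites": eval_suite_ids,
        "created_ts": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)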
C. PEFT Training & Stability
• Adapters: LoRA rank r (e.g., 8–64) per attention/MLP; optionally IA3.
• Optimizer: AdamW; gradient clipping; early stopping on DELTA-suite.
• Regularization: L2 distance to previous adapters; optional KL to reference outputs on replay set.
D. Gate & Rollback
• Gate policy: (DELTA-accuracy ≥ τ_DELTA) AND (all critical metrics ≥ baselines) AND (safety pass)
• Rollback: instantaneous pointer switch to previous adapter in Model Registry.
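In an illustrative, non-limiting example, the gate policy and rollback decision of step D may be evaluated as follows; the metric names and the default threshold are assumptions for illustration:

def gate_decision(metrics, baselines, tau_delta=0.95):
    # Promote only if delta accuracy, all critical regression metrics, and safety checks pass.
    delta_ok = metrics["delta_accuracy"] >= tau_delta
    regression_ok = all(metrics[name] >= baselines[name] for name in baselines)
    safety_ok = bool(metrics.get("safety_pass", False))
    return "PROMOTE" if (delta_ok and regression_ok and safety_ok) else "ROLLBACK"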
The uniqueness and inventive aspects of the system lie in several key features. It introduces lineage-driven CDC to model mapping, enabling surgical fine-tuning rather than relying on coarse RAG refresh or a complete retrain. Through dual expansion (graph and semantic), it effectively captures induced and near-duplicate effects, ensuring context-complete updates. MCP-DELTA is treated as a first-class, auditable training artifact, incorporating policy, lineage slice, and schema hashes. Transactional adapters (PEFT) are designed with promotion and rollback semantics that mirror database transactions. Governance-by-design is embedded by enforcing PII masking and policy compliance during the training artifact creation step, rather than addressing it post-hoc. Finally, the system supports roll-up and compaction of DELTAs into a new baseline, preventing entropy while preserving reproducibility.
Advantages -
Efficiency: Only retrains on delta changes, reducing computational cost by orders of magnitude compared to full retraining.
Freshness: Keeps large language model knowledge aligned with evolving enterprise data through transactional updates driven by change data capture.
Auditability: Generates MCP-DELTA as a first-class, versioned, and traceable training artifact with lineage slices, schema hashes, and policy decisions, ensuring reproducibility and compliance.
Safety and Compliance: Integrates PII masking and policy enforcement directly in the training artifact creation step, ensuring governance-by-design rather than post-hoc filtering.
Rollback Capability: Supports instantaneous rollback by switching registry pointers to prior adapters, enabling transactional control over model updates.
Enterprise Fit: Scales to billions of records across hybrid enterprise stacks, supporting heterogeneous ingestion, graph queries, semantic expansion, and high-throughput fine-tuning.
Claims:
We claim,
1. A system and method for incremental fine-tuning of large language models
characterized in that
the system uses curated datasets maintained in a lineage-aware very large database (VLDB), such that
the system comprises an input unit, a processing unit, and an output unit, wherein the processing unit further comprises a curation and ingestion module, a lineage-aware storage module, a change detection and impact analyzer module, a model consumption pack – delta (MCP-DELTA) builder module, a parameter-efficient fine-tuning module, an evaluation and gating module, and a roll-up and re-baseline module;
the method for delta update incremental fine-tuning of large language models comprises the steps of:
• ingesting data from heterogeneous enterprise sources into a very large database;
• curating the data into canonical training records with lineage metadata, vector embeddings, and graph relationships;
• detecting changes in the data via change data capture;
• performing impact analysis to determine affected training records, including lineage joins, graph closure, and semantic expansion;
• generating a Model Consumption Pack – Delta (MCP-DELTA) containing updated training examples, counter-examples for invalidated data, and contextual neighbors, along with a manifest including schema hashes, policy decisions, and lineage slices;
• applying parameter-efficient fine-tuning to update the model incrementally using the MCP-DELTA, including replay mixing and adapter management;
• evaluating the updated model using regression tests, delta-sensitive tests, and safety checks;
• promoting or rolling back the updated model based on gating policies; and
• periodically consolidating multiple MCP-DELTA artifacts into a refreshed MCP-Full baseline to prevent fragmentation and ensure reproducibility.
2. The system and method as claimed in claim 1, wherein the ingestion and curation module performs ingestion and curation through connectors that include JDBC/ODBC for relational databases, flat-file parsers for COBOL dumps, and API or Kafka-based ingestion for modern sources, such that the connectors standardize timestamps, encodings, and source identifiers, quality scoring is applied using rule-based and machine learning anomaly checks, with below-threshold rows quarantined, an embedding service produces dense vectors per training unit which are ingested into a vector index, and a graphify service emits entity and relation graph edges for dependency-aware reachability.
3. The system and method as claimed in claim 1, wherein the curated dataset is maintained in a very large database optimized for mixed workloads including high-throughput ingestion, analytical scans, approximate nearest neighbor queries, and graph traversals.
4. The system and method as claimed in claim 1, wherein the change detection and impact analysis module is designed to identify and expand the set of training records affected by upstream data changes and a CDC normalizer flattens heterogeneous change logs into a unified schema and the change detector operates either on a time-windowed selection or through stream triggers.
5. The system and method as claimed in claim 1, wherein the MCP-DELTA builder module constructs a delta training pack as a first-class, auditable artifact, selection and weighting are applied to impacted examples and MCP-DELTA further comprises a manifest including source versions, schema hash, privacy policies, and training parameters and MCP-DELTA includes counter-examples that supersede invalidated knowledge to enforce corrective behavior.
6. The system and method as claimed in claim 1, wherein the parameter-efficient fine-tuning module applies incremental updates to the large language model using only the MCP-DELTA and parameter-efficient fine-tuning comprises training low-rank adapters while keeping base model parameters frozen.
7. The system and method as claimed in claim 1, wherein the evaluation and gating module governs the promotion or rollback of fine-tuned adapters such that delta-sensitive tests assert the correctness of changed facts and safety checks include PII protection, toxicity filtering, and jailbreak resistance.
8. The system and method as claimed in claim 1, wherein promotion occurs only if delta accuracy meets or exceeds thresholds and critical metrics remain above baseline and if evaluation fails, rollback is performed instantaneously by switching the pointer in the model registry to the prior adapter.
9. The system and method as claimed in claim 1, wherein the roll-up and re-baselining mechanism consolidates multiple MCP-DELTA artifacts into a refreshed MCP-Full and this compaction is analogous to log-structured merge compaction, preventing delta fragmentation and preserving reproducibility and re-baselining may also be applied to rebuild a clean adapter or refresh base plus adapter models to stabilize long-term incremental training.
10. The system and method as claimed in claim 1, wherein the model registry supports versioning, rollback, and lineage-aware audit of fine-tuning artifacts.
| # | Name | Date |
|---|---|---|
| 1 | 202521083411-STATEMENT OF UNDERTAKING (FORM 3) [02-09-2025(online)].pdf | 2025-09-02 |
| 2 | 202521083411-POWER OF AUTHORITY [02-09-2025(online)].pdf | 2025-09-02 |
| 3 | 202521083411-FORM 1 [02-09-2025(online)].pdf | 2025-09-02 |
| 4 | 202521083411-FIGURE OF ABSTRACT [02-09-2025(online)].pdf | 2025-09-02 |
| 5 | 202521083411-DRAWINGS [02-09-2025(online)].pdf | 2025-09-02 |
| 6 | 202521083411-DECLARATION OF INVENTORSHIP (FORM 5) [02-09-2025(online)].pdf | 2025-09-02 |
| 7 | 202521083411-COMPLETE SPECIFICATION [02-09-2025(online)].pdf | 2025-09-02 |
| 8 | 202521083411-FORM-9 [26-09-2025(online)].pdf | 2025-09-26 |
| 9 | 202521083411-FORM 18 [01-10-2025(online)].pdf | 2025-10-01 |
| 10 | Abstract.jpg | 2025-10-08 |