Abstract: AN EXPLAINABLE DECISION ENGINE SYSTEM AND METHOD FOR AUTOMATED INTEGRATED DEVELOPMENT ENVIRONMENT (IDE) ACTIONS. An explainable decision engine system and method for automated integrated development environment (IDE) actions using structured provenance, causal attribution, and generative AI rationale. The system (10) comprises an input unit (1), a processing unit (2), and an output unit (3). The processing unit (2) includes an IDE (21), a data ingestion layer (22), a feature extraction engine (23), a message bus (24), a predictive decision module (PDM) (25), an edge fallback explainer (26), an explainability layer (EL) (27), a rationale generation engine (RGE) (28), a presentation layer (29), an audit and traceability module (30), and a learning feedback module (31). The system captures developer context and ingests artifacts such as version control, issue tracker data, test coverage, and telemetry to construct a feature matrix. The PDM (25) predicts suggestions, with the EL (27) generating confidence calibration, feature attribution, causal graphs, and counterfactuals. The RGE (28) produces human-readable rationales, while modules (29–31) ensure transparent presentation, secure audit, and continuous learning.
Description: FIELD OF INVENTION
The present invention relates to generative AI-based assistance in integrated development environments (IDEs). More specifically, it relates to an explainable decision engine system and method for automated integrated development environment (IDE) actions using structured provenance, causal attribution, and generative AI rationale.
BACKGROUND OF THE INVENTION
An integrated development environment (IDE) is an application that provides a comprehensive suite of tools designed to facilitate application development. It consolidates various development tools into a single, unified interface, aiming to simplify and streamline the coding, testing, and debugging processes. The development tools incorporated into IDEs increasingly focus on using artificial intelligence (AI) to provide helpful suggestions, such as refactoring code, generating test cases, or adjusting task priorities.
In recent years, generative AI-based systems have incorporated large language models (LLMs), graph neural networks, and reinforcement learning agents into integrated development environments, enabling automated support functions including code generation, refactoring, bug detection, and developer assistance, thereby increasing IDE functionality.
The present system generates suggestions together with supporting evidence, confidence measures, and clear human-readable rationales, applies this principle within an integrated development environment (IDE), and transforms development artifacts into a feature matrix as model input. IDE actions such as test case generation, backlog prioritization, and code refactoring are then generated through a predictive model. Modern IDEs increasingly embed advanced AI models such as large language models, graph neural networks, and reinforcement learning agents to generate code, prioritize backlog items, and create test cases. While these capabilities significantly enhance developer productivity, they often operate as “black boxes,” providing suggestions without exposing the reasoning behind them.
This lack of transparency leads to critical challenges: developers may lose trust and override recommendations they cannot rationalize; compliance risks arise in regulated industries like medical and aerospace, where traceability of automated decisions is mandatory; debugging becomes more difficult as the root cause of defects introduced by AI suggestions remains obscured; and knowledge transfer is hindered, since new team members are unable to learn from or understand the rationale behind past AI-driven decisions. The invention solves the problem of artificial intelligence tools in integrated development environment IDEs acting as “black boxes” by providing clear, explainable, and auditable reasoning for each automated suggestion. It builds developer trust, supports compliance, and simplifies debugging through transparent, data-backed rationales.
Prior Art:
For instance, US20210042590A1 introduces a statistical process (nonparametric counting, Markov state models) to evaluate black-box model performance, redefine recall curves, and generate explanations using feature attribution and censoring for time-series models. Although it focuses on explainability of black-box machine learning models and provides explanations by attributing outcomes to input features, thereby addressing transparency and evaluation of artificial intelligence decisions, it lacks integration into an integrated development environment for automated actions, human-readable rationale generation via a dual-decoder transformer, and a direct linkage between provenance artifacts such as the confidence score, causal graph, and counterfactuals and the suggestions.
US11710034B2 defines a misuse index for explainable AI by mapping training data and inference uses, detecting discrepancies, and classifying them as misuses for better governance of machine learning (ML) systems. Although it is concerned with explainability and compliance in artificial intelligence (AI) systems, enables traceability of AI decision making, and seeks to improve the trustworthiness of AI outputs, it does not provide action recommendations within an integrated development environment and does not generate real-time, developer-facing explanations tied to specific coding tasks; moreover, it lacks a dual rationale engine, integration into the development workflow, and immutable audit logs.
US20230325666A1 describes a behavior modeling architecture for machine learning systems that enables safety assurance, real-time monitoring, and integration of verification constraints during training and prediction. Although it focuses on monitoring, verification, and safe operation of artificial intelligence systems, addresses explainability and trust for machine learning (ML) models, and provides formal methods for transparency, it is oriented towards system safety and behavioral assurance rather than integrated development environment productivity tooling. It also lacks human-readable rationales or causal explanations tailored for developers and does not provide automated test-case generation, backlog prioritization, or refactoring.
Although the present systems consist of mechanisms for explainability, misuse detection, or behavioral modeling of machine learning models, they lack an integrated framework that directly operates within an integrated development environment (IDE) to provide automated developer-centric actions such as test-case generation, backlog prioritization, or code refactoring. They further do not couple these actions with a unified explainability layer that generates provenance artifacts (confidence score, feature-importance vector, causal graph, counterfactuals) and converts them into a concise, human-readable rationale via a dual-decoder transformer. Moreover, none of the prior art systems provide immutable audit logging to ensure compliance and traceability of automated suggestions. Hence, the present invention offers a novel, modular, and explainable decision engine uniquely tailored for developer workflows.
DEFINITIONS:
The expression “integrated development environment (IDE)” used hereinafter in the specification refers to a software platform that provides developers with tools such as code editors, debuggers, and build automation utilities, further enhanced in the present invention with artificial intelligence (AI)-driven suggestion and explainability modules.
The expression “predictive model” used hereinafter in the specification refers to any statistical, machine learning, or deep learning model that processes development artifacts to predict or recommend actions, such as test-case generation, backlog prioritization, or code refactoring.
The expression “provenance artifacts” used hereinafter in the specification refers to machine-generated metadata that trace the reasoning of predictive models, including input features, intermediate representations, and explainability outputs, thereby enabling transparency, compliance, and auditability.
The expression “explainability layer” used hereinafter in the specification refers to a system component that extracts interpretable artifacts from predictive models, including confidence scores, feature-importance vectors, causal attribution graphs, and counterfactual perturbations, to justify automated suggestions.
The expression “rationale generation engine (RGE)” used hereinafter in the specification refers to a component within the scope of the integrated development environment (IDE) that translates the underlying actions and states of an AI agent (or code assistant) into natural language explanations for developers.
The expression “explainable decision engine (EDE)” used hereinafter in the specification refers to a decision-making system, often based on artificial intelligence (AI), that provides clear and understandable justifications for its decisions and actions.
The expression “feature-importance vector (FIV)” used hereinafter in this specification refers to a structured output generated by the explainability layer (EL) that quantifies how much each input feature (e.g., code complexity, churn, ownership, coverage delta, bug density, etc.) contributed to the AI model’s suggestion in the IDE.
The expression “structural causal model (SCM)” used hereinafter in this specification refers to a mathematical framework for representing and reasoning about cause-and-effect relationships between variables. SCMs explicitly define causal dependencies to define which factors cause others, how interventions would change outcomes, and what counterfactual scenarios would look like.
OBJECTS:
The primary object of the invention is to provide an explainable decision engine system and method for automated integrated development environment (IDE) actions using structured provenance, causal attribution, and generative AI rationale.
Yet another object of the invention is to generate provenance artifacts including confidence scores, feature-importance vectors, causal attribution graphs, and counterfactual explanations.
Yet another object of the invention is to offer a modular, scalable architecture with a standardized artificial intelligence (AI) adapter interface for integrating multiple models.
Yet a further object of the invention is to deliver each artificial intelligence (AI) suggestion with a transparent, human-readable rationale linked to specific data points and model confidence.
Yet a next object is to maintain compliance and traceability by recording all suggestions and their explanations in an immutable audit log and improve developer trust in artificial intelligence (AI) recommendations by making them explainable and verifiable.
Yet a next object is to support debugging and knowledge transfer through data-backed explanations of automated Integrated Development Environment (IDE) actions.
SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The present invention discloses an explainable decision engine system and method for automated integrated development environment (IDE) actions; comprising an input unit, a processing unit, and an output unit. Within the processing unit, key components include the integrated development environment, data ingestion layer, feature extraction engine, message bus, predictive decision module, edge fallback explainer, explainability layer, rationale generation engine, presentation layer, audit and traceability module, and learning feedback unit.
In an aspect, the IDE captures code edits and project context, while the data ingestion layer gathers version-control history, issue tracker metadata, test coverage reports, and runtime telemetry to assemble artifacts. The feature extraction engine transforms these into a feature matrix with complexity, churn, and other metrics, which the message bus distributes. The predictive decision module applies AI models to output suggestions, with the edge fallback explainer providing heuristics if the module fails. The explainability layer enhances transparency through confidence calibration, feature attribution, causal graphs, and counterfactuals. The rationale generation engine converts provenance data into human-readable rationales, while the presentation layer delivers suggestions, confidence gauges, and explanations to developers. The audit and traceability module records signed events in a secure ledger, and the learning feedback component loops developer responses into retraining datasets.
In a preferred aspect, the method begins with the IDE capturing user actions and emitting on-change events, followed by the ingestion of code and metadata into bundled artifacts. The feature extraction engine computes metrics and builds a feature matrix, which is published via the message bus. The predictive decision module generates a suggestion, or the edge fallback explainer provides a heuristic output in case of failure. When predictions are available, the explainability layer produces calibrated scores, feature importance, causal graphs, and counterfactuals, which are then fed into the rationale generation engine to produce a human-readable explanation. The presentation layer displays the suggestion and rationale in the IDE, while audit and traceability ensure tamper-proof recording, and learning feedback incorporates developer interactions into continuous model improvement.
BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be made by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawing. The accompanying drawing, which is incorporated into and constitutes a part of the specification, illustrates one or more embodiments of the present invention and, together with the detailed description, serves to explain the principles and implementations of the invention.
FIG.1. illustrates the structural and functional components of the system.
FIG.2. illustrates the stepwise workflow.
DETAILED DESCRIPTION OF INVENTION
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention describes an explainable decision engine system and method for automated integrated development environment (IDE) actions using structured provenance, causal attribution, and generative AI rationale; wherein the system (10) comprises an input unit (1), a processing unit (2), and an output unit (3), and wherein the processing unit (2) further comprises an integrated development environment (IDE) (21), a data ingestion layer (22), a feature extraction engine (23), a message bus (24), a predictive decision module (PDM) (25), an edge fallback explainer (26), an explainability layer (EL) (27), a rationale generation engine (RGE) (28), a presentation layer (29), an audit and traceability module (30), and a learning feedback module (31).
In an embodiment of the invention, the integrated development environment (21) includes any modern development environment (e.g., VS Code, IntelliJ, Eclipse) in which the developer edits or saves a file; the IDE captures the context, that is, the file, project, branch, and cursor position, and emits an on-change event. The data ingestion layer (22) collects source-code snapshots, version-control history, test-coverage reports, issue-tracker metadata, and runtime telemetry, such that it fetches version-control history and diffs, fetches issue-tracker items and labels, fetches test-coverage reports, fetches runtime telemetry, and then assembles an artifact bundle.
In the next embodiment of the invention, the processing unit (2) includes the feature extraction engine (23) that transforms raw artifacts into a feature matrix (FM) (e.g., numeric, categorical, graph-based features), where the feature extraction engine (23) computes metrics (complexity, churn, ownership, static graph, smells, coverage deltas, and bug density); and the message bus (24) publishes the feature matrix (FM) (protobuf). Further, a predictive decision module (PDM) (25) is enabled to generate predictions from the feature matrix using one or more models, such as a large language model (LLM), a graph neural network (GNN), or a reinforcement-learning policy, and outputs a suggestion object (SO) including type, target artifact, or parameters.
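By way of a non-limiting illustration of how the feature extraction engine (23) may assemble such a feature matrix and hand it to the message bus (24), the following Python sketch uses hypothetical artifact field names and a generic bus.publish() helper; JSON serialization stands in for the protobuf encoding named above.

```python
# Illustrative sketch only: hypothetical artifact fields and a generic
# message-bus client are assumed; the claimed system may differ.
import json
from dataclasses import dataclass, asdict

@dataclass
class FeatureRow:
    file_path: str
    complexity: float      # e.g., cyclomatic complexity
    churn: int             # lines changed in a recent window
    ownership: int         # number of distinct authors
    coverage_delta: float  # change in test coverage
    bug_density: float     # defects per KLOC

def build_feature_matrix(artifacts):
    """Transform raw artifact bundles into per-file feature rows."""
    rows = []
    for a in artifacts:
        rows.append(FeatureRow(
            file_path=a["path"],
            complexity=a.get("complexity", 0.0),
            churn=a.get("churn", 0),
            ownership=len(a.get("authors", [])),
            coverage_delta=a.get("coverage_delta", 0.0),
            bug_density=a.get("bug_density", 0.0),
        ))
    return rows

def publish_feature_matrix(bus, rows, topic="feature_matrix"):
    # The specification serializes the FM as protobuf; JSON is used
    # here purely to keep the sketch self-contained.
    payload = json.dumps([asdict(r) for r in rows])
    bus.publish(topic, payload)
```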
In yet a next embodiment of the invention, where the PDM (25) is non-functional or times out, the output proceeds to the edge fallback explainer (26), which gives a heuristic suggestion and a short explanation, marks the fallback_flag as true, and indicates low confidence; and where the predictive decision module (PDM) (25) is available and functional, the system enables the explainability layer (27), which intercepts the PDM output and extracts explainability artifacts.
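A minimal sketch of this timeout/fallback branching is given below; pdm.predict() and the rule inside heuristic_suggestion() are illustrative assumptions, not the claimed implementation.

```python
# Illustrative PDM timeout handling with the edge fallback explainer (26).
from concurrent.futures import ThreadPoolExecutor

def heuristic_suggestion(feature_matrix):
    # Trivial placeholder rule: point at the highest-churn, lowest-coverage file.
    worst = max(feature_matrix, key=lambda r: r["churn"] - r["coverage_delta"])
    return f"Consider adding unit tests for {worst['path']}"

def get_suggestion(pdm, feature_matrix, timeout_s=2.0):
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(pdm.predict, feature_matrix)
    try:
        suggestion = future.result(timeout=timeout_s)
        suggestion["fallback_flag"] = False
        return suggestion              # forwarded to the explainability layer (27)
    except Exception:                  # includes concurrent.futures.TimeoutError
        return {
            "type": "heuristic",
            "text": heuristic_suggestion(feature_matrix),
            "explanation": "Rule-based fallback; PDM unavailable.",
            "confidence": 0.3,         # shown as low confidence
            "fallback_flag": True,
        }
    finally:
        pool.shutdown(wait=False)
```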
In yet a next embodiment of the invention, the explainability layer (EL) (27) is configured to perform four distinct operations on the raw model output, namely:
1. Confidence calibration using a confidence score (CS) module that calibrates the probability (e.g., via temperature scaling), wherein the raw logits are passed through temperature scaling and isotonic regression to produce a well-calibrated probability c ∈ [0,1] (a minimal calibration sketch follows this list);
2. Feature attribution, wherein the explainability layer (27) builds a feature-importance vector (FIV) computed using SHAP values ϕ_i for each feature i in the feature matrix, integrated gradients, or attention-weight aggregation, such that
FIV = {(f_i, ϕ_i) | i = 1..N};
3. Causal attribution graph (CAG), a directed acyclic graph G(V,E) where each node v ∈ V represents a feature or latent variable, and each edge e ∈ E encodes a causal coefficient derived from a structural causal model (SCM) learned offline; the graph is built using structural causal models that link input features to the decision, such that the edge weights quantify the proportion of decision influence.
4. Counterfactual generation, wherein a counterfactual set (CFS) enables minimal perturbations that would flip the decision, generated by a gradient-based optimizer, such that the formula
min_Δx ||Δx||₂ s.t. model(x + Δx) ≠ original decision
yields the smallest perturbation Δx that would reverse the suggestion; the resulting Δx is rendered as a “What-If” snippet in the HRR (a minimal counterfactual-search sketch follows this list). Furthermore, all provenance data are serialized as Protocol Buffers and stored in the immutable audit log (30).
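The calibration step in operation 1 can be pictured with the short Python sketch below; the temperature value is a placeholder that would, in practice, be fitted on a held-out validation set, and the isotonic-regression step mentioned above is omitted for brevity.

```python
# Sketch of temperature-scaling calibration for the confidence score (CS)
# module; T would normally be fitted on held-out validation logits.
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def calibrated_confidence(raw_logits, temperature=1.5):
    """Return a calibrated probability c in [0, 1] for the top class."""
    probs = softmax(np.asarray(raw_logits, dtype=float) / temperature)
    return float(probs.max())

# Example: over-confident raw logits are softened by T > 1.
print(calibrated_confidence([4.2, 0.3, -1.1]))  # ≈ 0.91 instead of ≈ 0.98 unscaled
```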
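Operation 4 can likewise be pictured with the following sketch; it performs a simple coordinate search rather than the gradient-based optimizer recited above, and the toy decision rule is purely illustrative.

```python
# Sketch of a counterfactual search consistent with
#   min_Δx ||Δx||₂  s.t.  model(x + Δx) ≠ original decision.
# A coordinate search replaces the gradient-based optimizer for simplicity.
import numpy as np

def find_counterfactual(model, x, step=0.05, max_radius=1.0):
    """Return the smallest perturbation found that flips model(x)."""
    x = np.asarray(x, dtype=float)
    original = model(x)
    best = None
    radius = step
    while radius <= max_radius:
        for i in range(len(x)):              # perturb one feature at a time
            for sign in (+1.0, -1.0):
                dx = np.zeros_like(x)
                dx[i] = sign * radius
                if model(x + dx) != original:
                    if best is None or np.linalg.norm(dx) < np.linalg.norm(best):
                        best = dx
        if best is not None:
            return best                       # rendered as a "What-If" snippet
        radius += step
    return None

# Toy decision rule standing in for the PDM (suggest tests when risk is high):
risky = lambda f: (0.6 * f[0] + 0.4 * f[1]) > 0.5   # f = [complexity, churn]
print(find_counterfactual(risky, np.array([0.75, 0.4])))  # ≈ [-0.2, 0.0]
```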
According to an exemplified embodiment of the invention, the feature-importance vector (FIV) builds upon feature extraction, where raw development artifacts (such as version-control diffs, issue-tracker labels, and runtime telemetry) are transformed into numerical or categorical features.
For example:
File complexity score = 0.75
Recent churn (lines changed in last week) = 120
Coverage delta = -10%
Ownership spread (number of authors) = 5
The system enables model prediction wherein a predictive decision module (PDM) uses these features to output a suggestion (e.g., “Add unit tests for processData()”), followed by attribution, wherein the explainability layer backtracks to measure how strongly each feature influenced that specific suggestion. SHAP values assign contribution scores to each feature by comparing against baseline feature subsets; integrated gradients compute attribution by integrating gradients of the model’s output with respect to each input feature; and attention weights measure which features the model focused on most during decision-making. Finally, an output vector assembles these importance values into a feature-importance vector (FIV) that pairs each feature with its contribution score. The FIV provides transparency, whereby developers see why the system suggested an action, not just the output; auditability, whereby the FIV is stored in the immutable audit log along with other provenance artifacts; a debugging aid, whereby, if a poor suggestion arises, developers can inspect whether the wrong feature was over-weighted; and a human-readable rationale, whereby the FIV feeds into the rationale generation engine (RGE), which converts it into plain-language explanations.
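The attribution step can be pictured with the leave-one-out sketch below, which stands in for SHAP values or integrated gradients; the feature values mirror the example above, and risk_model() is a hypothetical placeholder for the PDM.

```python
# Leave-one-out attribution as a simple stand-in for SHAP / integrated gradients.
import numpy as np

FEATURES = ["complexity", "churn", "coverage_delta", "ownership"]
x        = np.array([0.75, 120.0, -0.10, 5.0])   # current file (example above)
baseline = np.array([0.30, 10.0, 0.00, 2.0])     # "typical" file

def risk_model(v):
    w = np.array([0.5, 0.002, -1.0, 0.05])       # illustrative weights only
    return float(w @ v)

def feature_importance_vector(model, x, baseline):
    """Pair each feature with its contribution relative to the baseline."""
    full = model(x)
    fiv = []
    for i, name in enumerate(FEATURES):
        masked = x.copy()
        masked[i] = baseline[i]                   # remove feature i's effect
        fiv.append((name, round(full - model(masked), 4)))
    return sorted(fiv, key=lambda kv: abs(kv[1]), reverse=True)

print(feature_importance_vector(risk_model, x, baseline))
# [('complexity', 0.225), ('churn', 0.22), ('ownership', 0.15),
#  ('coverage_delta', 0.1)]  -> serialized into the FIV and fed to the RGE
```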
In yet a next embodiment of the invention, the rationale generation engine (28) acts as a dual-decoder transformer (e.g., T5-XL) fine-tuned on a corpus of developer-written explanations, where the encoder receives the concatenated provenance tensors (CS, FIV, CAG, CFS) and the decoder produces a human-readable rationale (HRR). The RGE (28) receives a provenance tensor (PT) constructed by concatenating:
c- confidence scalar,
FIV- vector of length N padded/truncated to a fixed size K,
flattened adjacency matrix of CAG (size M × M), and
counterfactual description (embedded via a small encoder); such that the PT is linearly projected to a latent embedding e ∈ ℝ^d and fed to the encoder of the transformer. Further, the decoder, conditioned on an “explain” token, generates a sequence of tokens t_1 … t_L that form the human-readable rationale (HRR). Training data for the RGE consist of paired (provenance, developer-written explanation) examples harvested from code review comments (GitHub PRs), issue-tracker discussion threads (Jira, Azure DevOps), and internal documentation (Confluence); the loss function combines cross-entropy for language generation with a semantic similarity penalty (e.g., BERTScore) to enforce factual alignment with the provenance.
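An illustrative assembly of the provenance tensor (PT) and its linear projection to the latent embedding e ∈ ℝ^d is sketched below; the dimensions K, M, and d, the counterfactual embedding, and the projection matrix are placeholders, and the dual-decoder transformer itself is omitted.

```python
# Sketch of provenance-tensor (PT) assembly for the RGE encoder input.
import numpy as np

K, M, D_CF, d = 8, 4, 16, 64          # FIV size, CAG nodes, CF embedding, latent dim
rng = np.random.default_rng(0)
W_proj = rng.normal(scale=0.02, size=(1 + K + M * M + D_CF, d))  # learned offline

def pad_or_truncate(vec, size):
    vec = np.asarray(vec, dtype=float)[:size]
    return np.pad(vec, (0, size - len(vec)))

def provenance_tensor(c, fiv, cag_adjacency, cf_embedding):
    """Concatenate [c | FIV | flat(CAG) | CF] and project to e in R^d."""
    pt = np.concatenate([
        [c],                                    # calibrated confidence scalar
        pad_or_truncate(fiv, K),                # feature-importance values
        np.asarray(cag_adjacency).reshape(-1),  # flattened M x M adjacency
        pad_or_truncate(cf_embedding, D_CF),    # embedded counterfactual text
    ])
    return pt @ W_proj                          # latent embedding e for the encoder

e = provenance_tensor(0.92, [0.225, 0.22, 0.15, 0.10],
                      np.eye(M), rng.normal(size=D_CF))
print(e.shape)   # (64,) -> consumed by the dual-decoder transformer's encoder
```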
In yet a next embodiment of the invention, the presentation layer (29) comprises UI widgets within the IDE, where each suggestion appears as a pop-up widget containing:
- a suggestion summary comprising the concise action (e.g., add a unit test for processData());
- a confidence gauge with a colored bar representing the confidence score (green > 0.9, yellow 0.6-0.9, and red < 0.6; a minimal threshold-mapping sketch follows this list),
- a rationale text where the HRR is generated by the RGE, and
- a “Why?” button that expands a panel showing the FIV table, the CAG diagram (interactive), and the counterfactual “What-If” scenario. Developers may accept, reject, or edit the suggestion; each interaction is logged with the original provenance for future model fine-tuning (closed-loop learning).
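A minimal mapping from the calibrated confidence score to the gauge color thresholds named above is sketched below; the actual widget rendering is IDE-specific and not claimed here.

```python
# Maps the calibrated confidence score to the gauge color thresholds
# recited above (green > 0.9, yellow 0.6-0.9, red < 0.6).
def gauge_color(confidence: float) -> str:
    if confidence > 0.9:
        return "green"
    if confidence >= 0.6:
        return "yellow"
    return "red"   # low confidence: caution badge / confirmation required

assert gauge_color(0.95) == "green" and gauge_color(0.72) == "yellow"
```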
In yet a next embodiment of the invention, the audit and traceability module (30) records all events as signed audit records (SARs) in an append-only ledger comprising the record_id, timestamp, user_id, ide_version, suggestion, confidence, feature_importance, causal_graph, rationale, and action taken; such that the ledger can optionally be anchored to a public blockchain hash every 24 h to guarantee non-repudiation for compliance-heavy domains. The system thereby keeps a secure audit trail of SARs inside the append-only ledger; to make it tamper-proof and legally defensible, a hash of the ledger root is periodically published to a public blockchain, which ensures non-repudiation so that no one can alter past records or deny their existence.
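The append-only ledger and periodic anchoring can be pictured with the hash-chained sketch below; the SHA-256 chaining, the omission of the cryptographic signature, and the abbreviated field set are simplifications for illustration only.

```python
# Hash-chained, append-only ledger of audit records (SARs); signing omitted.
import hashlib, json, time

class AuditLedger:
    def __init__(self):
        self._records = []
        self._prev_hash = "0" * 64                 # genesis value

    def append(self, user_id, ide_version, suggestion, confidence,
               feature_importance, causal_graph, rationale, action_taken):
        record = {
            "record_id": len(self._records),
            "timestamp": time.time(),
            "user_id": user_id,
            "ide_version": ide_version,
            "suggestion": suggestion,
            "confidence": confidence,
            "feature_importance": feature_importance,
            "causal_graph": causal_graph,
            "rationale": rationale,
            "action_taken": action_taken,
            "prev_hash": self._prev_hash,          # chains each SAR to the last
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self._records.append(record)
        self._prev_hash = digest
        return digest

    def root_hash(self):
        """Value that could be anchored externally (e.g., every 24 h)."""
        return self._prev_hash
```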
In yet a next embodiment of the invention, the learning feedback module (31) emits accept/reject/edit signals to the training queue and stores FM + SO + outcome records for offline retraining; the results are provided to the user by the system (10) through an output unit (3).
In a preferred embodiment of the invention, the sequence diagram illustrated in FIG. 1 includes the following steps:
- entering the user query and context by the developer,
- routing the query to ranking by the system,
- fetching user, entity, and contextual features by the engine,
- building and updating the feature matrix,
- running prediction scenarios with updated features,
- generating a suggestion object (decision option + type + info + KPI),
- returning suggestion response to the presentation layer,
- invoking explainability layer for explanation,
- collecting evidence, narrative chain, and confidence scores,
- appending explainability results to the suggestion,
- generating supporting evidence for prediction confidence,
- building process narrative of model reasoning,
- presenting explanation and results to the UI,
- logging prediction, explanation, and outcomes in audit trail,
- capturing user feedback on suggestions and explanations,
- appending logs to append-only ledger for traceability, and
- anchoring stage logs for secure audit and compliance.
According to a preferred embodiment of the invention, the method for automated integrated development environment (IDE) actions as illustrated in FIG.2. includes the steps of:
Stage:1 Integrated Development Environment (21)
- Edit or save a file.
- Capture the file, project, branch, and cursor context in the IDE.
- Send an on-change event from the IDE.
Stage:2 Data Ingestion Layer (22)
- Pull the current code snapshot and metadata.
- Fetch version control history and file differences.
- Fetch issue tracker items and labels.
- Fetch test coverage reports.
- Fetch runtime telemetry KPIs.
- Assemble all artifacts into a bundle.
Stage:3 Feature Extraction Engine (23)
- Compute metrics like complexity, churn, ownership, smells, coverage deltas, and bug density using the feature extraction engine.
- Construct or update the feature matrix.
Stage:4 Message Bus
- Publish the feature matrix on the message bus.
Stage: 5 Predictive Decision Module PDM (25)
- Generate a suggestion object and raw confidence values using the predictive decision module. Check whether the prediction module responded successfully. (Yes/No)
Stage: 6 Edge Fallback Explainer (26) - if the PDM is not available, that is, a timeout occurs
- Provide a heuristic suggestion with a short explanation through a fallback explainer if the prediction module fails.
- Flag the fallback as low confidence.
Stage: 7 Explainability Layer EL (27) - if the PDM is available and functional
- Calibrate the confidence score through the explainability layer if the prediction is available.
- Compute feature attributions to create a feature importance vector.
- Run causal inference to build a causal attribution graph.
- Generate minimal counterfactuals showing how the decision could change.
- Package and serialize all explainability outputs.
Stage: 8 Rationale Generation Engine RGE (28)
- Build a provenance tensor from the suggestion and explainability data using the rationale generation engine.
- Generate a human-readable rationale.
The RGE (28) (Stage 8) and the Edge Fallback Explainer (26) (Stage 6) converge at the same stage, namely the Presentation Layer (29) (Stage 9).
Stage: 9 Presentation Layer (29)
- Present the suggestion, confidence gauge, and rationale to the developer in the IDE.
- Provide a panel in the IDE showing feature importance, causal graph, and counterfactual views.
- Show a caution badge or require confirmation in the IDE if the confidence is low.
- Apply the suggested action, such as inserting tests or refactoring code, if the developer accepts.
- Apply the suggestion with modified parameters if the developer edits.
- Dismiss the suggestion without changes if the developer rejects.
Stage: 10 Audit and Traceability (30)
- Construct an audit record with all artifacts, hashes, timestamps, and user actions.
- Append the audit record to an append-only ledger.
- Anchor the ledger root hash on a public blockchain periodically.
Stage: 11 Learning Feedback (31)
- Send accept, edit, or reject signals to the training queue.
- Store the feature matrix, suggestion, and outcome for future retraining.
- End the runtime process.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in the preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
Claims: CLAIMS:
We claim,
1. An explainable decision engine system and method for automated integrated development environment (IDE) actions;
wherein the system (10) comprises an input unit (1), a processing unit (2) comprising an integrated development environment (IDE) (21), a data ingestion layer (22), a feature extraction engine (23), a message bus (24), a predictive decision module (PDM) (25), an edge fallback explainer (26), an explainability layer (EL) (27), a rationale generation engine (RGE) (28), a presentation layer (29), an audit and traceability module (30), and a learning feedback module (31), and an output unit (3);
characterized in that:
the method for automated integrated development environment (IDE) actions comprising the steps of;
- editing or saving a file,
- capturing the file, project, branch, and cursor context in the IDE,
- sending an on-change event from the integrated development environment (IDE);
- pulling the current code snapshot and metadata by the data ingestion layer,
- fetching version control history and file differences,
- fetching issue tracker items and labels,
- fetching test coverage reports,
- fetching runtime telemetry KPIs, and
- assembling all artifacts into a bundle by the data ingestion layer (22);
- computing metrics like complexity, churn, ownership, smells, coverage deltas, and bug density using the feature extraction engine,
- constructing or updating the feature matrix by the feature extraction engine (23);
- publishing the feature matrix by the message bus (24);
- generating a suggestion object and raw confidence values using the predictive decision module (25),
- checking whether the predictive decision module (25) responded successfully;
- enabling the edge fallback explainer (26) if the PDM times out,
- providing a heuristic suggestion with a short explanation through a fallback explainer (26) if the prediction module fails,
- flagging the fallback as low confidence;
- enabling the explainability layer (EL) (27) if the PDM is available and functional,
- calibrating the confidence score through the explainability layer (27) if the prediction is available,
- computing feature attributions to create a feature importance vector,
- running causal inference to build a causal attribution graph,
- generating minimal counterfactuals showing how the decision could change,
- packaging and serializing all explainability outputs;
- building a provenance tensor from the suggestion and explainability data using the rationale generation engine (28),
- generating a human-readable rationale by the rationale generation engine (RGE) (28);
- combining RGE (28) and edge fallback explainer (26) at the presentation layer (29);
- presenting the suggestion, confidence gauge, and rationale to the developer in the IDE,
- providing a panel in the IDE showing feature importance, causal graph, and counterfactual views;
- showing a caution badge or requiring confirmation in the IDE if the confidence is low;
- applying the suggested action, such as inserting tests or refactoring code, if the developer accepts,
- applying the suggestion with modified parameters if the developer edits,
- dismissing the suggestion without changes if the developer rejects by the presentation layer (29);
- constructing an audit record with all artifacts, hashes, timestamps, and user actions,
- appending the audit record to an append-only ledger,
- anchoring the ledger root hash on a public blockchain periodically by the audit and traceability (30);
- sending accept, edit, or reject signals to the training queue,
- storing the feature matrix, suggestion, and outcome for future retraining,
- ending the runtime process by the learning feedback layer (31).
2. The system and method as claimed in claim 1, wherein the feature extraction engine (23) transforms raw artifacts into a feature matrix (FM), computes metrics with respect to complexity, churn, ownership, static graph, smells, coverage deltas, and bug density; and the message bus (24) publishes the feature matrix (FM) (protobuf).
3. The system and method as claimed in claim 1, wherein the predictive decision module (PDM) (25) is enabled to generate predictions from the feature matrix using one or more models, such as a large language model (LLM), a graph neural network (GNN), or a reinforcement-learning policy, and outputs a suggestion object (SO) including type, target artifact, or parameters.
4. The system and method as claimed in claim 1, wherein the output proceeds to the edge fallback explainer (26) if the PDM times out, thereby giving a heuristic suggestion and a short explanation, marking the fallback_flag as true, and showing low confidence; and the output proceeds to the explainability layer (27) when the predictive decision module (PDM) (25) is available and functional.
5. The system and method as claimed in claim 1, wherein the explainability layer (EL) (27) performs confidence calibration using a confidence score (CS) module that calibrates the probability via temperature scaling; feature attribution, which builds a feature-importance vector (FIV) computed using SHAP values ϕ_i for each feature i in the feature matrix, integrated gradients, or attention-weight aggregation; causal attribution graph (CAG) generation, providing G(V,E) where each node v ∈ V represents a feature or latent variable, and each edge e ∈ E encodes a causal coefficient derived from an SCM learned offline; and counterfactual generation, wherein a counterfactual set (CFS) enables minimal perturbations that would flip the decision, generated by a gradient-based optimizer yielding the smallest perturbation Δx that would reverse the suggestion.
6. The system and method as claimed in claim 1, wherein the rationale generation engine (28) acts as a dual-decoder transformer fine-tuned on a corpus of developer-written explanations, where the encoder receives the concatenated provenance tensors and the decoder produces a human-readable rationale (HRR).
7. The system and method as claimed in claim 1, wherein the presentation layer (29) comprises UI widgets within the IDE, where each suggestion appears as a pop-up widget containing:
- suggestion summary comprising its concise action;
- a confidence gauge with a colored bar representing the confidence score (green > 0.9, yellow 0.6-0.9, and red < 0.6);
- a rationale text where the HRR is generated by the RGE, and
- a “Why?” button that expands a panel showing the FIV table, CAG diagram (interactive), and counterfactual “What-If” scenario.
8. The system and method as claimed in claim 1, wherein the audit and traceability module (30) records all events as signed audit records (SARs) in an append-only ledger comprising the record_id, timestamp, user_id, ide_version, suggestion, confidence, feature_importance, causal_graph, rationale, and action taken.
9. The system as claimed in claim 1, wherein the learning feedback module (31) emits accept/reject/edit signals to the training queue and stores FM + SO + outcome records for offline retraining, provided to the user by the system (10) through an output unit (3).
| # | Name | Date |
|---|---|---|
| 1 | 202521083406-STATEMENT OF UNDERTAKING (FORM 3) [02-09-2025(online)].pdf | 2025-09-02 |
| 2 | 202521083406-POWER OF AUTHORITY [02-09-2025(online)].pdf | 2025-09-02 |
| 3 | 202521083406-FORM 1 [02-09-2025(online)].pdf | 2025-09-02 |
| 4 | 202521083406-FIGURE OF ABSTRACT [02-09-2025(online)].pdf | 2025-09-02 |
| 5 | 202521083406-DRAWINGS [02-09-2025(online)].pdf | 2025-09-02 |
| 6 | 202521083406-DECLARATION OF INVENTORSHIP (FORM 5) [02-09-2025(online)].pdf | 2025-09-02 |
| 7 | 202521083406-COMPLETE SPECIFICATION [02-09-2025(online)].pdf | 2025-09-02 |
| 8 | Abstract.jpg | 2025-09-26 |
| 9 | 202521083406-FORM-9 [26-09-2025(online)].pdf | 2025-09-26 |
| 10 | 202521083406-FORM 18 [01-10-2025(online)].pdf | 2025-10-01 |