ABSTRACT
Title: SYSTEM AND METHOD FOR DETECTING AND SCORING BIAS IN AGENTIC WORKFLOWS USING HYBRID DETERMINISTIC AND GENERATIVE ANALYSIS
The present invention discloses a system (10) and method for detecting and scoring bias in agentic workflows using hybrid deterministic and generative analysis. The system comprises an input unit (1); a processing unit (2) further including: an agent workflow instrumentation layer (21) for logging workflow data; a bias fingerprinting model-compute-platform (MCP) module (22) with submodules for keyword balance (22.1), source diversity (22.2), temporal anchoring (22.3), sentiment tilt (22.4), and entity frequency skew (22.5); an LLM-based reflective bias evaluator (23); a composite scoring engine (24); a visualization module (25); and a feedback and prompt rewriting module (26); followed by an output unit (3). The method includes capturing execution metadata, performing deterministic and reflective bias evaluations, aggregating scores, visualizing bias heatmaps and reports, and providing prompt refinements. This hybrid approach enables transparent, multi-dimensional bias assessment and mitigation in AI agent workflows.
FIELD OF INVENTION
The present invention relates to the fields of software development, agentic workflows, and orchestration. More specifically, it pertains to a system and method for detecting and scoring bias in agentic workflows using hybrid deterministic and generative analysis.
BACKGROUND
Agentic workflows powered by large language models (LLMs), such as those built using LangChain or AutoGen, enable complex task automation through a combination of prompts, tools, memory, application programming interface (API) calls, and LLM steps that together generate comprehensive outputs. While these systems enhance efficiency and adaptability, they can introduce various forms of bias during execution, and current frameworks provide no clear way to detect or correct these biases.
In conventional systems, bias detection is either model-centric or task-specific and is often limited to single-prompt evaluations. Despite their ability to chain actions and manage memory, these systems lack visibility into the points at which bias is introduced within the workflow. They lack transparency, step-by-step accountability, and contextual bias tracking, especially in interactions involving memory reuse, source citations, or iterative reasoning. Furthermore, they do not provide structured scoring or actionable insights for mitigating such bias across the agent's lifecycle. As a result, these systems remain prone to various forms of bias, including source bias, temporal bias, sentiment or affective bias, representation bias, and instructional or prompt-based bias.
Prior Arts:
202347000363 discloses a computer-implemented method for detecting and monitoring bias in an application that includes indexing training data and obtaining a plurality of correlation values of one or more features in the indexed training data with a target variable. For each of the one or more features, a first value associated with a favourable result and a second value associated with an unfavourable result are calculated. An absolute value of the difference between the calculated first value and the calculated second value is computed, and a total sum of the calculated absolute values of the plurality of correlation values of the one or more features is calculated.
WO2022008677A1 discloses exemplary methods for detecting bias both globally and locally by harnessing the white-box nature of explainable artificial intelligence models, including Neural Nets, Interpretable Neural Nets, Transducer Transformers, Spiking Nets, Memory Nets, and Reinforcement Learning models. Methods for detecting bias, strength, and weakness of data sets and the resulting models are described. A first exemplary method presents global bias detection, which utilizes the coefficients of the explainable model to identify, minimize, and/or correct any potential bias within a desired error tolerance. A second exemplary method makes use of local feature importance extracted from the rule-based model coefficients to identify any potential bias locally. A third exemplary method aggregates the feature importance over the results/explanations of multiple samples. A fourth exemplary method presents a method for detecting bias in multi-dimensional data such as images. Further, a back-map reverse indexing mechanism may be implemented. Several mitigation methods are also presented to eliminate bias from the affected models.
Whereas the first aforementioned prior art detects bias using statistical correlations in training data, it lacks dynamic, workflow-aware, and semantic bias evaluation capabilities. The second prior art discloses a system focused on bias detection using the internal structure of interpretable models, such as coefficients, rules, or feature maps. Neither prior art addresses the need to find and reduce bias within agentic workflows. They focus only on training data selection or static data analysis and do not handle real-time bias that can appear in prompts, sources, or tool usage. Moreover, the above prior arts do not follow a standardized or extensible approach to identifying and mitigating biases within complex agent workflows.
To overcome these drawbacks, there is a need for a novel, deterministic, tool-orchestration system that can track and explain bias during the execution of agent workflows using clear rules and language model analysis. The present invention provides a standardized and extensible approach to identify and mitigate biases within complex workflows. It enables explainable agentic workflows and supports ethical automation at scale.
DEFINITIONS
The expression “system” used hereinafter in this specification refers to an ecosystem comprising, but not limited to input and output devices, processing unit, plurality of mobile devices or a mobile device-based application. It is extended to computing systems like mobile phones, laptops, computers, PCs, and other digital computing devices.
The expression “input unit” used hereinafter in this specification refers to, but is not limited to, mobile phones, laptops, computers, PCs, keyboards, mice, pen drives, or other drives.
The expression “output unit” used hereinafter in this specification refers to, but is not limited to, an onboard output device, a user interface (UI), a display unit, a local display, a screen, a dashboard, or a visualization platform enabling the user to visualize the graphs provided as output by the system.
The expression “model-compute platform” (MCP) used hereinafter in this specification refers to a framework that provides the required infrastructure and tools to handle a wide range of workloads, from basic apps to complicated computational processes, and can be deployed on-premises, in the cloud, or at the edge. By underpinning software processes, compute platforms help businesses streamline and adapt.
The expression “Large Language Models” or “LLMs” used hereinafter in this specification refers to systems that use natural language understanding to interpret and generate text. In this system, they help extract features and suggest metrics.
The expression “scoring bias” used hereinafter in this specification refers to a systematic error in a scoring system that unfairly favors or disfavours certain inputs or groups.
The expression “orchestration” refers to the automated coordination and management of multiple processes, systems, and services to execute a larger workflow or process. It involves streamlining and optimizing the execution of repeatable tasks across different systems, often involving complex workflows and dependencies.
The expression “APIs” used hereinafter in this specification refers to application programming interfaces, i.e., sets of rules and specifications that allow different software systems to communicate and interact with each other.
The expression “deterministic tool” used hereinafter in this specification refers to a system which, when given the same input, will always produce the same output and follow the same execution path. This predictability is crucial for many applications, including testing, debugging, and achieving reliable performance in critical systems.
The expression “agentic workflow” used hereinafter in this specification refers to workflows that leverage core components of intelligent agents, such as reasoning, planning, and tool use, to execute complex tasks efficiently.
The expression “natural language processing” or “NLP” used hereinafter in this specification refers to a computer program designed to understand, interpret, and generate human language, both written and spoken.
The expression “uniform resource locator” or “URLs” used hereinafter in this specification refers to the web address that tells a browser where to find and retrieve a resource. URLs are crucial for navigating the web and accessing online content.
The expression “named entity recognition” or “NER” used hereinafter in this specification refers to a crucial aspect of natural language processing (NLP) that involves identifying and classifying named entities within text into predefined categories such as people, organizations, locations, and dates.
The expression “bias analysis ledger” used hereinafter in this specification refers to a crucial internal component that acts as a comprehensive log or record of all agentic workflow activities, specifically structured to support bias detection and scoring.
OBJECTS OF THE INVENTION
The primary object of the invention is to provide a system and method for detecting and scoring bias in agentic workflow using hybrid deterministic and generative analysis.
Another object of the invention is to provide a traceable logging mechanism for capturing every agent interaction.
Yet another object of the invention is to provide a deterministic set of MCP (model-compute platform) modules for analysing bias across several dimensions, such as keywords, sources, sentiment, and temporal anchoring.
Yet another object of the present invention is to provide a reflective LLM-based analysis step to detect nuanced and emergent biases.
Yet another object of the invention is to provide a bias scoring engine that aggregates results and produces a composite bias score per agent step and globally.
Yet another object of the invention is to provide a visual dashboard and feedback loop to guide mitigation.
SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The system (10) for detecting and scoring bias in agentic workflows comprises an input unit (1) and an output unit (3), along with a processing unit (2) that includes several key modules; wherein the agent workflow instrumentation layer (21) logs each workflow step with associated metadata. The bias fingerprinting model-compute-platform (MCP) module (22) is composed of five submodules: keyword balance (22.1), source diversity and bias index (22.2), temporal anchoring (22.3), sentiment tilt detector (22.4), and entity mention frequency skew (22.5). A reflective bias evaluation module (23) employs large language models (LLMs) to assess emergent bias. The bias aggregation and composite scoring engine (24) integrates scores from both deterministic and generative modules. A bias reporting and visualization module (25) displays structured findings, while a feedback and prompt rewriting module (26) suggests mitigations to reduce bias in future agentic interactions.
The method begins with user input through the input unit (1), triggering the instrumentation layer (21) to log all agentic workflow steps and metadata. The logged data is analyzed by the MCP module (22) through deterministic techniques, generating scores across multiple bias dimensions. In parallel, the LLM-based reflective agent (23) evaluates responses for nuanced bias types and assigns severity scores. These results are then combined using the scoring engine (24) to compute a unified, weighted bias score at both step and workflow levels. The visualization module (25) presents these findings through dashboards, heatmaps, and graphs. Finally, the feedback module (26) offers actionable suggestions, such as prompt rewriting or model/tool changes, to help mitigate identified biases in future workflows.
The system and method offer significant advantages: they enable auditable bias tracing in otherwise opaque agentic workflows; offer a hybrid of deterministic and generative techniques; are adaptable to any LLM system with plugin/tool integration; facilitate regulatory and ethical compliance in AI systems; and support continuous improvement through human-in-the-loop adjustments.
BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be gained by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawing. The accompanying drawing, which is incorporated into and constitutes a part of the specification, illustrates one or more embodiments of the present invention and, together with the detailed description, serves to explain the principles and implementations of the invention.
FIG.1. illustrates a schematic representation of the structural and functional components of the system.
FIG.2. illustrates a system architecture overview.
FIG.3. illustrates bias fingerprinting MCP breakdown.
FIG.4. illustrates reflective LLM bias agent flow.
FIG.5. illustrates composite bias score aggregation.
FIG.6. illustrates bias reporting dashboard components.
FIG.7. illustrates prompt rewrite feedback loop.
DETAILED DESCRIPTION OF INVENTION
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention describes a system and method for detecting and scoring bias in agentic workflows using hybrid deterministic and generative analysis. The system (10) comprises an input unit (1); a processing unit (2) further comprising an agent workflow instrumentation layer (21), a bias fingerprinting model-compute-platform (MCP) module (22), a large language model (LLM) based reflective bias evaluation module (23), a bias aggregation and composite scoring engine (24), a bias reporting and visualization module (25), and a feedback and prompt rewriting module (26); and an output unit (3); where all the structural and functional components work in co-ordination to employ a method for detecting and scoring bias in agentic workflows.
In an embodiment of the invention, an agent workflow instrumentation layer (21) is configured to log each step in the agentic workflow with unique identifiers and metadata including, but not limited to, prompt content and structure, response content, timestamp of execution, name and version of the model used, the list of tools such as the model-compute-platform (MCP) invoked, and the memory context or prior messages. This comprehensive logging forms the bias analysis ledger further used by the bias fingerprinting MCP module (22).
In yet another embodiment of the invention, the bias analysis ledger is generated by the agent workflow instrumentation layer (21) and contains metadata for each step in the agentic workflow, including prompt content and structure, model response content, timestamp of execution, agent/model identity such as name or version, tools and APIs invoked via the MCP, and memory context such as reused information or prior messages. The bias analysis ledger is configured to record a regular activity log and supports bias detection across multiple dimensions in downstream modules, such as the bias fingerprinting MCP (22), which performs keyword, sentiment, source, and temporal bias checks, and the reflective LLM module (23), which performs introspective analysis. The bias analysis ledger works like a blockchain, storing immutable, detailed records that allow step-wise bias auditing, correlation of bias events with specific agents, prompts, tools, or memory reuse, and feed-forward insights to improve future prompts or tooling.
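By way of a non-limiting illustration of the blockchain-like immutability described above, the following Python sketch chains each ledger entry to the hash of the previous entry; the field names and the choice of SHA-256 are assumptions for illustration, not requirements of the specification:

```python
# Illustrative hash-chained bias analysis ledger (field names are assumed).
import hashlib
import json


class BiasLedger:
    """Append-only log of agentic workflow steps; each entry embeds the
    previous entry's hash, making the record tamper-evident."""

    def __init__(self):
        self.entries = []

    def log_step(self, step_id, prompt, response, model, tools, memory_context, timestamp):
        record = {
            "step_id": step_id,               # unique identifier of the step
            "prompt": prompt,                 # prompt content and structure
            "response": response,             # model response content
            "model": model,                   # agent/model identity (name, version)
            "tools": tools,                   # tools/APIs invoked via the MCP
            "memory_context": memory_context, # reused information or prior messages
            "timestamp": timestamp,           # timestamp of execution
            "prev_hash": self.entries[-1]["hash"] if self.entries else "0" * 64,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record
```

Because each entry's hash covers the previous entry's hash, altering any earlier logged step would invalidate every subsequent hash, which supports the step-wise bias auditing described above.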
In the next embodiment of the invention, the bias fingerprinting MCP module (22), as illustrated in fig. 3, is a deterministic analytical engine that operates independently on the bias analysis ledger to systematically evaluate bias across the following dimensions:
a. A keyword balance (22.1) module uses natural language processing (NLP) techniques to extract dominant thematic keywords from the agent's responses and compares their distribution against a balanced reference corpus, thereby scoring the presence of polarizing or ideologically loaded language.
b. A source diversity and bias index (22.2) focuses on the range and nature of cited sources by analysing URLs or named entities, scoring how diverse or homogeneous the information sources are using media bias and geographical origin databases, and flagging the overuse of homogenous sources.
c. The temporal anchoring (22.3) module checks the timestamps of cited content to assess overemphasis on recency, identifying whether the agent is disproportionately referencing recent information, which may distort historical or long-term context.
d. The sentiment tilt detector (22.4) applies deterministic sentiment analyzer tools such as Vader or TextBlob, which compute sentiment polarity scores to detect emotionally skewed or one-sided language in responses.
e. Finally, the entity mention frequency skew (22.5) module applies named entity recognition (NER) to extract people, organizations, and locations, and evaluates disproportionate emphasis, checking whether certain entities are overrepresented or systematically omitted. Each of these modules generates a bias score on a scale of [0, 1], with higher scores indicating stronger bias in that particular dimension.
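As a non-limiting sketch of one way submodule (22.5) might quantify disproportionate emphasis (the specification does not fix a formula), the following function scores a list of already-extracted entity mentions using normalized Shannon entropy, yielding a value in [0, 1] consistent with the scale described above:

```python
# Illustrative entity mention frequency skew score; NER extraction is
# assumed to have already produced the list of mentions.
from collections import Counter
import math


def entity_skew_score(entity_mentions):
    """Return a score in [0, 1]: 0 when mentions are evenly spread across
    entities, 1 when all mentions concentrate on a single entity.
    Computed as 1 minus the normalized Shannon entropy of the
    mention-frequency distribution."""
    counts = Counter(entity_mentions)
    n = len(counts)
    if n <= 1:
        # One entity dominating completely, or no entities at all.
        return 1.0 if entity_mentions else 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 1.0 - entropy / math.log2(n)
```

A perfectly balanced distribution scores 0.0, while a response mentioning only one entity scores 1.0; systematic omission would be checked separately against an expected entity set.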
In yet a next embodiment of the invention, LLM-based reflective bias agent (23), where it is prompted to introspect and assess its own output or that of another model. It evaluates potential forms of bias like representation, sentiment, or source reliance and returns a structured output that includes the type of bias, a severity score from 0 (neutral) to 1 (high bias), and a brief explanation of each type. This can be done either by the same LLM that generated the response or a neutral third-party LLM such as Claude or GPT for better objectivity.
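The structured output of the reflective agent (23) might take the following illustrative form; the prompt wording and JSON schema are assumptions, with only the bias type, the 0-1 severity score, and the explanation drawn from the description above:

```python
# Illustrative reflection prompt and output validator; the wording and
# schema are assumptions, not mandated by the specification.
import json

REFLECTION_PROMPT = (
    "Review the following agent response for bias (e.g. representation, "
    "sentiment, source reliance). Reply with a JSON list of objects, each "
    'with keys "bias_type", "severity" (0 = neutral, 1 = high bias), and '
    '"explanation".\n\nResponse to review:\n{response}'
)


def parse_reflection(llm_reply: str):
    """Validate the reflective LLM's structured output before it is passed
    to the composite scoring engine."""
    findings = json.loads(llm_reply)
    for finding in findings:
        assert {"bias_type", "severity", "explanation"} <= set(finding)
        assert 0.0 <= finding["severity"] <= 1.0
    return findings
```

In practice the filled-in `REFLECTION_PROMPT` would be sent to the generating LLM or a third-party LLM, and `parse_reflection` guards against malformed replies before aggregation.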
In yet a next embodiment of the invention, the bias aggregator and composite scoring engine (24) combines the scores from the deterministic MCP modules and the reflective LLM evaluation to calculate a final composite bias score. It uses a weighted average approach, where the weights (which can be default or learned from domain-specific feedback) determine the influence of each bias type on the final score. The outputs include step-level bias scores (per agent action), a full bias profile for the agent, and an overall score summarizing the bias across the global workflow.
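A minimal sketch of the weighted-average aggregation performed by the scoring engine (24), assuming per-dimension scores in [0, 1]; equal default weights are an illustrative assumption, since the description allows weights to be learned from domain-specific feedback:

```python
# Illustrative weighted-average composite bias score.
def composite_bias_score(scores, weights=None):
    """Combine per-dimension bias scores (each in [0, 1]) into one
    composite score. `scores` maps dimension name -> score; `weights`
    maps dimension name -> weight and defaults to equal weighting."""
    if weights is None:
        weights = {dim: 1.0 for dim in scores}
    total_weight = sum(weights[dim] for dim in scores)
    return sum(scores[dim] * weights[dim] for dim in scores) / total_weight
```

The same function can be applied per agent step and again across all steps to produce the step-level and global workflow scores described above.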
In yet a next embodiment of the invention, the bias reporting and visualization module (25) makes the bias findings accessible and actionable, where an optional dashboard displays visual reports. These include heatmaps that show where bias is most concentrated across steps, distribution of the types of bias identified, graphs highlighting which entities are over- or underrepresented, and source citation graph with region and bias tagging indicating the geopolitical or ideological leaning of sources used. This helps stakeholders quickly grasp patterns and trends.
In yet a next embodiment of the invention, the feedback and prompt rewriting module (26) provides actionable suggestions to reduce or eliminate detected bias. It recommends rephrasing prompts in a more neutral tone, using a broader or more balanced set of data sources, or reconfiguring the tools and models (e.g., switching to a different LLM or tool), thereby offering mitigation strategies that help in improving future interactions and creating more balanced agentic workflows.
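As a toy illustration of the prompt-rephrasing suggestion described above, loaded terms can be substituted with neutral alternatives; the substitution table here is an assumption for illustration, whereas a production module would derive its rewrites from the bias report:

```python
# Toy prompt neutralizer; the substitution table is illustrative only.
import re

NEUTRAL_SUBSTITUTIONS = {
    "disastrous": "significant",
    "obviously": "",                     # drop presumptive qualifiers
    "everyone knows": "some sources state",
}


def suggest_neutral_prompt(prompt: str) -> str:
    """Return a rephrased prompt with loaded terms replaced by neutral
    alternatives, then collapse any leftover double spaces."""
    rewritten = prompt
    for loaded, neutral in NEUTRAL_SUBSTITUTIONS.items():
        rewritten = re.sub(re.escape(loaded), neutral, rewritten, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", rewritten).strip()
```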
In preferred embodiment of the invention, the method for detecting and scoring bias in an agentic workflow as illustrated in Fig.2. includes the steps as follows:
- initiating the bias analysis process upon user input using at least one input unit (1),
- orchestrating the sequence of tasks based on the trigger by the agent workflow instrumentation layer (21),
- logging each agent’s action and execution steps for transparency and traceability by the agent workflow instrumentation layer (21),
- running deterministic modules to detect measurable bias dimension by bias fingerprinting MCP module (22), thereby generating deterministic structured bias scores from MCP analysis,
- using an LLM to assess nuanced and emergent biases in parallel by the reflective LLM bias agent (23), thereby producing an LLM-based bias score reflection,
- combining deterministic and LLM scores into a unified bias score by the bias score aggregator and composite scoring engine (24),
- making the bias findings and summaries accessible in the form of bias heatmap and reports by the bias reporting and visualization module (25),
- offering revised prompt recommendation to mitigate bias and recommending suggestions by the feedback and prompt rewriting module (26).
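The method steps above can be tied together in a compact illustrative pipeline; the keyword check and the stubbed reflective severity are toy stand-ins for modules (22) and (23), not the claimed implementations:

```python
# Compact illustrative pipeline over logged steps; toy stand-ins only.
def run_bias_pipeline(steps):
    """For each logged step, compute a toy deterministic score, combine it
    with a stubbed reflective severity, and report a composite per step."""
    report = []
    for step in steps:
        # (22) deterministic fingerprinting: count ideologically loaded terms
        loaded = {"always", "never", "obviously"}
        words = step["response"].lower().split()
        keyword_score = min(1.0, 10 * sum(w in loaded for w in words) / max(len(words), 1))
        # (23) reflective evaluation: severity assumed supplied with the step
        reflective_score = step.get("reflective_severity", 0.0)
        # (24) aggregation with equal weights into a composite step score
        report.append({
            "step_id": step["step_id"],
            "composite": (keyword_score + reflective_score) / 2,
        })
    return report
```

The visualization (25) and feedback (26) stages would then consume this per-step report to render heatmaps and propose prompt rewrites.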
In yet another embodiment of the invention, the reflective LLM bias agent flow (23) as illustrated in fig. 4. includes the steps as follows:
- capturing the AI agent’s output and related context or metadata using agentic step and metadata,
- generating a tailored prompt using the bias reflection prompt to evaluate potential bias in the captured step,
- assessing the bias presence on the prompt using the reflective LLM,
- determining the LLM's response including identified bias type, severity score and explanation; wherein the bias type categorizes the kind of bias (e.g., political, gender, source-based), the severity score assigns a numerical or qualitative measure of the bias’s impact, and the explanation provides rationale behind the bias detection,
- providing a structured bias report compiling all bias insights into a comprehensive report format.
In yet another embodiment of the invention, the composite bias score aggregation (24) as illustrated in fig.5. functions using the steps as follows:
- assessing bias using the keyword bias score by analyzing the frequency and polarity of specific terms used,
- evaluating the diversity and credibility of information sources using the source bias score,
- detecting over-reliance on recent or outdated data affecting objectivity using the temporal bias score,
- measuring emotional tone to identify skewed sentiment patterns using the sentiment bias score,
- analyzing how different entities such as people or organizations are portrayed using the entity bias score,
- using a language model to evaluate nuanced or emergent biases by the LLM reflective score,
- combining all the individual bias scores using predefined weights by the weighted aggregator,
- providing an output as a composite bias score (0–1) reflecting overall bias intensity.
In yet a next embodiment of the invention, bias reporting and visualization module (25) as illustrated in fig. 6 shows the components as follows:
- Step-wise heatmap visualizes bias intensity across each step in the workflow.
- Bias category distribution displays the proportion of different bias types detected.
- Top affected entities that highlight the individuals or groups most impacted by bias.
- Source diversity graph shows the variety and balance of information sources used.
- Mitigation suggestions provide actionable recommendations to reduce detected bias.
In yet a next embodiment of the invention, the feedback and prompt rewriting module (26) as illustrated in fig. 7 is configured to follow the steps as follows:
- identifying and quantifying bias in the agent’s responses or actions using the bias scores and reports,
- analyzing bias results and determining necessary modifications to the prompt or setup by the prompt rewrite module (26),
- enabling prompt rephrasing, to modify the wording of prompts to reduce or eliminate bias,
- enabling a change of the tools used, to swap out or select different tools or models to achieve unbiased results,
- enabling adjustment of the agent role, to modify the assigned role or perspective of the agent to ensure balanced outputs,
- re-running the workflow thereby executing the revised setup to test for improved and less biased performance.
According to yet another embodiment, the system and method offer significant advantages: they enable auditable bias tracing in otherwise opaque agentic workflows; offer a hybrid of deterministic and generative techniques; are adaptable to any LLM system with plugin/tool integration; facilitate regulatory and ethical compliance in AI systems; and support continuous improvement through human-in-the-loop adjustments. The present system and method may be used by financial advisory agents, legal research assistants, educational content generators, journalism co-pilots, or recruitment and HR screening bots.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
CLAIMS:
1. A system and method for detecting and scoring bias in agentic workflows using hybrid deterministic and generative analysis;
wherein the system (10) comprises an input unit (1); a processing unit (2) further comprising an agent workflow instrumentation layer (21), a bias fingerprinting model-compute-platform (MCP) module (22), a large language model (LLM) based reflective bias evaluation module (23), a bias aggregation and composite scoring engine (24), a bias reporting and visualization module (25), and a feedback and prompt rewriting module (26); and an output unit (3); such that all the structural and functional components work in co-ordination to employ a method for detecting and scoring bias in agentic workflows;
characterized in that:
the method for detecting and scoring bias in an agentic workflow includes the steps of;
- initiating the bias analysis process upon user input using at least one input unit (1),
- orchestrating the sequence of tasks based on the trigger by the agent workflow instrumentation layer (21),
- logging each agent’s action and execution steps for transparency and traceability by the agent workflow instrumentation layer (21),
- running deterministic modules to detect measurable bias dimension by bias fingerprinting MCP module (22), thereby generating deterministic structured bias scores from MCP analysis,
- using an LLM to assess nuanced and emergent biases in parallel by the reflective LLM bias agent (23), thereby producing an LLM-based bias score reflection,
- combining deterministic and LLM scores into a unified bias score by the bias score aggregator and composite scoring engine (24),
- making the bias findings and summaries accessible in the form of bias heatmap and reports by the bias reporting and visualization module (25),
- offering revised prompt recommendation to mitigate bias and recommending suggestions by the feedback and prompt rewriting module (26).
2. The system and method as claimed in claim 1, wherein the agent workflow instrumentation layer (21) is configured to log each step in the agentic workflow with unique identifiers and metadata including, but not limited to prompt content and structure, response content, time executor, name and version of the model used, list of tools such as model-compute-platform (MCP) invoked and the memory context or prior messages.
3. The system as claimed in claim 1, wherein the bias analysis ledger generated by the agent workflow instrumentation layer (21) contains metadata for each step in the agentic workflow including prompt content and structure, model response content, timestamp of execution, agent/model identity, tools and APIs invoked via the MCP and memory context; such that it is configured to record regular activity log and supports bias detection across multiple dimensions in downstream modules like the bias fingerprinting MCP (22), which performs keyword, sentiment, source, and temporal bias checks, the reflective LLM module (23) which does introspective analysis.
4. The system and method as claimed in claim 1, wherein the bias fingerprinting model-compute-platform (MCP) (22) is a central module that coordinates bias detection across multiple analytical dimensions including:
- a keyword balance analyzer (22.1) that examines the use of polarizing or loaded keywords, further enabling a semantic spread score to measure the diversity and balance of language used;
- a source diversity checker (22.2) that evaluates the variety and origin of cited sources, further enabling the source origin score to quantify how broadly sourced the information is;
- a temporal anchoring detector (22.3) that identifies over-reliance on recent or time-specific data, further enabling the recency score to measure the temporal distribution of referenced content;
- a sentiment tilt analyzer (22.4) that assesses emotional tone and its directional bias, further enabling the polarity score to indicate the sentiment leaning as positive, negative, or neutral;
- an entity mention frequency scanner (22.5) that tracks how often specific entities are mentioned, further enabling entity bias scores that detect potential bias from disproportionate entity mentions.
5. The system and method as claimed in claim 1, wherein the reflective LLM bias agent flow (23) includes the steps of;
- capturing the AI agent’s output and related context or metadata using agentic step and metadata,
- generating a tailored prompt using the bias reflection prompt to evaluate potential bias in the captured step,
- assessing the bias presence on the prompt using the reflective LLM,
- determining the LLM's response including identified bias type, severity score and explanation;
- providing a structured bias report compiling all bias insights into a comprehensive report format.
6. The system and method as claimed in claim 1, wherein the bias type is categorized into the kind of bias including but not limited to political, gender, source-based; the severity score assigns a numerical or qualitative measure of the bias’s impact; and the explanation provides rationale behind the bias detection.
7. The system and method as claimed in claim 1, wherein the composite bias score aggregation (24) functions using the steps of;
- assessing bias using the keyword bias score by analyzing the frequency and polarity of specific terms used,
- evaluating the diversity and credibility of information sources using the source bias score,
- detecting over-reliance on recent or outdated data affecting objectivity using the temporal bias score,
- measuring emotional tone to identify skewed sentiment patterns using the sentiment bias score,
- analyzing how different entities such as people or organizations are portrayed using the entity bias score,
- using a language model to evaluate nuanced or emergent biases by the LLM reflective score,
- combining all the individual bias scores using predefined weights by the weighted aggregator,
- providing an output as a composite bias score (0–1) reflecting overall bias intensity.
8. The system and method as claimed in claim 1, wherein the bias reporting and visualization module (25) further includes;
- step-wise heatmap visualizes bias intensity across each step in the workflow,
- bias category distribution displays the proportion of different bias types detected,
- top affected entities that highlight the individuals or groups most impacted by bias,
- source diversity graph shows the variety and balance of information sources used,
- mitigation suggestions provide actionable recommendations to reduce detected bias.
9. The system and method as claimed in claim 1, wherein the feedback and prompt rewriting module (26) include the steps of;
- identifying and quantifying bias in the agent’s responses or actions using the bias scores and reports,
- analyzing bias results and determining necessary modifications to the prompt or setup by the prompt rewrite module (26),
- enabling prompt rephrasing, to modify the wording of prompts to reduce or eliminate bias,
- enabling a change of the tools used, to swap out or select different tools or models to achieve unbiased results,
- enabling adjustment of the agent role, to modify the assigned role or perspective of the agent to ensure balanced outputs,
- re-running the workflow thereby executing the revised setup to test for improved and less biased performance.
Dated this 17th day of July, 2025.
| # | Name | Date |
|---|---|---|
| 1 | 202521068252-STATEMENT OF UNDERTAKING (FORM 3) [17-07-2025(online)].pdf | 2025-07-17 |
| 2 | 202521068252-POWER OF AUTHORITY [17-07-2025(online)].pdf | 2025-07-17 |
| 3 | 202521068252-FORM 1 [17-07-2025(online)].pdf | 2025-07-17 |
| 4 | 202521068252-FIGURE OF ABSTRACT [17-07-2025(online)].pdf | 2025-07-17 |
| 5 | 202521068252-DRAWINGS [17-07-2025(online)].pdf | 2025-07-17 |
| 6 | 202521068252-DECLARATION OF INVENTORSHIP (FORM 5) [17-07-2025(online)].pdf | 2025-07-17 |
| 7 | 202521068252-COMPLETE SPECIFICATION [17-07-2025(online)].pdf | 2025-07-17 |
| 8 | Abstract.jpg | 2025-08-02 |
| 9 | 202521068252-FORM-9 [26-09-2025(online)].pdf | 2025-09-26 |
| 10 | 202521068252-FORM 18 [01-10-2025(online)].pdf | 2025-10-01 |