Abstract: Disclosed is a system for reducing bias in text, comprising: an input interface configured to receive generated text to be analyzed for biases; a preprocessing module operatively connected to said input interface, said preprocessing module configured to preprocess said received text; an embedding generator communicatively coupled to said preprocessing module, said embedding generator configured to transform the preprocessed text into BERT embeddings; a bias detection processor including a Bidirectional Encoder Representations from Transformers (BERT) model for bias detection, wherein said bias detection processor is configured to receive said BERT embeddings and to identify potential biases within the embeddings; a qualification unit configured to categorize the identified biases using said bias detection processor; a strategy determination unit configured to determine strategies for reducing the identified biases using said bias detection processor; a mitigation unit configured to apply the determined strategies to the text to produce output text with reduced biases, wherein said mitigation unit is integrated within said bias detection processor; and an output interface configured to return the output text with the reduced biases.
Description: Field of the Invention
The present disclosure generally relates to text analysis systems. Particularly, the present disclosure relates to a system for reducing bias in text.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
The field of artificial intelligence and machine learning has witnessed substantial advancements, particularly in the development of models capable of generating human-like text. These models are integral to a variety of applications, from automated writing assistants to chatbots. While these technological advances offer significant benefits, they also pose challenges related to the biases inherent in the training data. Such biases can manifest as gender, racial, or cultural prejudices, adversely affecting fairness, inclusivity, and social equality.
Traditional methods for identifying and addressing biases in texts generated by AI have predominantly involved manual interventions or basic automated rules. These approaches not only demand considerable effort but also tend to overlook the subtle and intricate nature of bias. Existing methodologies are often insufficient for addressing deeply embedded biases within the complex architectures of contemporary neural networks, particularly those used in natural language processing (NLP) systems.
In response to these shortcomings, the development of advanced techniques for bias detection in AI-generated texts has become crucial. Such techniques involve the utilization of sophisticated NLP tools, which facilitate a deeper analysis of text. The Bidirectional Encoder Representations from Transformers (BERT) embeddings, for example, offer a refined understanding of language nuances, enabling the detection of a broader array of biases more effectively than previously possible.
Upon identifying biases, it is imperative to apply specific mitigation strategies tailored to the type of bias uncovered. These strategies are designed to be dynamic, allowing for adjustments based on the particular biases present, thereby enhancing the fairness and inclusivity of the text output. The adaptability of these strategies ensures their applicability across diverse contexts and languages, making the method highly versatile.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and/or techniques for detecting and mitigating bias in AI-generated texts.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
In an aspect, the present disclosure provides a system for reducing bias in text. The system comprises an input interface configured to receive generated text for analysis of biases. A preprocessing module, operatively connected to the input interface, is configured to preprocess the received text. An embedding generator, communicatively coupled to the preprocessing module, is configured to transform the preprocessed text into BERT embeddings. A bias detection processor, including a Bidirectional Encoder Representations from Transformers (BERT) model for bias detection, is configured to receive the BERT embeddings and identify potential biases within the embeddings. A qualification unit is configured to categorize the identified biases using the bias detection processor. A strategy determination unit is configured to determine strategies for reducing the identified biases using the bias detection processor. A mitigation unit is configured to apply the determined strategies to the text to produce output text with reduced biases; this mitigation unit is integrated within the bias detection processor. An output interface is configured to return the output text with reduced biases.
In an embodiment, the preprocessing module comprises a conditional analysis component configured to determine whether tokenization, conversion to lowercase, and removal of punctuation from the received text are necessary before generating the BERT embeddings.
In an embodiment, the embedding generator is further configured to generate BERT embeddings only if the conditional analysis component determines that preprocessing the text is necessary.
In an embodiment, the bias detection processor is configured to perform a conditional check to proceed with bias qualification only when potential biases are detected within the BERT embeddings.
In an embodiment, the system further comprises a bias quantification module within the bias detection processor, the bias quantification module configured to quantify the level of bias detected in the BERT embeddings.
In an embodiment, the strategy determination unit is further configured to develop multiple mitigation strategies based on the category and quantification of the identified biases.
In an embodiment, the mitigation unit comprises a neutralization component configured to rephrase biased phrases detected in the text.
In an embodiment, the mitigation unit also includes a rephrasing validation component configured to validate the neutrality of the rephrased text.
In an embodiment, the output interface is further configured to provide the output text with reduced biases to a user for review before finalizing the output.
In an embodiment, the input interface is further configured to receive feedback from a user regarding the effectiveness of the bias mitigation in the output text, and to use the feedback to adjust the operations of the preprocessing module, the embedding generator, and the bias detection processor.
Brief Description of the Drawings
The features and advantages of the present disclosure will be more clearly understood from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates a system for reducing bias in text, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a sequence diagram delineating the operational workflow of a system for reducing bias in text, in accordance with embodiments of the present disclosure.
FIG. 3 illustrates a flow diagram for reducing bias in text, in accordance with the embodiments of the present disclosure.
Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized, and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a system (100) for reducing bias in text, in accordance with the embodiments of the present disclosure. The term "system (100)" as used throughout the present disclosure relates to an architecture designed to reduce biases in text. The system (100) comprises various components each configured to perform specific functions in the process of detecting and mitigating bias in text data.
The term "input interface (102)" as used throughout the present disclosure refers to a component of the system (100) configured to receive text that is to be analyzed for biases. The input interface (102) serves as the entry point for data into the system, ensuring that generated text is collected efficiently for subsequent analysis. In operation, the input interface (102) receives text data, which could be from various sources such as documents, web pages, or input fields in applications, thereby facilitating the initial step of bias detection and reduction workflow.
Connected operatively to the input interface (102) is the "preprocessing module (104)," a component designed to prepare the received text for further analysis. The preprocessing module (104) performs several functions such as cleaning the text, removing noise, and standardizing the format to ensure that the text is in a suitable state for embedding generation. The preprocessing of text is crucial as it enhances the accuracy and efficiency of the embedding generator (106) in producing meaningful text representations.
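By way of a non-limiting, purely illustrative example, the cleaning steps performed by the preprocessing module (104) may be sketched as follows. The listing is a simplified stand-in using only the Python standard library; the function name and the whitespace tokenization are assumptions for illustration, as a deployed system would typically use the tokenizer paired with the BERT model.

```python
import string

def preprocess(text: str) -> list[str]:
    """Illustrative cleaning: lowercase, strip punctuation, tokenize."""
    # Lowercase for case-insensitive downstream analysis.
    text = text.lower()
    # Remove punctuation characters (noise removal).
    text = text.translate(str.maketrans("", "", string.punctuation))
    # Simple whitespace tokenization; a production system would use
    # the subword tokenizer matched to the embedding model.
    return text.split()
```

For example, `preprocess("Hello, World!")` yields the tokens `["hello", "world"]`.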
The "embedding generator (106)" is a critical component of the system (100) configured to transform the preprocessed text into BERT embeddings. BERT (Bidirectional Encoder Representations from Transformers) embeddings are advanced representations that capture contextual relationships between words in text. The embedding generator (106) utilizes these embeddings to create a rich, context-aware representation of the text, which is essential for effective bias detection. This transformation is performed through sophisticated algorithms that analyze the text contextually rather than in isolation, thereby generating embeddings that reflect the deeper semantic meanings of the text.
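In practice, the embedding generator (106) would obtain contextual embeddings from a pretrained BERT model (for instance, via the Hugging Face `transformers` library's `AutoModel.from_pretrained("bert-base-uncased")`). The sketch below substitutes a deterministic hash-based pseudo-embedding so that the shape of the interface can be shown without model weights; the dimension, scaling, and function names are illustrative assumptions, not the disclosed implementation.

```python
import hashlib

EMB_DIM = 8  # toy dimension; real BERT-base vectors have 768 dimensions

def embed_token(token: str) -> list[float]:
    """Deterministic pseudo-embedding standing in for a BERT vector."""
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    # Scale bytes from [0, 255] into [-1, 1] to mimic learned embeddings.
    return [b / 127.5 - 1.0 for b in digest[:EMB_DIM]]

def embed_text(tokens: list[str]) -> list[list[float]]:
    """One vector per token; a real BERT pass would also mix in context."""
    return [embed_token(t) for t in tokens]
```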
The "bias detection processor (108)" incorporates a BERT model specifically adapted for bias detection. This processor is configured to receive BERT embeddings from the embedding generator (106) and analyze these embeddings to identify potential biases present in the text. The use of a BERT model in the bias detection processor (108) enables the system to understand and interpret the contextual nuances of the text, which are indicative of biases. The detection of biases is achieved through the analysis of patterns and discrepancies in the embeddings that correlate with biased sentiments or expressions.
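One way such pattern analysis over embeddings can be framed is as similarity against learned "bias direction" vectors. The following sketch assumes hypothetical three-dimensional directions and an assumed alignment threshold; in the disclosed system these would instead be derived from the fine-tuned BERT model.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical bias-direction vectors, 3-dimensional for brevity;
# in practice these would be learned from labeled examples.
BIAS_DIRECTIONS = {
    "gender": [1.0, 0.0, 0.0],
    "racial": [0.0, 1.0, 0.0],
}

def detect_bias(embedding: list[float], threshold: float = 0.8) -> list[str]:
    """Return bias types whose direction the embedding aligns with."""
    return [name for name, d in BIAS_DIRECTIONS.items()
            if cosine(embedding, d) >= threshold]
```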
Subsequent to the detection of biases, the "qualification unit (110)" categorizes the identified biases using data from the bias detection processor (108). The qualification unit (110) classifies biases into various types, such as gender bias, racial bias, or age bias, based on predefined criteria and categories. This categorization is vital as it helps in understanding the nature of the biases and in formulating appropriate strategies for their mitigation.
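A minimal sketch of such categorization, using small hand-written lexicons as a stand-in for the predefined criteria mentioned above; the category names and member terms are illustrative assumptions only.

```python
# Illustrative lexicons; real categories and terms would come from
# curated data rather than this toy mapping.
CATEGORY_LEXICONS = {
    "gender": {"he", "she", "chairman", "stewardess"},
    "age": {"elderly", "young", "millennial"},
}

def qualify(tokens: list[str]) -> dict[str, list[str]]:
    """Group flagged tokens under the bias category they belong to."""
    found: dict[str, list[str]] = {}
    for tok in tokens:
        for category, lexicon in CATEGORY_LEXICONS.items():
            if tok in lexicon:
                found.setdefault(category, []).append(tok)
    return found
```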
The "strategy determination unit (112)" is configured to determine strategies for reducing the identified biases. This unit utilizes the categorized biases from the qualification unit (110) and develops tailored strategies aimed at neutralizing these biases. The strategies may include altering certain word choices, rephrasing sentences, or applying more comprehensive linguistic models that promote neutrality. The determination of effective strategies is fundamental to the bias mitigation process, ensuring that the output text reflects an unbiased perspective.
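The mapping from a categorized (and, per later embodiments, quantified) bias to a strategy could be sketched as below. The severity scale, thresholds, and strategy names are hypothetical; severity is assumed to be a score in [0, 1] supplied by a quantification step.

```python
def determine_strategy(category: str, severity: float) -> str:
    """Pick an illustrative mitigation strategy from category and severity."""
    if severity >= 0.7:
        return "rephrase_sentence"        # heavy bias: rewrite the sentence
    if category == "gender":
        return "substitute_neutral_term"  # e.g. "chairman" -> "chairperson"
    return "flag_for_review"              # mild or uncategorized bias
```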
Finally, the "mitigation unit (114)" is integrated within the bias detection processor (108) and is configured to apply the determined strategies to the text. The mitigation unit (114) processes the text using the strategies devised by the strategy determination unit (112) to adjust the text in a way that reduces identified biases. This adjusted text is then prepared for output, embodying the reduced bias as intended.
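The simplest form of such an adjustment, term-by-term neutral substitution, can be sketched as follows; the replacement mapping is a hypothetical sample, and a deployed system would draw on a larger curated mapping or a paraphrasing model.

```python
# Hypothetical neutral replacements, for illustration only.
NEUTRAL_TERMS = {
    "chairman": "chairperson",
    "stewardess": "flight attendant",
    "mankind": "humanity",
}

def neutralize(text: str) -> str:
    """Replace known biased terms with neutral equivalents, word by word."""
    out = []
    for word in text.split():
        # Preserve trailing punctuation while swapping the core word.
        core = word.rstrip(".,;:!?")
        tail = word[len(core):]
        repl = NEUTRAL_TERMS.get(core.lower(), core)
        out.append(repl + tail)
    return " ".join(out)
```

For example, `neutralize("The chairman spoke to mankind.")` returns `"The chairperson spoke to humanity."` (capitalization of replaced terms is not preserved in this sketch).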
An "output interface" is also part of the system (100), configured to return the output text with the reduced biases to the user or downstream applications. The output interface facilitates the distribution of the processed text, ensuring that it is accessible for use in various external systems or for further analysis.
Optionally, the system (100) may include additional modules for enhanced analytics and reporting, providing users with insights into the types and frequencies of biases detected and the effectiveness of the mitigation strategies employed. Working examples of the system in operation could involve analyzing corporate communications to ensure neutrality in external messaging or reviewing legal documents to detect and mitigate unintended biases that could affect interpretation and compliance.
In an embodiment, the preprocessing module (104) of the system (100) further comprises a conditional analysis component. Said component is configured to determine whether tokenization, conversion to lowercase, and removal of punctuation from the received text are necessary before generating BERT embeddings. The conditional analysis component evaluates the text to decide the preprocessing steps required based on the nature and format of the input text. If the text is determined to require preprocessing, the necessary steps are executed to ensure that the text is in an optimal format for effective embedding generation. The inclusion of said conditional analysis component enhances the flexibility and adaptability of the preprocessing module (104), allowing it to tailor preprocessing operations to the specific requirements of the text, thereby improving the overall efficiency and accuracy of the embedding generation process.
In an embodiment, the embedding generator (106) of the system (100) is further configured to generate BERT embeddings only if the conditional analysis component determines that preprocessing the text is necessary. This configuration allows the embedding generator (106) to operate more efficiently by ensuring that only suitably preprocessed text is subjected to the computationally intensive process of generating BERT embeddings. By linking the operation of the embedding generator (106) to the findings of the conditional analysis component, unnecessary processing of text that does not require transformation into BERT embeddings is avoided, thus optimizing the system's resource utilization and processing time.
In an embodiment, the bias detection processor (108) of the system (100) is configured to perform a conditional check to proceed with bias qualification only when biases are detected within the BERT embeddings. This conditional operation ensures that the system (100) focuses its resources on text instances where biases have been identified, thereby enhancing efficiency. The conditional check acts as a filter, allowing the bias detection processor (108) to prioritize processing power and analytical attention on text segments with likely biases, streamlining the overall bias identification and mitigation process.
In an embodiment, the bias detection processor (108) within the system (100) includes a bias quantification module. Said bias quantification module is configured to quantify the level of bias detected in the BERT embeddings. By quantifying the bias levels, the system (100) can apply a measured response appropriate to the severity and type of bias identified. The quantification of biases provides a metric-based approach to bias management, enabling the system to assess the effectiveness of the applied mitigation strategies by comparing the levels of bias before and after processing, thereby enhancing the precision of the bias mitigation process.
In an embodiment, the strategy determination unit (112) of the system (100) is further configured to develop multiple mitigation strategies based on the category and quantification of the identified biases. This configuration enables the creation of tailored strategies that are specifically designed to address the unique characteristics of each type of bias identified. By developing multiple strategies, the strategy determination unit (112) offers a versatile approach to bias mitigation, enabling the application of the most effective strategy based on the context and severity of the bias, thus enhancing the adaptability and effectiveness of the bias reduction efforts.
In an embodiment, the mitigation unit (114) of the system (100) further comprises a neutralization component configured to rephrase biased phrases detected in the text. Said neutralization component operates to adjust biased expressions into neutral language, actively reducing the presence of bias in the output text. The integration of such a neutralization component within the mitigation unit (114) is vital for direct intervention in the text, ensuring that the final output is free from biases that could affect the message's fairness and objectivity.
In an embodiment, the mitigation unit (114) of the system (100) also includes a rephrasing validation component. Said rephrasing validation component is configured to validate the neutrality of the rephrased text. This component ensures that the adjustments made by the neutralization component effectively eliminate biases without introducing new biases or distorting the original meaning of the text. The validation process is crucial for maintaining the integrity and accuracy of the output text, ensuring that the mitigation efforts result in genuinely unbiased and accurate communication.
In an embodiment, the output interface of the system (100) is further configured to provide the output text with reduced biases to a user for review before finalizing the output. By incorporating a review step, the output interface ensures that the user can verify the effectiveness of the bias mitigation process and make any necessary adjustments before the text is finalized. This step enhances user control over the final product, ensuring satisfaction with the output and the opportunity to tailor the final text to specific needs or preferences.
In an embodiment, the input interface (102) of the system (100) is further configured to receive feedback from a user regarding the effectiveness of the bias mitigation in the output text and to use said feedback to adjust the operations of the preprocessing module (104), the embedding generator (106), and the bias detection processor (108). This configuration enables a feedback loop where user insights contribute directly to refining the system's performance. By incorporating user feedback into the operational adjustments, the system (100) continually evolves and improves, enhancing its ability to effectively reduce biases in text.
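One simple form such a feedback loop could take is nudging the detection threshold from accept/reject signals; the class below is a sketch under that assumption (acceptance loosens detection slightly, rejection tightens it, clamped to an assumed range), not the disclosed adjustment mechanism.

```python
class FeedbackAdapter:
    """Illustrative: adjust a detection threshold from user feedback."""

    def __init__(self, threshold: float = 0.8, step: float = 0.05):
        self.threshold = threshold  # assumed starting sensitivity
        self.step = step

    def record(self, accepted: bool) -> float:
        if accepted:
            # Output judged unbiased: flag a little less aggressively.
            self.threshold = min(0.95, self.threshold + self.step)
        else:
            # Bias slipped through: make detection more sensitive.
            self.threshold = max(0.5, self.threshold - self.step)
        return self.threshold
```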
FIG. 2 illustrates a sequence diagram delineating the operational workflow of a system for reducing bias in text, in accordance with embodiments of the present disclosure. The process commences with the 'Start' block, leading to 'Preprocessing', where an initial decision is made about the necessity of preprocessing the text. If preprocessing is deemed necessary, tasks such as tokenization, conversion to lowercase, and removal of punctuation are performed. The processed text is then transformed into BERT embeddings, capturing the contextual information essential for bias detection. The subsequent step involves the 'Bias Detection' block, where biases within the embeddings are identified. If biases are detected, the process advances to the 'Bias Detected Category' block, which classifies the identified biases, and then to the 'Bias Quantification' block, quantifying the level of detected biases. Based on the quantification and categorization, 'Mitigation Strategies' are determined, followed by the 'Bias Mitigation' block, where actual mitigation steps are applied. The system incorporates a 'Neutralization' component and a subsequent 'Rephrasing Validation' step to ensure that the rephrased text achieves the desired neutrality. If no biases are detected, or once the mitigation steps are applied and validated, the text is channeled towards the 'Output text' block, producing the final text with reduced biases ready for use. The diagram concludes with a 'Stop' block, signifying the end of the bias reduction process. The flowchart provides a clear and systematic approach to reducing biases in text, emphasizing the importance of each step in ensuring the neutrality of the output text, and highlighting the sequential dependency of the stages involved in achieving the end goal of bias mitigation.
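The conditional flow of FIG. 2 (preprocess only if needed; qualify and mitigate only if bias is detected; otherwise pass the text through) can be sketched end to end as follows, using token lexicons as a stand-in for the BERT-based blocks; all names and term lists are hypothetical.

```python
import string

BIASED_TERMS = frozenset({"chairman", "stewardess"})
REPLACEMENTS = {"chairman": "chairperson", "stewardess": "flight attendant"}

def needs_preprocessing(text: str) -> bool:
    """Decision block: preprocess only if casing or punctuation is present."""
    return text != text.lower() or any(c in string.punctuation for c in text)

def run_pipeline(text: str) -> str:
    # Preprocessing (conditional).
    if needs_preprocessing(text):
        tokens = text.lower().translate(
            str.maketrans("", "", string.punctuation)).split()
    else:
        tokens = text.split()
    # Bias detection (lexicon stand-in for the BERT model).
    detected = [t for t in tokens if t in BIASED_TERMS]
    # No bias detected: the text flows straight to the output block.
    if not detected:
        return text
    # Mitigation: neutral substitution, then output text.
    return " ".join(REPLACEMENTS.get(t, t) for t in tokens)
```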
FIG. 3 illustrates a flow diagram for reducing bias in text, in accordance with the embodiments of the present disclosure. The process initiates with an input phase where the generated text intended for bias analysis is received. This text is subjected to preprocessing to normalize and prepare it for further processing. The preprocessed text is then converted into BERT embeddings, which serve as input for the bias detection phase. In this phase, a finely tuned BERT model for bias detection scrutinizes the embeddings to ascertain the presence of biases. Following detection, the biases are qualified in the bias qualification step, categorizing them into specific types for a more targeted approach. With the biases identified and categorized, a mitigation strategy is formulated, outlining the steps necessary to neutralize the biases present in the text. The bias mitigation phase implements these strategies, actively amending and adjusting the text to diminish the biases. The process culminates with the output phase, where the text, now with reduced biases, is returned. This flow diagram encapsulates a comprehensive bias reduction protocol, illustrating the sequential steps from receiving text input to delivering unbiased text output, thus emphasizing the systematic and methodical approach of the disclosed system.
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein, relates to random access memory, read-only memory, and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
Claims
I/We claim:
1. A system (100) for reducing bias in text, comprising:
an input interface (102) configured to receive generated text to be analyzed for biases;
a preprocessing module (104) operatively connected to said input interface, said preprocessing module configured to preprocess said received text;
an embedding generator (106) communicatively coupled to said preprocessing module (104), said embedding generator (106) configured to transform the preprocessed text into BERT embeddings;
a bias detection processor (108) including a Bidirectional Encoder Representations from Transformers (BERT) model for bias detection, wherein said bias detection processor (108) is configured to receive said BERT embeddings and to identify potential biases within the embeddings;
a qualification unit (110) configured to categorize the identified biases using said bias detection processor (108);
a strategy determination unit (112) configured to determine strategies for reducing the identified biases using said bias detection processor (108);
a mitigation unit (114) configured to apply the determined strategies to the text to produce output text with reduced biases, wherein said mitigation unit (114) is integrated within said bias detection processor (108); and
an output interface configured to return the output text with the reduced biases.
2. The system (100) of claim 1, wherein said preprocessing module (104) further comprises a conditional analysis component configured to determine whether tokenization, conversion to lowercase, and removal of punctuation from said received text are necessary before generating said BERT embeddings.
3. The system (100) of claim 2, wherein said embedding generator (106) is further configured to generate BERT embeddings only if said conditional analysis component determines that preprocessing said text is necessary.
4. The system (100) of claim 1, wherein said bias detection processor (108) is configured to perform a conditional check to proceed with bias qualification only when potential biases are detected within said BERT embeddings.
5. The system (100) of claim 1, further comprising a bias quantification module within said bias detection processor (108), the bias quantification module configured to quantify the level of bias detected in said BERT embeddings.
6. The system (100) of claim 5, wherein said strategy determination unit (112) is further configured to develop multiple mitigation strategies based on the category and quantification of the identified biases.
7. The system (100) of claim 1, wherein said mitigation unit (114) further comprises a neutralization component configured to rephrase biased phrases detected in said text.
8. The system (100) of claim 7, wherein said mitigation unit (114) also includes a rephrasing validation component configured to validate the neutrality of the rephrased text.
9. The system (100) of claim 1, wherein said output interface is further configured to provide the output text with reduced biases to a user for review before finalizing the output.
10. The system (100) of claim 1, wherein said input interface (102) is further configured to receive feedback from a user regarding the effectiveness of the bias mitigation in the output text, and to use said feedback to adjust the operations of said preprocessing module (104), said embedding generator (106), and said bias detection processor (108).
METHOD FOR BIAS DETECTION AND MITIGATION THEREOF IN AI GENERATED TEXTS
Fig. 1
Drawings: FIG. 1, FIG. 2, FIG. 3
Claims:
I/We claim:
1. A system (100) for reducing bias in text, comprising:
an input interface (102) configured to receive generated text to be analyzed for biases;
a preprocessing module (104) operatively connected to said input interface, said preprocessing module configured to preprocess said received text;
an embedding generator (106) communicatively coupled to said preprocessing module (104), said embedding generator (106) configured to transform the preprocessed text into BERT embeddings;
a bias detection processor (108) including a bidirectional encoder representation from transformers (BERT) model for bias detection, wherein said bias detection processor (108) is configured to receive said BERT embeddings and to identify potential biases within the embeddings;
a qualification unit (110) configured to categorize the identified biases using said bias detection processor (108);
a strategy determination unit (112) configured to determine strategies for reducing the identified biases using said bias detection processor (108); and
a mitigation unit (114) configured to apply the determined strategies to the text to produce output text with reduced biases, wherein said mitigation unit (114) is integrated within said bias detection processor (108), and an output interface configured to return the output text with the reduced biases.
2. The system (100) of claim 1, wherein said preprocessing module (104) further comprises a conditional analysis component configured to determine whether tokenization, conversion to lowercase, and removal of punctuation from said received text are necessary before generating said BERT embeddings.
3. The system (100) of claim 1, wherein said embedding generator (106) is further configured to generate BERT embeddings only if said conditional analysis component determines that preprocessing said text is necessary.
4. The system (100) of claim 1, wherein said bias detection processor (108) is configured to perform a conditional check to proceed with bias qualification only when potential biases are detected within said BERT embeddings.
5. The system (100) of claim 1, further comprising a bias quantification module within said bias detection processor (108), the bias quantification module configured to quantify the level of bias detected in said BERT embeddings.
6. The system (100) of claim 1, wherein said strategy determination unit (112) is further configured to develop multiple mitigation strategies based on the category and quantification of the identified biases.
7. The system (100) of claim 1, wherein said mitigation unit (114) further comprises a neutralization component configured to rephrase biased phrases detected in said text.
8. The system (100) of claim 7, wherein said mitigation unit (114) further includes a rephrasing validation component configured to validate the neutrality of the rephrased text.
9. The system (100) of claim 1, wherein said output interface is further configured to provide the output text with reduced biases to a user for review before finalizing the output.
10. The system (100) of claim 1, wherein said input interface (102) is further configured to receive feedback from a user regarding the effectiveness of the bias mitigation in the output text, and to use said feedback to adjust the operations of said preprocessing module (104), said embedding generator (106), and said bias detection processor (108).
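The claimed pipeline (input interface → preprocessing module → embedding generator → bias detection processor → mitigation unit → output interface) can be illustrated with a minimal, self-contained Python sketch. This is not the claimed implementation: the BERT embedding step is replaced with a pass-through stub (a real system would invoke a BERT model here), and bias detection uses a small hypothetical lexicon of terms and neutral replacements chosen purely for illustration.

```python
import re
import string

# Hypothetical bias lexicon and neutral replacements -- illustrative only,
# not part of the claimed system, which uses a trained BERT model instead.
BIAS_REPLACEMENTS = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "mankind": "humanity",
}

def preprocess(text):
    """Preprocessing module (104): tokenize, lowercase, strip punctuation."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return text.split()

def embed(tokens):
    """Embedding generator (106): stand-in for BERT embeddings.
    A production system would run the tokens through a BERT encoder;
    this stub returns them unchanged so the sketch stays runnable."""
    return tokens

def detect_bias(embeddings):
    """Bias detection processor (108): flag items found in the lexicon."""
    return [tok for tok in embeddings if tok in BIAS_REPLACEMENTS]

def mitigate(text, flagged):
    """Mitigation unit (114): rephrase flagged terms with neutral ones."""
    for tok in flagged:
        text = re.sub(tok, BIAS_REPLACEMENTS[tok], text, flags=re.IGNORECASE)
    return text

def reduce_bias(text):
    """End-to-end pipeline: preprocess -> embed -> detect -> mitigate.
    The conditional check of claim 4 is the `if flagged` branch: mitigation
    runs only when potential biases were detected."""
    flagged = detect_bias(embed(preprocess(text)))
    return mitigate(text, flagged) if flagged else text
```

For example, `reduce_bias("The chairman praised mankind.")` yields "The chairperson praised humanity.", while text with no flagged terms is returned unchanged, mirroring the conditional flow of claims 3 and 4.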
| # | Name | Date |
|---|---|---|
| 1 | 202421033098-OTHERS [26-04-2024(online)].pdf | 2024-04-26 |
| 2 | 202421033098-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf | 2024-04-26 |
| 3 | 202421033098-FORM 1 [26-04-2024(online)].pdf | 2024-04-26 |
| 4 | 202421033098-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf | 2024-04-26 |
| 5 | 202421033098-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf | 2024-04-26 |
| 6 | 202421033098-DRAWINGS [26-04-2024(online)].pdf | 2024-04-26 |
| 7 | 202421033098-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf | 2024-04-26 |
| 8 | 202421033098-COMPLETE SPECIFICATION [26-04-2024(online)].pdf | 2024-04-26 |
| 9 | 202421033098-FORM-9 [07-05-2024(online)].pdf | 2024-05-07 |
| 10 | 202421033098-FORM 18 [08-05-2024(online)].pdf | 2024-05-08 |
| 11 | 202421033098-FORM-26 [12-05-2024(online)].pdf | 2024-05-12 |
| 12 | 202421033098-FORM 3 [13-06-2024(online)].pdf | 2024-06-13 |
| 13 | 202421033098-RELEVANT DOCUMENTS [09-10-2024(online)].pdf | 2024-10-09 |
| 14 | 202421033098-POA [09-10-2024(online)].pdf | 2024-10-09 |
| 15 | 202421033098-FORM 13 [09-10-2024(online)].pdf | 2024-10-09 |