Abstract: A system (10) to predict, summarize and recommend risks in an artificial intelligence responsible governance platform is disclosed. The core module includes a risk analysis module to analyze input prompts using a neural network with attention based artificial intelligence model. The risk analysis module further includes a domain level prediction module to predict the nature, purpose, scope and context of the input prompts, a time context prediction module to predict time related aspects, a sensitivity prediction module to recognize and filter the occurrence of sensitive information, a likelihood and severity prediction module to evaluate and grade risks and their probability, and a risk prediction module to finalize the representation of selected risks and to customize and share a predicted risk recommendation context with a user. Further, an output module is configured to visualize a summary of the identified risks and transfer the summary and recommended contextual risks to a downstream risk mitigation module. FIG. 1
DESC:EARLIEST PRIORITY DATE:
This Application claims priority from a provisional patent application filed in India having Patent Application No. 202341026003, filed on April 06, 2023, and titled “SYSTEM AND METHOD FOR INPUT RISK ANALYSIS IN AN AI GOVERNANCE PLATFORM”.
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to an artificial intelligence governance platform and, more particularly, to a system and a method to predict, summarize and recommend risks in an artificial intelligence responsible governance platform.
BACKGROUND
[0002] The evolution of the Generative Pre-trained Transformer (GPT) has led to various solutions for addressing a collection of queries and concerns. Further, GPT is widely used for several tasks such as text generation, summarization, translation, question-answering and so on. GPT exhibits a remarkable ability to understand and produce human-like text in various applications (for instance, natural language understanding, content generation and conversational Artificial Intelligence systems). Although GPT offers significant benefits and capabilities, there are several potential privacy risks associated with its use, as users tend to share sensitive information during interactions.
[0003] Often, users share sensitive or personally identifiable information (PII) as prompts for the GPT models to generate text. This raises concerns about data storage, retrieval and potential exposure. Additionally, there is a risk that the GPT models might inadvertently memorize and later reproduce this information in the generated text. For instance, if a user provides a GPT model with a prompt containing personal details such as names, addresses, phone numbers, or medical information, there is a risk that the model might incorporate this information into the generated text. Further, there is a rise in prompt attacks, where malicious inputs from users aim to exploit the GPT model’s learned knowledge, thereby posing a significant threat to user privacy.
[0004] Privacy concerns at the prompt level can lead to potential breaches of confidentiality, loss of privacy, and unauthorized disclosure of personal or sensitive information. To mitigate these risks, users should exercise caution when providing prompts to GPT models, especially when dealing with sensitive or confidential data. Additionally, developers and organizations deploying GPT models should implement robust data handling practices, including data anonymization techniques and access controls, to minimize the risk of privacy breaches. Therefore, there is a need to proactively mitigate privacy threats at the prompt level to ensure a secure and confidential user experience.
[0005] Generative AI or transformer based architectures have gained significant interest for their effectiveness in various natural language processing (NLP) tasks. However, they are poor at understanding the holistic context and domain level risk and summarizing it in relation to privacy, accountability, safety, security, fairness, explainability and reliability, making the need for Responsible AI and information exchange very acute.
[0006] Additionally, traditional language models, while powerful, often come with high computational costs and may not be optimized for specific tasks or domains.
[0007] Hence, there is a need for an improved system to predict, summarize and recommend risks in an artificial intelligence responsible governance platform to address the aforementioned issue(s).
OBJECTIVE OF THE INVENTION
[0008] An objective of the present invention is to provide a dedicated system that prioritizes user privacy by implementing robust risk assessment mechanisms, proactive prompt analysis and continuous model improvement.
[0009] Another objective of the present disclosure is to proactively mitigate privacy threats at the prompt level before feeding them to a GPT model, thereby providing a secure and trustworthy interaction environment for users.
BRIEF DESCRIPTION
[0010] In accordance with an embodiment of the present disclosure, a system to predict, summarize and recommend a plurality of contextual risks in a responsible artificial intelligence governance platform is provided. The system includes at least one processor in communication with a client processor. The system also includes at least one memory including a set of program instructions in the form of a processing subsystem, configured to be executed by the at least one processor. The processing subsystem is hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an input module configured to receive one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model. The processing subsystem includes a risk analysis module operatively coupled to the input module, wherein the risk analysis module is configured to analyze the one or more input prompts using a neural network with attention based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks. Further, the risk analysis module includes a domain level prediction module, a time context prediction module, a sensitivity prediction module, a likelihood and severity prediction module, a risk prediction module and a summarization module. The domain level prediction module is coupled to the risk analysis module, wherein the domain level prediction module is pre-trained, fine-tuned, architected and configured to predict the nature, purpose, scope and context of the said one or more input prompts.
The time context prediction module is coupled to the risk analysis module, wherein the time context prediction module is pre-trained, fine-tuned, architected and configured to predict the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information. The sensitivity prediction module is coupled to the risk analysis module, wherein the sensitivity prediction module is pre-trained, fine-tuned, architected and configured to recognize and filter the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing. The likelihood and severity prediction module is coupled to the risk analysis module, wherein the likelihood and severity prediction module is pre-trained, fine-tuned, architected and configured to evaluate and grade a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization against the severity of the predicted risk. The risk prediction module is coupled to the risk analysis module, wherein the risk prediction module is pre-trained, fine-tuned, architected and configured to finalize the representation of selected risks from a plurality of risks identified by previous modules, including domain level risks, time related risks, sensitive informational risks and the likelihood and severity of the risk. The summarization module is coupled to the risk prediction module, wherein the summarization module is pre-trained, fine-tuned, architected and configured to aggregate the finalized risks, customize them for user consumption and share a predicted risk recommendation context with a user.
Further, the processing subsystem includes an output module coupled to the risk analysis module wherein the output module is configured to visualize the summary of the above identified risk via a dashboard user interface and transfer the summary of potential risks and recommended contextual risks to a downstream risk mitigation module.
[0011] In accordance with an embodiment of the present disclosure, a method for input risk identification in an artificial intelligence responsible governance platform is provided. The method includes receiving, by an input module, one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model. The method includes analyzing, by a risk analysis module, the one or more input prompts by using a neural network with attention based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks. The method includes predicting, by a domain level prediction module, the nature, purpose, scope and context of the said one or more input prompts. The method includes predicting, by a time context prediction module, the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information. The method includes recognizing and filtering, by a sensitivity prediction module, the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing. The method includes evaluating and grading, by a likelihood and severity prediction module, a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization against the severity of the predicted risk. The method includes finalizing, by a risk prediction module, the representation of selected risks from the plurality of risks identified by previous modules, including domain level risks, time related risks, sensitive informational risks and the likelihood and severity of the risk. Further, the method includes aggregating, by a summarization module, the finalized risks, customizing them for user consumption and sharing a predicted risk recommendation context with a user.
Furthermore, the method includes visualizing, by an output module, the summary of the identified risk via a dashboard user interface. Moreover, the method includes transferring, by an output module, the summary of potential risks and recommended contextual risks to a downstream risk mitigation module.
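Read as software, the sequence of method steps above may be sketched as a staged pipeline. The stage functions and keyword heuristics below are hypothetical placeholders for the pre-trained, fine-tuned attention-based modules recited in the method, offered only to illustrate the order of operations, not the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RiskReport:
    """Accumulates findings as a prompt moves through the analysis stages."""
    prompt: str
    findings: dict = field(default_factory=dict)

# Each stage stands in for one recited module; real stages would wrap
# pre-trained, fine-tuned attention models rather than keyword rules.
def domain_stage(r):      r.findings["domain"] = "financial" if "card" in r.prompt else "general"; return r
def time_stage(r):        r.findings["time"] = "past" if "got" in r.prompt else "present"; return r
def sensitivity_stage(r): r.findings["sensitive"] = "@" in r.prompt; return r
def likelihood_stage(r):  r.findings["severity"] = "high" if r.findings["sensitive"] else "low"; return r
def risk_stage(r):        r.findings["final"] = [k for k, v in r.findings.items() if v]; return r
def summarize_stage(r):   r.findings["summary"] = f"{len(r.findings['final'])} risk signals"; return r

PIPELINE = [domain_stage, time_stage, sensitivity_stage,
            likelihood_stage, risk_stage, summarize_stage]

def analyze(prompt: str) -> RiskReport:
    report = RiskReport(prompt)
    for stage in PIPELINE:
        report = stage(report)
    return report

report = analyze("I got an email at bobmark123@gmail.com")
print(report.findings["summary"])  # prints "4 risk signals"
```

The summary produced by the final stage is what the output module would visualize on the dashboard and hand to a downstream risk mitigation module.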
[0012] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0014] FIG. 1 is a block diagram representation of a system to predict, summarize and recommend risks in an artificial intelligence responsible governance platform in accordance with an embodiment of the present disclosure;
[0015] FIG. 2 is a block diagram representation of an embodiment of the system to predict, summarize and recommend risks in an artificial intelligence responsible governance platform of FIG. 1 in accordance with an embodiment of the present disclosure;
[0016] FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure;
[0017] FIG. 4(a) illustrates a flow chart representing the steps involved in a method to predict, summarize and recommend risks in an artificial intelligence responsible governance platform in accordance with an embodiment of the present disclosure; and
[0018] FIG. 4(b) illustrates continued steps of the method to predict, summarize and recommend risks in an artificial intelligence responsible governance platform of FIG. 4(a) in accordance with an embodiment of the present disclosure.
[0019] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0020] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0021] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or subsystems or elements or structures or components preceded by “comprises... a” does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures, or additional components. Appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0022] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0023] As used herein, ‘risks’ refers to potential contextual risks or challenges associated with input prompts that are provided to an artificial intelligence responsible governance platform. Such risks can manifest in various forms, such as personal risk, financial risk, medical risk, cardinal risk, vehicle risk, consumable risk, language risk, education risk, event risk, organizational risk, security risk, legal risk, social risk, confidential risk, Not Safe for Work risk, weapon risk, technological risk, date-time risk, regulatory risk and location risk. The system and method disclosed herein aim to predict, summarize and recommend the said risks.
[0024] Embodiments of the present disclosure relate to a system and method to predict, summarize and recommend risks in an artificial intelligence responsible governance platform. The system includes at least one processor in communication with a client processor. The system also includes at least one memory including a set of program instructions in the form of a processing subsystem, configured to be executed by the at least one processor. The processing subsystem is hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an input module configured to receive one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model. The processing subsystem includes a risk analysis module operatively coupled to the input module, wherein the risk analysis module is configured to analyze the one or more input prompts using a neural network with attention based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks. Further, the risk analysis module includes a domain level prediction module, a time context prediction module, a sensitivity prediction module, a likelihood and severity prediction module, a risk prediction module and a summarization module. The domain level prediction module is coupled to the risk analysis module, wherein the domain level prediction module is pre-trained, fine-tuned, architected and configured to predict the nature, purpose, scope and context of the said one or more input prompts. The time context prediction module is coupled to the risk analysis module, wherein the time context prediction module is pre-trained, fine-tuned, architected and configured to predict the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information.
The sensitivity prediction module is coupled to the risk analysis module, wherein the sensitivity prediction module is pre-trained, fine-tuned, architected and configured to recognize and filter the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing. The likelihood and severity prediction module is coupled to the risk analysis module, wherein the likelihood and severity prediction module is pre-trained, fine-tuned, architected and configured to evaluate and grade a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization against the severity of the predicted risk. The risk prediction module is coupled to the risk analysis module, wherein the risk prediction module is pre-trained, fine-tuned, architected and configured to finalize the representation of selected risks from a plurality of risks identified by previous modules, including domain level risks, time related risks, sensitive informational risks and the likelihood and severity of the risk. The summarization module is coupled to the risk prediction module, wherein the summarization module is pre-trained, fine-tuned, architected and configured to aggregate the finalized risks, customize them for user consumption and share a predicted risk recommendation context with a user. Further, the processing subsystem includes an output module coupled to the risk analysis module, wherein the output module is configured to visualize the summary of the above identified risks via a dashboard user interface and transfer the summary of potential risks and recommended contextual risks to a downstream risk mitigation module. The system and method to predict, summarize and recommend risks in an artificial intelligence responsible governance platform are further described in detail in the following figure descriptions.
[0025] FIG. 1 is a block diagram representation of a system to predict, summarize and recommend risks in an artificial intelligence responsible governance platform in accordance with an embodiment of the present disclosure. The system (10) includes at least one processor (20) in communication with a client processor (30). The processor (20) generally refers to a computational unit or central processing unit (CPU) responsible for executing instructions in a computer system. The phrase “in communication with a client processor” implies that there is a relationship or interaction between at least one processor and a specific type of processor referred to as a “client processor.” Here, the term “client processor” refers to a processor that initiates requests or tasks and interacts with another processor (which may be a server processor) to fulfil those requests.
[0026] In one embodiment, the processor (20) may be a CPU or a Graphics Processing Unit (GPU) or a combination of both. Typically, Large Language Models are processor hungry, mandatorily requiring GPUs to be present. Specifically, an efficient neural network architecture with attention for domain level risk prediction, summarization and recommendation of classified risk is disclosed, which in an embodiment may be a small language model that can run on a CPU, a GPU or a hybrid processing architecture, adding a high level of efficiency to the system (10).
[0027] The present invention discloses the neural network with attention modules tailored for control of prompts or information flow to predict, summarize and recommend potential risks at a holistic domain level in the shared data, information flow or prompts for responsible AI deployment and usage. The neural network with attention mechanism based AI system for domain level risk prediction, summarization and recommendation of classified risk acts as a key decision making component in an AI proxy or firewall between users and the downstream generative AI models and data exchanges by governing the bidirectional information flow, including prompts, responses and information exchange. Thus, the system provides a safe, secure and responsible AI experience and information exchange to an ecosystem.
[0028] Further, the present disclosure includes usage of small language models (SLM) with pre-training and fine-tuning capabilities, enabling new possibilities for specialized applications in various fields at lower cost. A small language model may have fewer parameters, fewer layers, smaller layer sizes and fewer attention heads, and hence a lower compute requirement, making it more cost effective, sustainable and environment friendly for the task of domain level risk prediction, summarization and recommendation of classified risk in prompts, responses and information flow. The neural network with attention mechanism used in one embodiment may be a small language model, which is an efficient architecture compared to the popular large language model. The steps involved may be selection of a suitable small language model, pre-training on domain specific data, fine-tuning for the specialized task, and customization and optimization for the specific purpose of domain level risk prediction, summarization and recommendation of classified risk in prompts, responses and information flow, enabling efficient and effective solutions tailored to the unique requirements of this task. Prior to fine-tuning for domain level risk prediction, summarization and recommendation of classified risk in prompts, responses and information flow, the selected small language model undergoes pre-training on domain-specific datasets for this task.
This pre-training phase enables the model to learn the domain-specific features, vocabulary and contextual understanding relevant to domain level risk prediction, summarization and recommendation of classified risk, enhancing its performance and adaptability to the task. Following pre-training, the small language model is fine-tuned using datasets of various types annotated with domain level risks. Fine-tuning involves adjusting the model's parameters and updating its weights to optimize performance in understanding the holistic context and domain level risk and summarizing it. By leveraging the knowledge gained during pre-training, the model can quickly adapt to the nuances of the task and achieve superior performance with limited data. The disclosed system includes customization and optimization of the fine-tuning process for the selected small language model to suit these requirements. This may involve adjusting hyperparameters, selecting appropriate training strategies for various sub-modules with various multi-heads of attention, or incorporating domain-specific constraints to enhance the model's efficacy and efficiency in achieving effective domain level risk prediction, summarization and recommendation of classified risk in prompts, responses and information flow.
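As a hedged illustration of the attention mechanism such a small language model relies on, the sketch below computes a single scaled dot-product attention head over toy prompt embeddings. All sizes and weight matrices here are invented for the example and do not reflect the disclosed model's actual architecture or trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, Wq, Wk, Wv):
    """Single scaled dot-product attention head over token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # token-to-token relevance
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8           # toy sizes for a "small" model
X = rng.normal(size=(seq_len, d_model))       # embeddings of a 6-token prompt
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
context, weights = attention_head(X, Wq, Wk, Wv)
# A small classification head could then map pooled context vectors to
# risk-domain logits (personal, financial, medical, ...), which is the
# role pre-training and fine-tuning would specialize.
```

A small language model stacks a handful of such heads and layers, which is why its compute requirement stays modest compared to a large language model.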
[0029] The system (10) also includes at least one memory (40) including a set of program instructions in the form of a processing subsystem (50), configured to be executed by the at least one processor. The processing subsystem (50) is hosted on a server (55) and configured to execute on a network (not shown in FIG. 1) to control bidirectional communications among a plurality of modules. As used herein, the memory (40) is a storage component within the system used for storing data and instructions that can be accessed by the processor. It executes a sequence of commands or directions written in a programming language that can be executed by a computer. In one embodiment, the server (55) may include a cloud server. In another embodiment, the server (55) may include a local server. The processing subsystem (50) is configured to execute on the network to control bidirectional communications among a plurality of modules. In one embodiment, the network may include a wired network such as a local area network (LAN). In another embodiment, the network may include a wireless network such as Wi-Fi, Bluetooth, Zigbee, near field communication (NFC), radio frequency identification (RFID), infra-red communication or the like.
The processing subsystem (50) includes an input module (60) configured to receive one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model. Typically, the input module (60) receives the one or more input prompts from one or more users, frontend applications or through an application programming interface (API). In one embodiment, the one or more input prompts are received from the user via a prompt interface. The prompt interface is configured on a user device (not shown in FIG. 1) and is configured to display a field that receives the one or more user prompts from the user via a graphical user interface of the user device. Examples of the user device include, but are not limited to, a personal computer (PC), a mobile phone, a tablet device, a personal digital assistant (PDA), a smart phone, a laptop, and pagers. Further, in one embodiment, the prompt interface may include a chatbot, a dashboard, a virtual world, or the like. Specifically, the one or more input prompts are requests or cues for information, queries, or actions that users or external entities provide to the system. The requests or cues may be in the form of text, speech, images, questions, commands, statements, code snippets or topics for discussion. The form of the requests or cues typically depends on the capabilities of the AI model and the prompt interface. Further, the one or more user prompts are essential to guide the AI model to tailor responses corresponding to the one or more users’ needs or interests. In other words, the one or more input prompts are those that anticipate interactions (a response or an action) with an underlying artificial intelligence (AI) model.
Moreover, the AI model refers to the internal or external foundational AI model. As used herein, the internal foundational AI model refers to the core framework on which the AI model is built. For instance, in the context of OpenAI’s language models like the Generative Pre-trained Transformer (GPT), the model itself is considered the internal AI model configured to understand and generate human-like text. Likewise, as used herein, the external foundational AI model refers to a pre-trained model that serves as a foundational building block for other AI models. For instance, OpenAI’s GPT models can be considered external foundational AI models. Specifically, it must be noted that the external foundational AI models are pre-trained on large datasets and fine-tuned for various specific applications such as language translation, text summarization, question answering and the like. It must be noted that the ‘pre-trained’ part indicates that the model is initially trained on a large corpus of text data before being fine-tuned for specific tasks. This pre-training allows the model to learn general patterns of language use and syntax, enabling it to understand and generate text in a wide variety of contexts.
Further, the one or more user prompts may also be received through information flow anticipating interaction with the internal or external foundational artificial intelligence model. This information flow refers to the exchange of data, commands or queries between the user and the AI model. It will be appreciated by those skilled in the art that the information flows between several layers or modules within the AI model.
[0030] The processing subsystem (50) includes a risk analysis module (70) operatively coupled to the input module (60), wherein the risk analysis module (70) is configured to analyze the one or more input prompts using a neural network with attention based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks. As used herein, ‘risks’ refers to potential issues or challenges that can arise during the interaction between the one or more users and the AI model. The plurality of risks includes, but is not limited to, the following:
1. Personal risk: refers to potential negative consequences that the one or more users may encounter. The negative consequences can manifest in several ways: for instance, the one or more users may inadvertently disclose sensitive personal information or data through their prompts, leading to privacy breaches; personal identifiers or credentials could lead to identity theft; sensitive or controversial topics can lead to responses that harm an individual’s reputation or public image if shared without consent; and responses from the AI model may cause potential psychological harm to the one or more users. For example, email addresses, phone numbers, unique identifiers and names of individuals are considered personally identifiable information (PII) due to their direct association with a user’s online presence. Further, an email address serves as a unique identifier for a specific user and can be linked to various online accounts, communication channels and digital activities. Specifically, consider an input prompt that reads as follows:
“I got an email confirming an offer letter for a job at bobmark123@gmail.com from a reputed company. I want to reject the offer politely, assist me.”
In such a scenario, the potential risk is the mention of the email id (bobmark123@gmail.com). Subsequently, this risk is tagged under the personal risk category.
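The email-address tagging illustrated in this scenario can be approximated with a simple pattern-matching sketch. The regular expressions and category names below are illustrative assumptions; the disclosed system uses a pre-trained sensitivity prediction module rather than fixed rules.

```python
import re

# Illustrative patterns only; a production system would rely on trained
# NER/sensitivity models rather than hand-written regular expressions.
PII_PATTERNS = {
    "personal": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                 # email addresses
    "personal_phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), # phone-like numbers
}

def tag_personal_risk(prompt: str) -> list:
    """Return (category, matched_text) pairs found in an input prompt."""
    findings = []
    for category, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(prompt):
            findings.append((category, match.group()))
    return findings

prompt = ("I got an email confirming an offer letter for a job at "
          "bobmark123@gmail.com from a reputed company.")
print(tag_personal_risk(prompt))
# → [('personal', 'bobmark123@gmail.com')]
```

The matched span is what would be tagged under the personal risk category and passed to the likelihood and severity prediction stage.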
2. Financial risk: refers to user prompts that relate to financial transactions or investments and can lead to financial losses. For instance, the Card Verification Value (CVV) is a form of personally identifiable information (PII) associated with credit and debit cards, and it adds an extra layer of security during online transactions. Additional examples include bank account numbers, card numbers, currency information and Unified Payments Interface (UPI) details. Therefore, protecting the CVV is crucial to prevent unauthorized access and financial risks. The system understands credit card details such as the CVV, credit card number and expiry date and tags them under financial risk.
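One plausible way to flag candidate card numbers of the kind described, sketched here as an assumption rather than the disclosed method, is to combine a digit-sequence pattern with the standard Luhn checksum used by payment-card numbers:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to spot candidate payment-card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:           # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return len(digits) >= 13 and checksum % 10 == 0

def tag_financial_risk(prompt: str) -> list:
    """Flag 13-19 digit sequences that pass the Luhn check as financial risk."""
    candidates = re.findall(r"\b(?:\d[ -]?){13,19}\b", prompt)
    return [c for c in candidates if luhn_valid(c)]

# 4111 1111 1111 1111 is the well-known Visa test number.
print(tag_financial_risk("Pay with card 4111 1111 1111 1111, CVV 123."))
# → ['4111 1111 1111 1111']
```

Filtering on the checksum keeps arbitrary long numbers (order IDs, timestamps) from being tagged as card numbers.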
3. Medical risk: refers to user prompts that pertain to potential harm or adverse outcomes of health and well-being. The user prompts may include sensitive medical data that could pose privacy risks. In such a scenario, the one or more users can encounter misdiagnosis, inappropriate treatment and delay in seeking medical attention. Further, biological information, such as DNA or genetic data, is considered sensitive personally identifiable information (PII). Protecting this information is crucial to maintain privacy, prevent misuse, and mitigate potential risks related to health or identity. Additionally, chemical information is considered as sensitive especially in research or industrial contexts.
4. Cardinal risk: refers to the fundamental or critical risks that can cause harm to the one or more users and the AI model. Cardinal information, which may include significant numbers or values, can be sensitive in various contexts.
5. Vehicle risk: refers to potential hazards or safety concerns associated with interactions deployed in vehicles or autonomous driving technologies. Typically, the autonomous driving technologies refers to technologies that enable vehicles to navigate without the intervention of humans. Information related to vehicles, such as make, model, or transportation details is essential for various purposes, including transportation services, logistics, or general discussions about vehicles. The details about vehicles are crucial for making informed decisions about transportation, and they contribute to discussions on environmental impact, safety, and efficiency.
6. Consumable risk: refers to the nature of the user prompts that has the potential to expose the one or more users to negative consequences. For instance, information related to consumable products, such as food or beverages.
7. Language risk: refers to the use of language in the user prompts. At times, the language used can be unclear, complex, lacking in context, biased, or contain stereotypes, offensive words, legal matters and the like. For instance, consider an input prompt “Is German Language an easy language to learn”. The use of “German Language” can lead to potential cultural or communication risks.
8. Education risk: refers to potential negative consequences or challenges that the one or more users may encounter while seeking for education content from the AI model. Educational information, including academic records and achievements, is generally not considered personally identifiable information (PII) in all contexts.
9. Event risk: refers to risks that arise from various factors related to the nature of the events. The events may include emergency situations, financial transactions, natural disasters, health and safety concerns and the like. Information related to events, such as event details, schedules and participants, can still be sensitive, especially when organizing or participating in significant events.
10. Organizational risk: refers to the risks that an organization may encounter as a result of the user prompts. These risks may arise from various factors pertaining to the organization’s culture and operations. Further, such user prompts may tarnish the organization’s reputation and credibility.
11. Security risk: refers to the risks that expose the system to vulnerabilities or threats. This may compromise the system’s security, integrity and confidentiality. For instance, a user prompt that contains confidential information may be leaked to unauthorized parties. Further, IP addresses are considered PII as they can be linked to specific devices and in turn individuals. Protecting IP addresses is crucial to maintain online privacy, prevent tracking, and mitigate potential risks associated with unauthorized access. Likewise, network addresses, credential information and website URLs fall under this category.
12. Legal risk: refers to risks pertaining to potential legal liabilities, regulatory violations or compliance issues.
13. Social risk: refers to the risks that can impact individuals, groups, communities or society in terms of their social dynamics, relationships, norms and values. For example, cyberbullying. Ethnicity, race, faith, and religion are all considered personally identifiable information (PII), representing cultural, racial, and religious backgrounds. Safeguarding this data is essential to respect privacy, prevent discrimination, and minimize potential social risks associated with sharing such personal details.
14. Confidential risk: refers to the risks that expose sensitive and confidential information, thereby compromising the confidentiality, privacy or security of data. The data includes, but is not limited to, financial data, trade secrets, medical records and proprietary business information.
15. Not Safe For Work (NSFW) risk: refers to the risk that generates content that is inappropriate, offensive or unsuitable for viewing in public or for certain audiences.
16. Weapon risk: This type of risk is encountered when the user prompts may lead to generation of content related to the use of weapons and even the production, sale or distribution of the weapons. Examples of weapons include, but are not limited to, firearms and explosives.
17. Technological risk: refers to the risks pertaining to the technological components of the AI model. For example, user prompts may trigger technical glitches, errors and system failures.
18. Date-time risk: refers to the risks encountered as a result of time-sensitive data, temporal dependencies or historical trends in user prompts.
19. Location risk: refers to the risks encountered when user prompts contain location-based information, for instance, geographic coordinates, addresses or points of interest. For example, "Send me the package to my flat near the Eco Park" is a location-based request that facilitates the delivery of a package. While not PII, sharing accurate location details is essential for effective communication and service delivery. Additionally, information related to transit locations, such as highways or transportation hubs may be considered as risks. For instance, "The National Highway 17 has made inter-state travel a breeze" refers to a transit location and highlights the convenience of a specific highway for travel between states.
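The taxonomy above lends itself to a simple illustration. The sketch below is purely illustrative and is not the disclosed neural implementation: it tags an email address as a personal risk (as in the offer-letter example of category 1) and maps keyword hits to a few of the other categories. The regular expression, keyword lists and field names are all assumptions for demonstration.

```python
import re

# Illustrative only: the disclosure uses a pre-trained neural model, not rules.
# The regex and keyword patterns below are placeholder assumptions.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

RISK_KEYWORDS = {
    "financial": ["cvv", "credit card", "upi", "bank account"],
    "medical": ["diagnosis", "genetic", "symptom"],
    "security": ["password", "ip address", "credential"],
}

def tag_risks(prompt: str) -> list[dict]:
    """Return risk tags: positional tags for detected PII, category tags for keywords."""
    tags = [
        {"category": "personal", "entity": "email",
         "value": m.group(), "span": (m.start(), m.end())}
        for m in EMAIL_RE.finditer(prompt)
    ]
    text = prompt.lower()
    tags += [{"category": cat} for cat, words in RISK_KEYWORDS.items()
             if any(w in text for w in words)]
    return tags

tags = tag_risks("I got an offer at bobmark123@gmail.com; my card CVV is 123")
```

Here the email address is tagged under the personal risk category with its character span, and the mention of a CVV triggers the financial category, mirroring the examples given above.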
[0031] Therefore, it is essential to first analyze the one or more input prompts. There are several reasons that lead to the presence of the ‘risk’. Some of the reasons include poorly formed user prompts, involving sensitive or harmful topics leading to unsafe response from the AI model, presence of personal or sensitive information leading to privacy violations, adversarial inputs that can extract sensitive information from the system and so on.
[0032] The risk analysis module (70) further includes a domain level prediction module (80), a time context prediction module (90), a sensitivity prediction module (100), a likelihood and severity prediction module (110), a risk prediction module (120) and a summarization module (130).
[0033] The domain level prediction module (80) is coupled to the risk analysis module (70), wherein the domain level prediction module (80) is pre-trained, fine-tuned, architected and configured to predict the nature, purpose, scope and context of the said one or more input prompts. As used herein, the phrase “pre-trained, fine-tuned and architected” refers to various stages pertaining to the development and deployment of the AI model. ‘Pre-trained’ signifies that the AI model has been initially trained using a large set of training data to understand/ analyze patterns, features or representations in input data. Typically, the dataset encompasses a diverse range of data. It must be noted that the pre-trained stage allows the AI model to be further customized for specific tasks. ‘Fine-tuned’ refers to the stage or process of further training the said pre-trained model based on a task-specific or domain-specific dataset. This allows the AI model to leverage the understanding/ analysis that was performed during the pre-training stage to meet the task’s requirements. Fine-tuning also helps to improve the performance and accuracy of the pre-trained models. Similarly, ‘architected’ signifies that the AI model’s architecture has been engineered to meet the requirements of the task. The ‘architecture’ refers to the structure and connectivity of the layers pertaining to the AI model. Consequently, this helps to optimize performance and scalability for the task.
[0034] The nature, purpose, scope and context of the one or more input prompts aid in understanding how to optimize the AI model. Further, the nature, purpose, scope and context of the said one or more input prompts are predicted by dedicated modules that are pre-trained, fine-tuned and architected. The said modules are further discussed in FIG. 2.
[0035] In one embodiment, the domain level prediction module (80) is a part of a neural network based foundational module running on the server (55) to help predict the domain based on the nature, scope, context and purpose of the prompt. The neural network based foundational module is configured with neural network architectures and acts as the backbone of the AI model. As used herein, the neural network is a computational model inspired by the structure and function of the human brain, composed of interconnected nodes, or neurons that are organized into layers. Specifically, the neural network based foundational module is accountable to perform critical tasks such as data processing.
[0036] The time context prediction module (90) is coupled to the risk analysis module (70), wherein the time context prediction module is pre-trained, fine-tuned, architected and configured to predict the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information.
[0037] In one embodiment, the time context prediction module (90) is pre-trained, fine-tuned, architected and configured to determine the time related aspects from grammatical tense, its impact on verbs, any time related attributes and other linguistic references related to time based on which different types of time related risks are identified and shared.
[0038] In another embodiment, the time context prediction module (90) is pre-trained, fine-tuned, architected and configured to determine the time related aspects from dates and contextual information. The dates may refer to occurrences of time-stamped observations or events captured over a period of time. Likewise, the contextual information considers the temporal context associated with the input. Examples of such context include the time of the day, day of the week or recent historical events.
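As a hedged illustration of the time related aspects described above (explicit dates plus grammatical tense markers), the following sketch uses placeholder regular expressions and marker words; the disclosed module is a pre-trained neural component, not a rule set, so every pattern here is an assumption.

```python
import re

# Placeholder patterns -- the real module is a fine-tuned neural model.
DATE_RE = re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b")
PAST_MARKERS = ("yesterday", "last week", "ago")
FUTURE_MARKERS = ("tomorrow", "next week", "will ")

def time_aspects(prompt: str) -> dict:
    """Extract explicit dates and a crude tense classification from a prompt."""
    text = prompt.lower()
    return {
        "dates": DATE_RE.findall(prompt),
        "tense": ("past" if any(m in text for m in PAST_MARKERS)
                  else "future" if any(m in text for m in FUTURE_MARKERS)
                  else "present"),
    }

aspects = time_aspects("The payment will be made on 04/06/2023")
```

Both signals feed the time related risk identification: the explicit date is a time-stamped event, while the future-tense marker indicates a temporal dependency in the prompt.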
[0039] The sensitivity prediction module (100) is coupled to the risk analysis module (70), wherein the sensitivity prediction module (100) is pre-trained, fine-tuned, architected and configured to recognize and filter the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing. As discussed earlier, the sensitive information may include personally identifiable information (PII), financial data, health records, trade secrets and intellectual property. Further, PII may include names, addresses, social security numbers, credit card numbers or medical records.
[0040] In one embodiment, the sensitivity prediction module (100) is configured to generate a probabilistic risk score based on the nature of the one or more input prompts thereby ensuring that personal, financial, medical and confidential information are recognized as a potential risk. The probabilistic risk score is a numerical value assigned to the one or more input prompts to notify the likelihood of a specific risk. In one embodiment, the probabilistic risk score is expressed as a percentage or a decimal value between 0 and 1. Higher scores indicate a higher chance of risk occurrence whereas lower scores indicate a lower chance of the risk occurrence.
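One plausible way to form such a probabilistic risk score in the range 0 to 1 is a noisy-OR combination of per-category detection probabilities. This aggregation rule is an assumption for illustration, not the disclosed method, which does not specify how the score is computed.

```python
# Hedged sketch: combine per-category detector probabilities into a single
# probabilistic risk score in [0, 1] via a noisy-OR (assumed aggregation rule).
def risk_score(category_probs: dict[str, float]) -> float:
    """P(at least one risk occurs), assuming independent category detections."""
    no_risk = 1.0
    for p in category_probs.values():
        no_risk *= (1.0 - p)
    return round(1.0 - no_risk, 4)

# Example: strong personal-risk signal, moderate financial signal.
score = risk_score({"personal": 0.9, "financial": 0.5, "medical": 0.0})
```

Consistent with the paragraph above, a prompt with any strongly detected sensitive category yields a score near 1, while a prompt with no detections yields 0.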
[0041] The likelihood and severity prediction module (110) is coupled to the risk analysis module (70), wherein the likelihood and severity prediction module (110) is pre-trained, fine-tuned, architected and configured to evaluate and grade a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization in relation to the severity of the predicted risk.
[0042] In one embodiment, the likelihood and severity prediction module (110) further comprises a severity analysis module (190) pre-trained, fine-tuned, architected and configured to determine the potential impact and severity of the plurality of risks to evaluate and grade a plurality of risks such as privacy, bias, confidentiality, user safety, sensitive information leakage, domain level, contextual and time based risks and their corresponding probability or likelihood of risk realization in relation to the severity of the predicted risk. For instance, in a healthcare environment, the severity is based on the medical history of a patient, symptoms and treatment interventions.
[0043] The risk prediction module (120) is coupled to the risk analysis module (70), wherein the risk prediction module (120) is pre-trained, fine-tuned, architected and configured to finalize representation of select risks from a plurality of risks identified from previous modules including domain level risks, time related risks, sensitive informational risk and likelihood and severity of the risk.
[0044] In one embodiment, the risk analysis module (70) is configured with Multi-Head attention modules using task specific heads designed for transfer learning and domain specific tasks with fine-tuning to provide the identified potential risks as a training data to refine the neural network constantly to identify potential risks. The Multi-Head attention modules are typically used in Natural Language Processing (NLP) to learn the structural similarity condition of the input.
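For readers unfamiliar with Multi-Head attention, the following numpy sketch shows only the shape mechanics of multi-head attention followed by a task-specific head. The dimensions, the 19-way output (one logit per risk category listed above) and the random weights are purely illustrative; the patented model's actual architecture and parameters are not disclosed.

```python
import numpy as np

# Illustrative shapes only -- not the disclosed model.
rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 6, 16, 4
d_head = d_model // n_heads

x = rng.normal(size=(seq_len, d_model))            # token embeddings
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def heads(m):
    # split (seq_len, d_model) into (n_heads, seq_len, d_head)
    return m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

q, k, v = heads(x @ Wq), heads(x @ Wk), heads(x @ Wv)
attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))   # per-head weights
ctx = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model) @ Wo

# Hypothetical task-specific risk head: mean-pool, then project to one logit
# per risk category (19 categories enumerated earlier in the disclosure).
W_risk = rng.normal(size=(d_model, 19))
risk_logits = ctx.mean(axis=0) @ W_risk
```

In transfer learning, the attention backbone would be pre-trained and the small `W_risk` head fine-tuned per task, which matches the "task specific heads" language of the paragraph above.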
[0045] The summarization module (130) is coupled to the risk prediction module (120), wherein the summarization module (130) is pre-trained, fine-tuned, architected and configured to aggregate the finalized risks, customize them for user consumption and share a predicted risk recommendation context with a user.
[0046] In one embodiment, the summarization module (130) is pre-trained, fine-tuned, architected and configured to aggregate the finalized risk recommendations in the form of tags with positional annotations and probabilistic scoring, which can be shared with different kinds of downstream services. As used herein, downstream services refer to applications or systems that use the output generated by upstream components (for instance, machine learning models or data processing models). This output is used by the downstream service and is finally presented to the end-user. Further, the output may include transformed data, model predictions, recommendations or insights obtained from processing performed by the upstream components. Typically, the downstream services are essential for AI-driven insights.
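A minimal sketch of such an aggregated output, i.e. tags with positional annotations and probabilistic scores serialized as JSON for downstream services, might look as follows. The field names and payload layout are assumptions; the disclosure does not specify a schema.

```python
import json

# Hypothetical payload format: category tags with character positions and
# probabilistic scores, serialized for downstream services.
def summarize(prompt: str, findings: list[tuple[str, int, int, float]]) -> str:
    payload = {
        "prompt_length": len(prompt),
        "tags": [
            {"category": cat, "start": s, "end": e, "score": p}
            for cat, s, e, p in findings
        ],
    }
    return json.dumps(payload)

# Example: a financial tag covering the characters "123" with score 0.92.
summary = summarize("My CVV is 123", [("financial", 10, 13, 0.92)])
```

A downstream mitigation service could parse this JSON and, for example, redact the spans whose score exceeds a configured threshold before the prompt reaches the foundational model.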
[0047] Further, the processing subsystem (50) includes an output module (140) coupled to the risk analysis module (70) wherein the output module (140) is configured to visualize the summary of the above identified risk via a dashboard user interface and transfer the summary of potential risks and recommended contextual risks to a downstream risk mitigation module. Typically, the dashboard user interface is a graphical user interface (GUI) that is commonly used in software applications, websites and information systems to provide users with a comprehensive overview of the data and insights. Dashboard user interface includes various types of visualizations for instance, charts, graphs, tables, maps, widgets and the like. It also supports streaming capabilities.
[0048] In one embodiment, the output module (140) is configured to visualize the summary of the above identified risk via a dashboard user interface with various suitable or configurable colour coding for different types of risk summarization, heat maps for probabilistic recommendation and JavaScript Object Notation (JSON) based integration with downstream services like User Interface (UI) systems or Language Model Operations (LLMOps) or Machine Learning Operations (MLOps) or DevOps pipeline. The colour coding is a powerful tool that is used to enhance the effectiveness of the dashboard interfaces. Different colors are used to convey information.
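By way of illustration only, a configurable colour coding could map probabilistic risk scores to dashboard colours as below; the thresholds and palette are assumptions, since the disclosure states only that colour coding is configurable without specifying values.

```python
# Assumed severity bands and hex palette -- both are configurable placeholders,
# not values from the disclosure.
RISK_COLOURS = {"low": "#2e7d32", "medium": "#f9a825", "high": "#c62828"}

def colour_for(score: float) -> str:
    """Map a probabilistic risk score in [0, 1] to a dashboard colour."""
    if score >= 0.7:
        return RISK_COLOURS["high"]
    if score >= 0.4:
        return RISK_COLOURS["medium"]
    return RISK_COLOURS["low"]
```

Such a mapping would drive both the per-risk colour coding and the heat maps mentioned above, with higher-probability risks rendered in warmer colours.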
[0049] FIG. 2 is a block diagram representation of an embodiment of the system to predict, summarize and recommend risks in an artificial intelligence responsible governance platform of FIG. 1 in accordance with an embodiment of the present disclosure. The domain level prediction module (80) further comprises a nature analysis module (150), a scope analysis module (160), a context analysis module (170) and a purpose analysis module (180).
[0050] The nature analysis module (150) is coupled to the domain level prediction module (80), wherein the nature analysis module (150) is pre-trained, fine-tuned, architected and configured to determine the characteristics and inherent qualities of the input prompts and the plurality of risks associated with them based on the identified characteristics and qualities. This includes the format of the input prompts, the complexity of the user prompts, the tone used in the user prompts and any metadata (such as timestamps) in the user prompts. The same is applicable to the plurality of risks.
[0051] The scope analysis module (160) is coupled to the domain level prediction module (80), wherein the scope analysis module (160) is pre-trained, fine-tuned, architected and configured to understand the scope limitation requirements of the input and to determine the potential scope violations and consequences associated with the plurality of risks. The scope defines the range or extent of the information contained in the input. In other words, it defines the depth of the information or details addressed in the input. The same is applicable to the plurality of risks.
[0052] The context analysis module (170) is coupled to the domain level prediction module (80), wherein the context analysis module (170) is pre-trained, fine-tuned, architected and configured to determine the specific circumstances and scenarios of usage of the input and the plurality of risks related to using the information outside the approved context. The context refers to the situation, environment and social factors that influence the user while interacting with the AI model. For instance, the context (time, location, user device used and so on) in which the user prompt was made, the user’s background and any external stimuli or cues that may affect the input.
[0053] The purpose analysis module (180) is coupled to the domain level prediction module (80) wherein the purpose analysis module (180) is pre-trained, fine-tuned, architected and configured to identify potential intended usage and potential for misuse associated with the input and related plurality of risks. In other words, this refers to the purpose of the input for the AI model, the desired response and any potential constraints that may influence the input.
[0054] Further, the likelihood and severity prediction module (110) further comprises a severity analysis module (190) that is pre-trained, fine-tuned and architected. The severity analysis module (190) is configured to determine the potential impact and severity of the plurality of risks to evaluate and grade a plurality of risks such as privacy, bias, confidentiality, user safety, sensitive information leakage, domain level, contextual and time based risks and their corresponding probability or likelihood of risk realization in relation to the severity of the predicted risk.
[0055] Consider a real-time scenario where an artificial intelligence governance platform is deployed in a dynamic business environment. The platform incorporates a sophisticated system for analyzing input prompts to predict, summarize and recommend a plurality of contextual risks in the artificial intelligence responsible governance platform. The business environment witnesses several interactions of its employees with an internal or external foundational artificial intelligence model. The interactions are user prompts/ input prompts that are susceptible to a plurality of contextual risks. The system aims at analyzing the potential contextual risks before the input prompts are fed into the internal or external foundational artificial intelligence model. The input module (60) receives the input prompts from a user or information flow anticipating interaction with the internal or external foundational artificial intelligence model. Subsequently, the input prompts are analyzed in multiple ways by the risk analysis module (70). Primarily, the nature, purpose, scope and context of the input prompts are predicted by a domain level prediction module (80). Additionally, the time context prediction module (90) predicts the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information. The sensitivity prediction module (100) recognizes and filters the occurrence of a plurality of sensitive information in the input prompts using Natural Language Processing. Further, the likelihood and severity prediction module (110) is responsible for evaluating and grading a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization in relation to the severity of the predicted risk.
At this time, the input prompts are analyzed and subsequently the risk prediction module (120) finalizes a representation of selected risks from a plurality of risks identified from previous modules including domain level risks, time related risks, sensitive informational risk and likelihood and severity of the risk. The summarization module (130) aggregates the finalized risks, customizes them for user consumption and shares a predicted risk recommendation context with the user. The user can visualize the summary of the above identified risks via a dashboard user interface provided by the output module (140). In this embodiment, the real-time risk analysis system serves as a guardian, proactively predicting, summarizing and recommending potential risks within the dynamic realm of artificial intelligence, providing the business environment with a comprehensive and adaptive governance solution.
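The scenario above can be sketched as a simple pipeline: input, risk analysis, summarization, output. In the hedged stub below, each function stands in for a pre-trained neural module; the rule-based checks and all names are assumptions for illustration, not the disclosed implementation.

```python
# Placeholder pipeline: each stage here is a stub for a pre-trained module
# (risk analysis module 70, summarization module 130, output module 140).
def analyze(prompt: str) -> dict:
    """Stub risk analysis: flag an email-like token and CVV mentions."""
    risks = []
    if "@" in prompt:
        risks.append({"category": "personal", "score": 0.9})
    if "cvv" in prompt.lower():
        risks.append({"category": "financial", "score": 0.95})
    return {"prompt": prompt, "risks": risks}

def summarize_risks(analysis: dict) -> str:
    """Stub summarization: a human-readable recommendation context."""
    cats = sorted(r["category"] for r in analysis["risks"])
    if not cats:
        return "no risks detected"
    return f"{len(cats)} risk(s) detected: {', '.join(cats)}"

report = summarize_risks(analyze("Email me at bob@example.com, CVV 123"))
```

The resulting summary string stands in for the dashboard view; a real deployment would instead emit the tagged JSON payload described in paragraph [0046] to the downstream risk mitigation module.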
[0056] FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure. The server (300) includes processor(s) (330), and memory (310) operatively coupled to the bus (320). The processor(s) (330), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0057] The memory (310) includes several subsystems stored in the form of executable programs which instruct the processor (330) to perform the method steps illustrated in FIG. 1. The memory (310) includes the processing subsystem (50) of FIG. 1. The processing subsystem (50) further has the following modules: an input module (60), a risk analysis module (70) and an output module (140). Further, the risk analysis module (70) includes a domain level prediction module (80), a time context prediction module (90), a sensitivity prediction module (100), a likelihood and severity prediction module (110), a risk prediction module (120) and a summarization module (130).
[0058] The input module (60) is configured to receive one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model. The risk analysis module (70) is operatively coupled to the input module (60) wherein the risk analysis module (70) is configured to analyze the one or more input prompts using a Neural network with attention based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks. Further, the risk analysis module (70) includes the domain level prediction module (80), the time context prediction module (90), the sensitivity prediction module (100), the likelihood and severity prediction module (110), the risk prediction module (120) and the summarization module (130). The domain level prediction module (80) is coupled to the risk analysis module (70), wherein the domain level prediction module (80) is pre-trained, fine-tuned, architected and configured to predict the nature, purpose, scope and context of the said one or more input prompts. The time context prediction module (90) is coupled to the risk analysis module (70), wherein the time context prediction module (90) is pre-trained, fine-tuned, architected and configured to predict the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information. The sensitivity prediction module (100) is coupled to the risk analysis module (70) wherein the sensitivity prediction module is pre-trained, fine-tuned, architected and configured to recognize and filter the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing. 
The likelihood and severity prediction module (110) is coupled to the risk analysis module (70), wherein the likelihood and severity prediction module (110) is pre-trained, fine-tuned, architected and configured to evaluate and grade a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization in relation to the severity of the predicted risk. The risk prediction module (120) is coupled to the risk analysis module (70), wherein the risk prediction module (120) is pre-trained, fine-tuned, architected and configured to finalize representation of select risks from a plurality of risks identified from previous modules including domain level risks, time related risks, sensitive informational risk and likelihood and severity of the risk. The summarization module (130) is coupled to the risk prediction module (120), wherein the summarization module (130) is pre-trained, fine-tuned, architected and configured to aggregate the finalized risks, customize them for user consumption and share a predicted risk recommendation context with a user. Further, the output module (140) is coupled to the risk analysis module (70), wherein the output module (140) is configured to visualize the summary of the above identified risks via a dashboard user interface and transfer the summary of potential risks and recommended contextual risks to a downstream risk mitigation module.
[0059] The bus (320), as used herein, refers to internal memory channels or a computer network that is used to connect computer components and transfer data between them. The bus (320) includes a serial bus or a parallel bus, wherein the serial bus transmits data in bit-serial format and the parallel bus transmits data across multiple wires. The bus (320), as used herein, may include, but is not limited to, a system bus, an internal bus, an external bus, an expansion bus, a frontside bus, a backside bus and the like.
[0060] FIG. 4(a) illustrates a flow chart representing the steps involved in a method to predict, summarize and recommend in an artificial intelligence responsible governance platform in accordance with an embodiment of the present disclosure. FIG. 4(b) illustrates continued steps of method to predict, summarize and recommend in an artificial intelligence responsible governance platform of FIG. 4 (a) in accordance with an embodiment of the present disclosure. The method (400) includes receiving, by an input module, one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model in step (405).
[0061] The method (400) includes analyzing, by a risk analysis module, the one or more input prompts using a Neural network with attention based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks in step (410). The plurality of risks comprises personal risk, financial risk, medical risk, cardinal risk, vehicle risk, consumable risk, language risk, education risk, event risk, organizational risk, security risk, legal risk, social risk, confidential risk, Not Safe For Work risk, weapon risk, technological risk, date-time risk and location risk.
[0062] The identified potential risks are provided as training data to refine the neural network constantly to identify potential risks.
[0063] The method (400) includes predicting, by a domain level prediction module, the nature, purpose, scope and context of the said one or more input prompts in step (415).
[0064] In one embodiment, the method (400) includes determining, by a nature analysis module, the characteristics and inherent qualities of the input and the plurality of risks associated with it based on the identified characteristics and qualities.
[0065] In one embodiment, the method (400) includes understanding, by a scope analysis module, the scope limitation requirements of the input and determining the potential scope violation and consequence associated with the plurality of risks.
[0066] In one embodiment, the method (400) includes determining, by a context analysis module, the specific circumstances and scenarios of usage of the input and the plurality of risks related to using the information outside the approved context.
[0067] In one embodiment, the method (400) includes identifying, by a purpose analysis module, potential intended usage and potential for misuse associated with the input and related plurality of risks.
[0068] The method (400) includes predicting, by a time context prediction module, the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information in step (420).
[0069] In one embodiment, the method (400) includes determining the time related aspects from grammatical tense, its impact on verbs, any time related attributes and other linguistic references related to time based on which different types of time related risks are identified and shared.
[0070] The method (400) includes recognizing and filtering, by a sensitivity prediction module, the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing in step (425).
[0071] In one embodiment, the method (400) includes generating a probabilistic risk score based on the nature of the one or more input prompts thereby ensuring that personal, financial, medical and confidential information are recognized as a potential risk.
[0072] The method (400) includes evaluating and grading, by a likelihood and severity prediction module, a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization in relation to the severity of the predicted risk in step (430).
[0073] Further, in one embodiment, the method (400) includes determining the potential impact and severity of the plurality of risks to evaluate and grade a plurality of risks such as privacy, bias, confidentiality, user safety, sensitive information leakage, domain level, contextual and time based risks and their corresponding probability or likelihood of risk realization in relation to the severity of the predicted risk.
[0074] The method (400) includes finalizing, by a risk prediction module, representation of selected risks from the plurality of risks identified from previous modules including domain level risks, time related risks, sensitive informational risk and likelihood and severity of the risk in step (435).
[0075] The method (400) includes aggregating, by a summarization module, the finalized risks, customizing them for user consumption and sharing a predicted risk recommendation context to a user in step (440).
[0076] In one embodiment, the method (400) includes aggregating the finalized risks recommendation in the form of tags with positional annotations and probabilistic scoring, which can be shared with different kinds of downstream services.
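The "tags with positional annotations and probabilistic scoring" mentioned above might take a shape such as the following sketch. The function `tag_risks` and the field names are hypothetical editorial examples, not a format mandated by the disclosure:

```python
def tag_risks(prompt: str, detections: list) -> list:
    """Convert detected risk spans into tags with positions and scores.

    Each detection is a (matched_text, risk_type, score) tuple.
    """
    tags = []
    for text, risk_type, score in detections:
        start = prompt.find(text)
        if start >= 0:
            tags.append({
                "tag": risk_type,
                "span": [start, start + len(text)],  # character offsets
                "score": score,
            })
    return tags
```

Positional spans let downstream services highlight or redact the exact offending substring rather than the whole prompt.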
[0077] The method (400) includes visualizing, by an output module, the summary of the identified risk via a dashboard user interface in step (445).
[0078] In one embodiment, the method (400) includes visualizing the summary of the above identified risk via a dashboard user interface with various suitable or configurable colour coding for different types of risk summarization, heat maps for probabilistic recommendation and JSON based integration with downstream services like UI systems or LLMOps or MLOps or DevOps pipeline.
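The JSON based integration with downstream services mentioned above could carry a payload along the following lines. The field names (`prompt_id`, `risks`, `color`) are illustrative assumptions, not a schema defined by the specification:

```python
import json

# Hypothetical payload handed to a dashboard or LLMOps/MLOps/DevOps
# pipeline; colours correspond to the configurable risk colour coding.
summary = {
    "prompt_id": "p-001",
    "risks": [
        {"type": "personal", "score": 0.92, "color": "#d62728"},
        {"type": "time", "score": 0.35, "color": "#ffbf00"},
    ],
}

payload = json.dumps(summary, indent=2)
```

Because the payload is plain JSON, the same summary can drive a dashboard heat map and a headless pipeline integration alike.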
[0079] The method (400) includes transferring, by an output module, the summary of potential risks and recommended contextual risks to a downstream risk mitigation module in step (450).
[0080] Various embodiments of the present disclosure provide a system to predict, summarize and recommend a plurality of contextual risks in a responsible artificial intelligence governance platform. The system facilitates seamless and user-friendly interaction by receiving input prompts, ensuring a smooth flow of information into the system. The system identifies and redacts sensitive information within input prompts before they reach the GPT model, which enhances privacy safeguards in real time. Additionally, natural language processing techniques are utilized to recognize and filter out personal details, ensuring that they are not stored or remembered. The system also assesses the potential risk level associated with the input prompts. Assigning risk scores based on the nature of the input ensures higher scrutiny for input prompts containing personal, financial and confidential information. The system adheres to privacy-by-design principles to ensure that the system is robust, accurate and respectful of user confidentiality. The system also ensures prompt action by communicating the potential risks in input prompts to downstream services, thereby facilitating a timely response to potential risks. The combination of these features in the real-time risk analysis system provides a holistic and adaptive approach to governance, ensuring that organizations can effectively manage and mitigate a diverse range of risks associated with artificial intelligence interactions.
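The redaction of sensitive information before a prompt reaches the GPT model, described above, can be sketched minimally as follows. The `redact` function and its regex patterns are hypothetical editorial examples; the disclosed system performs this with trained NLP models:

```python
import re

# Minimal redaction sketch: mask detected sensitive spans before the
# prompt is forwarded to a foundational model. Patterns are illustrative.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(prompt: str) -> str:
    """Replace each detected sensitive span with a placeholder token."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

Placeholders preserve the prompt's structure so the foundational model can still respond usefully without ever seeing the raw values.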
[0081] The system can also be operated in collaboration with cybersecurity experts to stay abreast of emerging threats.
[0082] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
[0083] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0084] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
CLAIMS:
1. A system (10) to predict, summarize and recommend a plurality of contextual risks in a responsible artificial intelligence governance platform comprising:
at least one processor (20) in communication with a client processor (30); and
at least one memory (40) comprising a set of program instructions in the form of a processing subsystem (50), configured to be executed by the at least one processor, wherein the processing subsystem (50) is hosted on a server (55) and configured to execute on a network to control bidirectional communications among a plurality of modules comprising:
an input module (60) configured to receive one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model;
a risk analysis module (70) operatively coupled to the input module (60), wherein the risk analysis module (70) is configured to analyze the one or more input prompts using a neural network with attention-based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks, wherein the risk analysis module (70) comprises a plurality of submodules comprising:
a domain level prediction module (80) coupled to the risk analysis module (70), wherein the domain level prediction module (80) is pre-trained, fine-tuned, architected and configured to predict the nature, purpose, scope and context of the said one or more input prompts;
a time context prediction module (90) coupled to the risk analysis module (70), wherein the time context prediction module is pre-trained, fine-tuned, architected and configured to predict the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information;
a sensitivity prediction module (100) coupled to the risk analysis module (70) wherein the sensitivity prediction module (100) is pre-trained, fine-tuned, architected and configured to recognize and filter the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing;
a likelihood and severity prediction module (110) coupled to the risk analysis module (70), wherein the likelihood and severity prediction module (110) is pre-trained, fine-tuned, architected and configured to evaluate and grade a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization in relation to the severity of the predicted risk;
a risk prediction module (120) coupled to the risk analysis module (70), wherein the risk prediction module (120) is pre-trained, fine-tuned, architected and configured to finalize representation of selected risks from a plurality of risks identified from previous modules including domain level risks, time related risks, sensitive informational risk and likelihood and severity of the risk;
a summarization module (130) coupled to the risk prediction module (120), wherein the summarization module (130) is pre-trained, fine-tuned, architected and configured to aggregate the finalized risks, customize them for user consumption and share a predicted risk recommendation context to a user;
an output module (140) coupled to the risk analysis module (70) wherein the output module (140) is configured to visualize the summary of the above identified risk via a dashboard user interface and transfer the summary of potential risks and recommended contextual risks to a downstream risk mitigation module.
2. The system (10) as claimed in claim 1, wherein the domain level prediction module (80) further comprises:
a nature analysis module (150), coupled to the domain level prediction module (80), wherein the nature analysis module (150) is pre-trained, fine-tuned, architected and configured to determine the characteristics and inherent qualities of the input and the plurality of risks associated with it based on the identified characteristics and qualities;
a scope analysis module (160) coupled to the domain level prediction module (80), wherein the scope analysis module (160) is pre-trained, fine-tuned, architected and configured to understand scope limitation requirements of the input, determine the potential scope violation and consequence associated with the plurality of risks;
a context analysis module (170) coupled to the domain level prediction module (80), wherein the context analysis module (170) is pre-trained, fine-tuned, architected and configured to determine the specific circumstances and scenarios of usage of the input and the plurality of risks related to using the information outside the approved context; and
a purpose analysis module (180) coupled to the domain level prediction module (80) wherein the purpose analysis module (180) is pre-trained, fine-tuned, architected and configured to identify potential intended usage and potential for misuse associated with the input and related plurality of risks.
3. The system (10) as claimed in claim 2, wherein the domain level prediction module (80) is a part of a neural network based foundational module running on the server (55) to help predict the domain based on the nature, scope, context and purpose of the prompt.
4. The system (10) as claimed in claim 1, wherein the time context prediction module (90) is pre-trained, fine-tuned, architected and configured to determine the time related aspects from grammatical tense, its impact on verbs, any time related attributes and other linguistic references related to time based on which different types of time related risks are identified and shared.
5. The system (10) as claimed in claim 1, wherein the likelihood and severity prediction module (110) further comprises a severity analysis module (190) pre-trained, fine-tuned, architected and configured to determine the potential impact and severity of the plurality of risks to evaluate and grade risks such as privacy, bias, confidentiality, user safety, sensitive information leak, domain level, contextual and time based risks, and their corresponding probability or likelihood of risk realization in relation to the severity of the predicted risk.
6. The system (10) as claimed in claim 1, wherein the risk analysis module (70) is configured with Multi-Head attention modules using task specific heads designed for transfer learning and domain specific tasks with fine-tuning to provide the identified potential risks as a training data to refine the neural network constantly to identify potential risks.
7. The system (10) as claimed in claim 1, wherein the sensitivity prediction module (100) is configured to generate a probabilistic risk score based on the nature of the one or more input prompts, thereby ensuring that personal, financial, medical and confidential information is recognized as a potential risk.
8. The system (10) as claimed in claim 1, wherein the summarization module (130) is pre-trained, fine-tuned, architected and configured to aggregate the finalized risks recommendation in the form of tags with positional annotations and probabilistic scoring, which can be shared with different kinds of downstream services.
9. The system (10) as claimed in claim 1, wherein the output module (140) is configured to visualize the summary of the above identified risk via a dashboard user interface with various suitable or configurable colour coding for different types of risk summarization, heat maps for probabilistic recommendation and JSON based integration with downstream services like UI systems or LLMOps or MLOps or DevOps pipeline.
10. The system (10) as claimed in claim 1, wherein the neural network with attention mechanism is a small language model with attention for domain level risk prediction, summarization and recommendation of classified risk, wherein a suitable small language model is selected, pretrained on domain specific data, fine-tuned for specialized tasks, and customized and optimized for the specific purpose of understanding the holistic context and domain level risk and summarizing it, enabling efficient and effective solutions tailored to the unique requirements of domain level risk prediction, summarization and recommendation of classified risk in prompts, responses and information flow.
11. The system (10) as claimed in claim 1, wherein the plurality of risks comprises personal risk, financial risk, medical risk, cardinal risk, vehicle risk, consumable risk, language risk, education risk, event risk, organizational risk, security risk, legal risk, social risk, confidential risk, Not Safe For Work risk, weapon risk, technological risk, date-time risk and location risk.
12. A computer-implemented method (400) to predict, summarize and recommend a plurality of contextual risks in a responsible artificial intelligence governance platform comprising:
receiving, by an input module, one or more input prompts from a user or information flow anticipating interaction with an internal or external foundational artificial intelligence model; (405)
analyzing, by a risk analysis module, the one or more input prompts using a neural network with attention-based artificial intelligence model to predict, summarize and recommend a plurality of contextual risks; (410)
predicting, by a domain level prediction module, the nature, purpose, scope and context of the said one or more input prompts; (415)
predicting, by a time context prediction module, the time related aspects pertaining to the one or more input prompts based on the grammatical, date related and contextual information; (420)
recognizing and filtering, by a sensitivity prediction module, the occurrence of a plurality of sensitive information in the one or more input prompts using Natural Language Processing; (425)
evaluating and grading, by a likelihood and severity prediction module, a plurality of risks and their corresponding probability or likelihood of re-identification or risk realization in relation to the severity of the predicted risk; (430)
finalizing, by a risk prediction module, representation of selected risks from the plurality of risks identified from previous modules including domain level risks, time related risks, sensitive informational risk and likelihood and severity of the risk; (435)
aggregating, by a summarization module, the finalized risks, customizing them for user consumption and sharing a predicted risk recommendation context to a user; (440)
visualizing, by an output module, the summary of the identified risk via a dashboard user interface; (445) and
transferring, by an output module, the summary of potential risks and recommended contextual risks to a downstream risk mitigation module. (450)
Dated this 05th day of April, 2024
Signature
Jinsu Abraham
Patent Agent (IN/PA-3267)
Agent for the Applicant
| # | Name | Date |
|---|---|---|
| 1 | 202341026003-STATEMENT OF UNDERTAKING (FORM 3) [06-04-2023(online)].pdf | 2023-04-06 |
| 2 | 202341026003-PROVISIONAL SPECIFICATION [06-04-2023(online)].pdf | 2023-04-06 |
| 3 | 202341026003-PROOF OF RIGHT [06-04-2023(online)].pdf | 2023-04-06 |
| 4 | 202341026003-POWER OF AUTHORITY [06-04-2023(online)].pdf | 2023-04-06 |
| 5 | 202341026003-FORM FOR STARTUP [06-04-2023(online)].pdf | 2023-04-06 |
| 6 | 202341026003-FORM FOR SMALL ENTITY(FORM-28) [06-04-2023(online)].pdf | 2023-04-06 |
| 7 | 202341026003-FORM 1 [06-04-2023(online)].pdf | 2023-04-06 |
| 8 | 202341026003-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-04-2023(online)].pdf | 2023-04-06 |
| 9 | 202341026003-EVIDENCE FOR REGISTRATION UNDER SSI [06-04-2023(online)].pdf | 2023-04-06 |
| 10 | 202341026003-FORM-26 [24-08-2023(online)].pdf | 2023-08-24 |
| 11 | 202341026003-DRAWING [05-04-2024(online)].pdf | 2024-04-05 |
| 12 | 202341026003-CORRESPONDENCE-OTHERS [05-04-2024(online)].pdf | 2024-04-05 |
| 13 | 202341026003-COMPLETE SPECIFICATION [05-04-2024(online)].pdf | 2024-04-05 |
| 14 | 202341026003-Power of Attorney [15-04-2024(online)].pdf | 2024-04-15 |
| 15 | 202341026003-FORM28 [15-04-2024(online)].pdf | 2024-04-15 |
| 16 | 202341026003-FORM-9 [15-04-2024(online)].pdf | 2024-04-15 |
| 17 | 202341026003-Covering Letter [15-04-2024(online)].pdf | 2024-04-15 |
| 18 | 202341026003-STARTUP [19-04-2024(online)].pdf | 2024-04-19 |
| 19 | 202341026003-FORM28 [19-04-2024(online)].pdf | 2024-04-19 |
| 20 | 202341026003-FORM 18A [19-04-2024(online)].pdf | 2024-04-19 |
| 21 | 202341026003-FER.pdf | 2024-05-29 |
| 22 | 202341026003-FORM 3 [11-07-2024(online)].pdf | 2024-07-11 |
| 23 | 202341026003-OTHERS [26-11-2024(online)].pdf | 2024-11-26 |
| 24 | 202341026003-FORM-26 [26-11-2024(online)].pdf | 2024-11-26 |
| 25 | 202341026003-FORM 3 [26-11-2024(online)].pdf | 2024-11-26 |
| 26 | 202341026003-FER_SER_REPLY [26-11-2024(online)].pdf | 2024-11-26 |
| 27 | 202341026003-COMPLETE SPECIFICATION [26-11-2024(online)].pdf | 2024-11-26 |
| 28 | 202341026003-CLAIMS [26-11-2024(online)].pdf | 2024-11-26 |
| 29 | 202341026003-ABSTRACT [26-11-2024(online)].pdf | 2024-11-26 |
| 1 | SearchStrategy202341026003E_27-05-2024.pdf | 2024-05-27 |