Abstract: A system (10) for parental control in a generative artificial intelligence governance platform is disclosed. The core modules include a parental requirement capture module that allows a parent to specify prerequisite conditions, harm identification, regulatory requirements, acceptable automated risk mitigation and alert services; a context based query assessment module that understands the context of a prompt, compares it against the configured parental requirement and quantifies risk by interacting through subsequent questions; a query treatment module that identifies risks associated with the prompt and mitigates the risks from the child's perspective based on a context of the intended action and the parental requirement defined; a child prompt engineering module that monitors the query, reviews the response and modifies or blocks the response based on the policy defined; and a parental alert module that alerts the parent of the risk, thereby ensuring parental control. FIG. 1
EARLIEST PRIORITY DATE:
This Application claims priority from a provisional patent application filed in India having Patent Application No. 202341026577, filed on April 10, 2023, and titled “SYSTEM AND METHOD FOR PARENTAL CONTROL FOR GENERATIVE AI GOVERNANCE”.
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to an artificial intelligence governance platform and, more particularly, to a system and a method for parental control in a generative artificial intelligence governance platform.
BACKGROUND
[0002] A generative artificial intelligence (GenAI) governance platform is designed to manage the development, deployment and use of generative artificial intelligence techniques. In other words, a framework is created to ensure these models are used responsibly and ethically while maximizing their potential benefits and minimizing risks. Typically, generative artificial intelligence is capable of generating new content like text, images, audio and video that did not exist before.
[0003] In an era where artificial intelligence (AI) innovations permeate almost every facet of our lives, the concept of governance has never been more critical. Specifically, governance becomes more significant when children use generative artificial intelligence systems for several purposes such as education, learning, entertainment, social communication, creative thinking, project work and the like. Children are extensively using GenAI platforms and are vulnerable to manipulation. As with every technology, there are several potential risks that arise when children use GenAI. The impact of these risks ranges from potential threats to privacy to threats to mental health. Using parental controls and adjusting privacy settings can help protect children from inappropriate content and potential data breaches. However, such measures alone are not effective. Further, the non-availability of parental control for children is a major concern.
[0001] Generative AI and transformer based architectures have gained significant interest for their effectiveness in various natural language processing (NLP) tasks. However, they perform poorly in the aspects of privacy, accountability, safety, security, fairness, explainability and reliability, especially with respect to the harms that may accrue to children while using generative AI, making the need for responsible AI and safe information exchange acute. Further, traditional language models, while powerful, often come with high computational costs and may not be optimized for specific tasks or domains.
[0002] Hence, there is a need for an improved system for parental control in a generative artificial intelligence governance platform to address the aforementioned issue(s).
OBJECTIVE OF THE INVENTION
[0003] An objective of the present invention is to provide a dedicated system that provides parental control in a generative artificial intelligence governance platform through prerequisite conditions, automated child harm related risk identification, corresponding regulatory requirements, and acceptable automated risk mitigation and alert services.
[0004] Another objective of the present disclosure is to give parents the confidence to allow children to use GenAI platforms under automated supervision with minimal human oversight.
BRIEF DESCRIPTION
[0005] In accordance with an embodiment of the present disclosure, a system for parental control in a generative artificial intelligence governance platform is provided. The system includes at least one processor in communication with a client processor. The system also includes at least one memory that includes a set of program instructions in the form of a processing subsystem, configured to be executed by the at least one processor. The processing subsystem is hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes a registration module configured to receive credential information from a parent to register at least one child in the generative artificial intelligence governance platform. The processing subsystem includes a parental consent module operatively coupled to the registration module, wherein the parental consent module is configured to receive authentication information based on the registration of the at least one child for an intended action. The processing subsystem includes a parental requirement capture module operatively coupled to the parental consent module, wherein the parental requirement capture module is configured to allow the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services, wherein the plurality of prerequisite conditions are subject to modification based on the age of the child over time. The processing subsystem includes a context based query assessment module operatively coupled to the parental requirement capture module, wherein the context based query assessment module is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to understand the context of a prompt, compare it against the configured parental requirement and quantify risk by interacting through one or more subsequent questions. The processing subsystem includes a query treatment module operatively coupled to the context based query assessment module, wherein the query treatment module is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to identify one or more risks associated with the prompt. Further, the query treatment module is configured to mitigate the one or more risks from the perspective of the at least one child based on a context of the intended action and the parental requirement defined. Furthermore, the processing subsystem also includes a child prompt engineering module operatively coupled to the query treatment module, wherein the child prompt engineering module is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to monitor the query, review the response from the downstream AI module for any predefined risk or risks defined by the parental requirement, and modify or block the response based on the policy defined. Moreover, the processing subsystem includes a parental alert module operatively coupled to the child prompt engineering module, wherein the parental alert module is configured to alert the parent in the presence of the risk, thereby ensuring parental control in the generative artificial intelligence governance platform.
[0006] In accordance with an embodiment of the present disclosure, a method for parental control in a generative artificial intelligence governance platform is provided. The method includes receiving, by a registration module of a processing subsystem, credential information from a parent to register at least one child in the generative artificial intelligence governance platform. The method includes receiving, by a parental consent module, authentication information based on the registration of the at least one child for an intended action. The method includes allowing, by a parental requirement capture module, the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services, wherein the plurality of prerequisite conditions are subject to modification based on the age of the child over time. The method includes understanding, by a context based query assessment module, the context of a prompt, comparing the context against the configured parental requirement and quantifying a risk by interacting through one or more subsequent questions. The method includes identifying, by a query treatment module, the presence of one or more risks associated with the prompt. The method includes mitigating, by the query treatment module, the one or more risks from the child's perspective based on a context of the intended action and the parental requirement defined. The method includes monitoring, by a child prompt engineering module, the query, reviewing the response from the downstream AI module for any predefined risk or risks defined by the parental requirement, and modifying or blocking the response based on the policy defined. The method includes alerting, by a parental alert module, the parent in the presence of the risk, thereby ensuring parental control in the generative artificial intelligence governance platform.
[0007] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0009] FIG. 1 is a block diagram representation of a system for parental control in a generative artificial intelligence governance platform in accordance with an embodiment of the present disclosure;
[0010] FIG. 2 is a block diagram representation of an embodiment of the system for parental control in a generative artificial intelligence governance platform of FIG. 1 in accordance with an embodiment of the present disclosure;
[0011] FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure;
[0012] FIG. 4(a) illustrates a flow chart representing the steps involved in a method for parental control in a generative artificial intelligence governance platform in accordance with an embodiment of the present disclosure; and
[0013] FIG. 4(b) illustrates continued steps of method for parental control in a generative artificial intelligence governance platform of FIG. 4(a) in accordance with an embodiment of the present disclosure.
[0014] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0015] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated computer-implemented system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0016] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or subsystems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures, or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
[0017] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0018] Embodiments of the present disclosure relate to a system and a method for parental control in a generative artificial intelligence governance platform. The system includes at least one processor in communication with a client processor. The system also includes at least one memory that includes a set of program instructions in the form of a processing subsystem, configured to be executed by the at least one processor. The processing subsystem is hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes a registration module configured to receive credential information from a parent to register at least one child in the generative artificial intelligence governance platform. The processing subsystem includes a parental consent module operatively coupled to the registration module, wherein the parental consent module is configured to receive authentication information based on the registration of the at least one child for an intended action. The processing subsystem includes a parental requirement capture module operatively coupled to the parental consent module, wherein the parental requirement capture module is configured to allow the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services, wherein the plurality of prerequisite conditions are subject to modification based on the age of the child over time. The processing subsystem includes a context based query assessment module operatively coupled to the parental requirement capture module, wherein the context based query assessment module is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to understand the context of a prompt, compare it against the configured parental requirement and quantify risk by interacting through one or more subsequent questions. The processing subsystem includes a query treatment module operatively coupled to the context based query assessment module, wherein the query treatment module is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to identify one or more risks associated with the prompt. Further, the query treatment module is configured to mitigate the one or more risks from the perspective of the at least one child based on a context of the intended action and the parental requirement defined. Furthermore, the processing subsystem also includes a child prompt engineering module operatively coupled to the query treatment module, wherein the child prompt engineering module is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to monitor the query, review the response from the downstream AI module for any predefined risk or risks defined by the parental requirement, and modify or block the response based on the policy defined. Moreover, the processing subsystem includes a parental alert module operatively coupled to the child prompt engineering module, wherein the parental alert module is configured to alert the parent in the presence of the risk, thereby ensuring parental control in the generative artificial intelligence governance platform.
[0019] FIG. 1 is a block diagram representation of a system for parental control in a generative artificial intelligence governance platform in accordance with an embodiment of the present disclosure. The system (10) includes at least one processor (20) in communication with a client processor (30). The processor (20) generally refers to a computational unit, such as a central processing unit (CPU), a graphical processing unit (GPU) or a hybrid combination thereof, responsible for executing instructions in a computer system. The phrase "in communication with a client processor" implies that there is a relationship or interaction between the at least one processor and a specific type of processor referred to as a "client processor". Here, the term "client processor" refers to a processor that initiates requests or tasks and interacts with another processor (which may be a server processor) to fulfil those requests.
[0020] The system (10) also includes at least one memory (40) that includes a set of program instructions in the form of a processing subsystem (50), configured to be executed by the at least one processor. The processing subsystem (50) is hosted on a server (55) and configured to execute on a network (not shown in FIG. 1) to control bidirectional communications among a plurality of modules. As used herein, the memory (40) is a storage component within the system used for storing data and instructions that can be accessed by the processor, which executes a sequence of commands or directions written in a programming language. In one embodiment, the server (55) may include a cloud server. In another embodiment, the server (55) may include a local server. The processing subsystem (50) is configured to execute on the network to control bidirectional communications among the plurality of modules. In one embodiment, the network may include a wired network such as a local area network (LAN). In another embodiment, the network may include a wireless network such as Wi-Fi, Bluetooth, Zigbee, near field communication (NFC), radio frequency identification (RFID), infrared communication, or the like.
[0021] The processing subsystem (50) includes a registration module (60) configured to receive credential information from a parent to register at least one child in the generative artificial intelligence governance platform. The credential information is typically the information that is required to create an account or gain access to the system (10). Examples of the credential information include, but are not limited to, a username, a password, an email address, personal information, security questions and/or answers, and verification codes. The use of credentials enhances security and minimizes unauthorized access. Further, the parent enters the credential information via a user interface configured on a user device (not shown in FIG. 1). Examples of the user device include, but are not limited to, a personal computer (PC), a mobile phone, a tablet device, a personal digital assistant (PDA), a smart phone, a laptop, and a pager.
[0022] As used herein, the generative artificial intelligence governance platform refers to a system designed to regulate and manage the deployment and use of artificial intelligence systems. Specifically, the artificial intelligence system is generative in nature. It will be appreciated by those skilled in the art that 'generative in nature' refers to the ability of the artificial intelligence system to generate or create new data.
[0023] The processing subsystem (50) includes a parental consent module (65) operatively coupled to the registration module (60), wherein the parental consent module (65) is configured to receive authentication information based on the registration of the at least one child for an intended action. Upon successful registration of the child, the parent is required to provide details pertaining to the intended action for which the child will use the generative artificial intelligence governance platform. It must be noted that a guardian of the child may also provide the details. The intended action refers to the purpose for which the child uses the generative artificial intelligence governance platform. Further, the intended action can vary depending on the specific features and functionalities provided by the generative artificial intelligence governance platform. For instance, the intended action includes, but is not limited to, education, entertainment, learning, social interaction and personal development.
[0024] In one embodiment, the parental consent module (65) is configured to receive and process verifiable consent, in the form of cryptographic signatures of parents, guardians or teens approved for consenting, for a foundational AI model based service, together with acceptable levels of data and model usage and the applicable regulatory requirements. Specifically, the acceptable levels of data and model usage define the boundaries for retrieving information. Further, the acceptable levels of data and model usage refer to the guidelines or standards according to which data should be utilized to train and fine-tune the foundational AI model. These acceptable levels are associated with applicable regulatory requirements to address various concerns such as data privacy, transparency, accountability and safety.
[0025] Further, the cryptographic signature for consent provides a secure and reliable way to obtain and verify the consent. It involves digitally signing consent documents or agreements using cryptographic techniques to ensure the integrity, authenticity and non-repudiation of the consent.
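By way of illustration only, the following is a minimal sketch of how such a verifiable consent record could be signed and verified using an Ed25519 digital signature. The ConsentRecord structure, its field names and the helper functions are assumptions introduced for this example and do not form part of the disclosure.

```python
# Minimal sketch: verifiable parental consent via Ed25519 signatures.
# The ConsentRecord structure and field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

@dataclass
class ConsentRecord:
    child_id: str
    intended_action: str          # e.g. "education", "entertainment"
    data_usage_level: str         # acceptable level of data and model usage
    regulatory_regime: str        # e.g. "COPPA", "GDPR", "UK Children's Code"

def sign_consent(record: ConsentRecord, parent_key: Ed25519PrivateKey) -> bytes:
    """Parent digitally signs the serialized consent record."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return parent_key.sign(payload)

def verify_consent(record: ConsentRecord, signature: bytes,
                   parent_public_key: Ed25519PublicKey) -> bool:
    """Platform verifies integrity, authenticity and non-repudiation."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    try:
        parent_public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Usage example
parent_key = Ed25519PrivateKey.generate()
record = ConsentRecord("child-001", "education", "no-personal-data", "COPPA")
sig = sign_consent(record, parent_key)
assert verify_consent(record, sig, parent_key.public_key())
```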
[0026] The processing subsystem (50) includes a parental requirement capture module (70) operatively coupled to the parental consent module (65), wherein the parental requirement capture module (70) is configured to allow the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services, wherein the plurality of prerequisite conditions are subject to modification based on the age of the child over time. For instance, the child may fetch the top ten movies; however, the boundary can be set to movies rated as suitable for viewers below 18 years. This boundary can then be modified as the child grows. The alert services typically notify the parent about specific events, anomalies or conditions that require attention or action. These alerts serve to keep the parent informed and enable the parent to respond promptly to relevant information or changes in the generative artificial intelligence governance platform.
[0027] In one embodiment, the parental requirement capture module (70) is configured to add, remove or update applicable regulatory requirements, which may include various state, national or regional regulatory requirements, through a drag and drop action of regulatory requirement modification. The state, national or regional regulatory requirements refer to the laws, regulations, standards and guidelines established by governmental authorities to govern the use of artificial intelligence technologies within their respective jurisdictions. Examples include the Children's Online Privacy Protection Act (COPPA) in the United States, the General Data Protection Regulation (GDPR) in the European Union and the Children's Code in the United Kingdom. The drag and drop action is typically a user interface feature that allows the parent to easily modify or update the regulatory requirements by dragging and dropping elements or components within the generative artificial intelligence governance platform's interface.
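As an illustrative aid, a captured parental requirement could be represented as a simple configuration object with age-dependent boundaries, as sketched below. All field names, categories and threshold values are assumptions for illustration only.

```python
# Illustrative sketch of a parental requirement configuration.
# All field names and threshold values are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class ParentalRequirement:
    child_age: int
    allowed_content_rating: str                 # prerequisite condition, e.g. "U", "PG", "12A"
    risk_categories_to_block: list = field(default_factory=lambda: ["CSAM", "NSFW", "weapon"])
    regulatory_requirements: list = field(default_factory=lambda: ["COPPA"])
    auto_mitigation: bool = True                # acceptable automated risk mitigation
    alert_channel: str = "email"                # alert service preference

    def update_for_age(self, new_age: int) -> None:
        """Prerequisite conditions may be relaxed as the child grows."""
        self.child_age = new_age
        if new_age >= 13:
            self.allowed_content_rating = "12A"

# Usage example: the boundary is modified over time by the parent.
req = ParentalRequirement(child_age=9, allowed_content_rating="U")
req.update_for_age(13)
```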
[0028] The processing subsystem (50) includes a context based query assessment module (75) operatively coupled to the parental requirement capture module (70), wherein the context based query assessment module (75) is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to understand the context of a prompt, compare it against the configured parental requirement and quantify risk by interacting through one or more subsequent questions. Frequently, the prompt entered by the child is not precise. In such a scenario, the context based query assessment module (75) is configured to ask subsequent questions to the child, thereby creating an interaction to understand the context and intention of the prompt. The prompt is then altered accordingly and sent to the generative artificial intelligence governance platform.
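A simplified sketch of such a clarification loop is given below. The classify_context and ask_child helpers are hypothetical placeholders standing in for the fine-tuned attention based model and the child-facing chat interface; they are not part of the disclosed implementation.

```python
# Simplified sketch of the interactive clarification loop described above.
# classify_context() and ask_child() are hypothetical placeholders.
CONFIDENCE_THRESHOLD = 0.8
MAX_FOLLOW_UPS = 3

def classify_context(prompt: str, parental_requirement: dict) -> dict:
    # Placeholder: a real module would call the pre-trained, fine-tuned model.
    ambiguous = len(prompt.split()) < 3
    return {"confidence": 0.5 if ambiguous else 0.9,
            "risk_score": 0.1,
            "follow_up_question": "Which genre would you like?"}

def ask_child(question: str) -> str:
    # Placeholder: a real module would ask the child through the chat interface.
    return "animated adventure"

def assess_prompt(prompt: str, parental_requirement: dict) -> dict:
    """Iteratively refine an ambiguous child prompt before sending it downstream."""
    assessment = classify_context(prompt, parental_requirement)
    for _ in range(MAX_FOLLOW_UPS):
        if assessment["confidence"] >= CONFIDENCE_THRESHOLD:
            break
        answer = ask_child(assessment["follow_up_question"])
        prompt = f"{prompt} ({assessment['follow_up_question']} {answer})"
        assessment = classify_context(prompt, parental_requirement)
    return {"refined_prompt": prompt,
            "risk_score": assessment["risk_score"],
            "within_boundary": assessment["risk_score"] <= parental_requirement["max_risk"]}

# Usage example: the vague prompt "movie" is refined through a follow-up question.
result = assess_prompt("movie", {"max_risk": 0.5})
```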
[0029] The present disclosure includes the usage of small language models (SLM) with pre-training and fine-tuning capabilities, enabling new possibilities for specialized applications in various fields at lower cost. A small language model may have fewer parameters, fewer layers, smaller layer sizes and fewer attention heads, and hence a lower compute requirement, making it more cost effective, sustainable and environment friendly for the task of ensuring children's safety while using generative AI. The neural network with attention mechanism used in one embodiment may be a small language model, which is a more efficient architecture compared to the popular large language model. The steps involved may be selection of a suitable small language model, pre-training on domain specific data, fine-tuning for the specialized task, and customization and optimization for the specific purpose of ensuring children's safety while using generative AI through prevention of harms in the prompts, responses and information flow, enabling efficient and effective solutions tailored to the unique requirements of children's interaction with generative AI models, including compliance with children specific regulation. Prior to fine-tuning, the selected small language model undergoes pre-training on domain-specific datasets of privacy preserved or synthetic children's data. This pre-training phase enables the model to learn domain-specific features of children specific risks, vocabulary and contextual understanding, enhancing its performance and adaptability for ensuring children's safety while using generative AI. Following pre-training, the small language model is fine-tuned using datasets of various types of children's safety related prompts. Fine-tuning involves adjusting the model's parameters and updating its weights to optimize performance in understanding harm to children while using generative AI. By leveraging the knowledge gained during pre-training, the model can quickly adapt to the nuances of ensuring children's safety and achieve superior performance with limited data. The disclosed system includes customization and optimization of the fine-tuning process for the selected small language model to suit the requirements of ensuring children's safety while using generative AI. This may involve adjusting hyperparameters, selecting appropriate training strategies for various sub-modules with multiple attention heads, or incorporating domain-specific information, enabling efficient solutions tailored to the unique requirements of parental consent implementation and the regulatory requirements for children's safety while using generative AI.
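For illustration, the following outline shows how a small language model could be fine-tuned as a child-safety risk classifier using the Hugging Face transformers library. The model name, label set and example data are assumptions chosen for this sketch; an actual deployment would use privacy preserved or synthetic children's data as described above.

```python
# Illustrative fine-tuning of a small language model as a child-safety
# risk classifier. Model name, labels and example data are assumptions.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "distilbert-base-uncased"        # a small, low-compute model (assumption)
labels = ["safe", "unsafe_for_children"]

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels))

# Synthetic placeholder examples of children's prompts with safety labels.
data = Dataset.from_dict({
    "text": ["suggest a cartoon about space", "how to buy alcohol online"],
    "label": [0, 1],
})
data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                    padding="max_length", max_length=64),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="child-safety-slm",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=data,
)
trainer.train()   # fine-tunes the pre-trained SLM on the safety task
```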
[0030] In one embodiment, the context based query assessment module (75) is configured to identify potentially harmful prompts or responses that are not suitable for consumption by children or are restricted by parents, including harmful words and CSAM content, which may be identified based on an assessment of various types of prompts, including text, images, audio and video, using a fine-tuned foundational AI module. At times, the response from the generative artificial intelligence governance platform is not suitable for the child. In such a scenario, the context based query assessment module (75) is configured to appropriately alter the response based on the child's age.
[0031] The processing subsystem (50) includes a query treatment module (80) operatively coupled to the context based query assessment module (75), wherein the query treatment module (80) is a Neural Network with attention based AI module that is pre-trained and fine-tuned. The query treatment module (80) is configured to identify one or more risks associated with the prompt. The one or more risks are potential dangers or hazards associated with the prompts or suggestions provided by the AI system to the child. Such risks may manifest in various forms, such as personal risk, financial risk, medical risk, cardinal risk, vehicle risk, consumable risk, language risk, education risk, event risk, organizational risk, security risk, legal risk, social risk, confidential risk, Not Safe for Work risk, weapon risk, technological risk, date-time risk and location risk. It must be noted that the said risks are evaluated from the child's perspective. Further, these risks can vary depending on the nature of the prompts and the context in which they are presented. Furthermore, the query treatment module (80) is configured to mitigate the one or more risks from the perspective of the at least one child based on a context of the intended action and the parental requirement defined. In other words, certain risks may be relevant to the child but not to the parent. It must be noted that the query treatment module (80) is configured to identify the risks at a word level.
[0032] Upon identifying the risks, the prompt is modified and sent to a downstream AI module (95).
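The word level identification and mitigation described above may be pictured with the following minimal sketch. The keyword lexicon and category names are illustrative stand-ins; the disclosed module relies on a pre-trained, fine-tuned neural network with attention rather than a lookup table.

```python
# Minimal sketch of word-level risk identification and mitigation.
# The keyword lexicon is an illustrative stand-in for the fine-tuned model.
RISK_LEXICON = {
    "address": "privacy risk",
    "password": "security risk",
    "credit": "financial risk",
    "knife": "weapon risk",
}

def identify_risks(prompt: str) -> dict:
    """Return a mapping of risky words in the prompt to risk categories."""
    return {word: RISK_LEXICON[word.lower().strip(".,?!")]
            for word in prompt.split()
            if word.lower().strip(".,?!") in RISK_LEXICON}

def mitigate(prompt: str, risks: dict) -> str:
    """Redact risky words before the modified prompt is sent downstream."""
    for word in risks:
        prompt = prompt.replace(word, "[removed]")
    return prompt

# Usage example
prompt = "share my address and password with the movie site"
risks = identify_risks(prompt)       # {'address': 'privacy risk', 'password': 'security risk'}
safe_prompt = mitigate(prompt, risks)
```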
[0033] In one embodiment, the query treatment module (80) is configured to identify and mitigate various kinds of risks including Child Sexual Abuse Material (CSAM) risk, financial risk, privacy risk, security risk, health risk and other configured risks as per regulatory requirements.
[0034] In another embodiment, the query treatment module (80) is configured to receive human feedback from a human in the loop who monitors the interaction between the child and the generative AI system. Based on the parental configuration, such cases are forwarded to seek feedback either from the parents or from an externally assigned entity, for unknown or pre-defined risks of certain types, or where the system's confidence in the risk classification is low.
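One possible way to implement such an escalation, sketched under the assumption of hypothetical notification helpers, is shown below.

```python
# Sketch of human-in-the-loop escalation for low-confidence classifications.
# notify_parent() and notify_external_reviewer() are hypothetical placeholders.
CONFIDENCE_FLOOR = 0.7
KNOWN_RISKS = {"CSAM risk", "financial risk", "privacy risk",
               "security risk", "health risk"}

def needs_human_feedback(risk_label: str, confidence: float) -> bool:
    """Escalate unknown risk types or classifications the model is unsure about."""
    return risk_label not in KNOWN_RISKS or confidence < CONFIDENCE_FLOOR

def notify_parent(risk_label: str) -> str:
    return f"parent asked to review: {risk_label}"            # placeholder transport

def notify_external_reviewer(risk_label: str) -> str:
    return f"external reviewer asked to review: {risk_label}" # placeholder transport

def route_for_feedback(risk_label: str, confidence: float, parental_config: dict) -> str:
    if not needs_human_feedback(risk_label, confidence):
        return "auto-handled"
    if parental_config.get("review_by") == "parent":
        return notify_parent(risk_label)
    return notify_external_reviewer(risk_label)

# Usage example: an unknown risk type with low confidence is routed to the parent.
print(route_for_feedback("unknown risk", 0.4, {"review_by": "parent"}))
```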
[0035] The processing subsystem (50) includes a child prompt engineering module (85) operatively coupled to the query treatment module (80), wherein the child prompt engineering module (85) is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to monitor the query, review the response from the downstream AI module (95) for any predefined risk or risks defined by the parental requirement, and modify or block the response based on the policy defined. For instance, consider that the prompt is an algebraic question. The response to this prompt must be expressed in a form that the child can easily comprehend.
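A small sketch of how a defined policy could be applied to a downstream response is shown below; detect_risks_in_response is a hypothetical stand-in for the fine-tuned attention based model that scores downstream responses.

```python
# Sketch of response review against the parental policy.
# detect_risks_in_response() is a hypothetical stand-in for the fine-tuned model.
def detect_risks_in_response(response: str) -> list:
    # Placeholder: a real module would use the fine-tuned neural network.
    return ["NSFW risk"] if "adult" in response.lower() else []

def apply_policy(response: str, policy: dict) -> str:
    """Modify or block a downstream response according to the defined policy."""
    risks = detect_risks_in_response(response)
    if not risks:
        return response
    if policy.get("action", "block") == "block":
        return "This response was blocked. Your parent has been notified."
    # "modify" policy: replace the flagged content with a child-friendly version.
    return policy.get("safe_replacement", "Here is a child-friendly summary instead.")

# Usage example
policy = {"action": "block"}
print(apply_policy("This movie contains adult scenes ...", policy))
```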
[0036] As used herein, the phrase "pre-trained, fine-tuned and configured" refers to various stages pertaining to the development and deployment of the AI model. 'Pre-trained' signifies that the AI model has been initially trained using a large set of training data to understand or analyze patterns, features or representations in input data. Typically, the dataset encompasses a diverse range of data. It must be noted that the pre-training stage allows the AI model to be further customized for specific tasks. 'Fine-tuned' refers to the stage or process of further training the said pre-trained model on a task-specific or domain-specific dataset. This allows the AI model to leverage the understanding or analysis developed during the pre-training stage to meet the task's requirements. Fine-tuning also helps to improve the performance and accuracy of the pre-trained model. Similarly, 'configured' refers to the process of customizing or setting up the platform according to specific requirements, preferences, or parameters.
[0037] In another embodiment, the query treatment module (80) is trained with a plurality of words or information that is sensitive to the child.
[0038] The processing subsystem (50) includes a parental alert module (90) operatively coupled to the child prompt engineering module (85), wherein the parental alert module (90) is configured to alert the parent in the presence of the risk, thereby ensuring parental control in the generative artificial intelligence governance platform. The parent is allowed to block the response from the downstream AI module (95) upon identification of the one or more risks.
[0039] In one embodiment, the child is also informed that his/her parent has been notified about the potential risk.
[0040] In one embodiment, the parental alert module (90) is configured to notify the parent of all the identified risks and mitigations and to provide a periodic regulatory report as per the configuration and regulatory requirement.
[0041] In another embodiment, the parental alert module (90) is configured to lock the at least one child's access to the generative artificial intelligence governance platform upon identifying the presence of the risk.
[0042] In yet another embodiment, the parental alert module (90) is configured to notify the at least one child with regard to the alert directed to the parent.
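The alert behaviours described in the preceding paragraphs, namely alerting the parent, optionally locking the child's access and informing the child, could be combined as in the following sketch. The send_alert transport is an assumption; any email, SMS or push channel could be used.

```python
# Sketch combining the alert behaviours described above: alert the parent,
# optionally lock the child's access, and inform the child.
def send_alert(recipient: str, message: str) -> None:
    print(f"[alert to {recipient}] {message}")        # placeholder transport

def handle_risk(risk: str, child_id: str, session: dict, lock_on_risk: bool = True) -> None:
    send_alert("parent", f"Risk detected for {child_id}: {risk}")
    if lock_on_risk:
        session["locked"] = True                      # lock platform access
    send_alert(child_id, "Your parent has been notified about this request.")

# Usage example
session = {"locked": False}
handle_risk("NSFW risk", "child-001", session)
```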
[0043] FIG. 2 is a block diagram representation of an embodiment of the system for parental control in a generative artificial intelligence governance platform of FIG. 1 in accordance with an embodiment of the present disclosure. In one embodiment, the system (10) includes a database module (100) operatively coupled to the parental alert module (90). The database module (100) is configured to store the prompts entered by the at least one child, the outcome of the prompts and the one or more risks associated with the prompts and responses along with recommended mitigation.
[0044] In one embodiment, the system (10) includes a storage module (105) operatively coupled to the context based query assessment module (75) wherein the storage module (105) is configured to store the history of prompts and responses along with identified risk.
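By way of example, the stored history could take the form of a simple relational table, as sketched below using SQLite; the schema and column names are assumptions for illustration.

```python
# Illustrative SQLite schema for the prompt/response history with
# identified risks and recommended mitigation. Column names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE interaction_history (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        child_id TEXT,
        prompt TEXT,
        response TEXT,
        identified_risks TEXT,        -- e.g. comma-separated risk categories
        recommended_mitigation TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO interaction_history "
    "(child_id, prompt, response, identified_risks, recommended_mitigation) "
    "VALUES (?, ?, ?, ?, ?)",
    ("child-001", "top ten movies", "<blocked>", "NSFW risk", "block and alert parent"),
)
conn.commit()
```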
[0045] Consider a real-world scenario where the generative artificial intelligence governance platform is deployed in a home environment. The registration module (60) allows a parent to register his/her child with credential information to use the generative artificial intelligence governance platform. In one embodiment, the credential information includes a username and a password. Upon successful registration, the parental consent module (65) enables the parent to provide authentication information for an intended action of the child. Consider that the intended action is entertainment. Entertainment refers to activities, content or experiences designed to engage, amuse and stimulate the child. It encompasses a wide range of media, including television shows, movies, video games, books, music, toys and interactive experiences. However, there are also potential risks associated with certain forms of media or content. Therefore, the parental requirement capture module (70) allows the parent to specify the boundaries, automated child harm related risk identification, regulatory requirements, risk mitigation and alert services associated with the intended action. The parent can modify the said boundaries as the child grows. Now, consider that the child enters a prompt requesting a 'movie'. The prompt is not precise. In such a scenario, the context based query assessment module (75) understands the context of the prompt, compares it with the parental requirement and quantifies one or more potential risks. This is done by interacting with the child through subsequent questions. For instance, the questions can pertain to the genre of the movie, specific actors, the duration of the movie and the like. During this interaction, the prompt is modified and made ready to be sent to a downstream foundational model. The modification of the prompt ensures that an accurate response is obtained. Subsequently, one or more risks associated with the prompt are identified by the query treatment module (80). The one or more risks are identified at a word level. Further, the one or more risks are identified from the child's perspective based on a context of the intended action and the parental requirement defined. The prompt is then sent to the downstream foundational model. The child prompt engineering module (85) reviews the response obtained from the downstream foundational model for any predefined risk or risks defined by the parental requirement, and modifies or blocks the response based on the policy defined. For instance, the response may include adult content. In such a scenario, the parental alert module (90) alerts the parent about the presence of the risk, thereby ensuring parental control in the generative artificial intelligence governance platform.
[0046] FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure. The server (300) includes processor(s) (330) and memory (310) operatively coupled to a bus (320). The processor(s) (330), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0047] The memory (310) includes several subsystems stored in the form of an executable program which instructs the processor (330) to perform the method steps illustrated in FIG. 4(a) and FIG. 4(b). The memory (310) includes the processing subsystem (50) of FIG. 1. The processing subsystem (50) further has the following modules: a registration module (60), a parental consent module (65), a parental requirement capture module (70), a context based query assessment module (75), a query treatment module (80), a child prompt engineering module (85) and a parental alert module (90).
[0048] The registration module (60) is configured to receive credential information from a parent to register at least one child in the generative artificial intelligence governance platform. The parental consent module (65) is operatively coupled to the registration module (60), wherein the parental consent module (65) is configured to receive authentication information based on the registration of the at least one child for an intended action. The parental requirement capture module (70) is operatively coupled to the parental consent module (65), wherein the parental requirement capture module (70) is configured to allow the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services, wherein the plurality of prerequisite conditions are subject to modification based on the age of the child over time. The context based query assessment module (75) is operatively coupled to the parental requirement capture module (70), wherein the context based query assessment module (75) is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to understand the context of a prompt, compare it against the configured parental requirement and quantify risk by interacting through one or more subsequent questions. The query treatment module (80) is operatively coupled to the context based query assessment module (75), wherein the query treatment module (80) is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to identify one or more risks associated with the prompt. Further, the query treatment module (80) is configured to mitigate the one or more risks from the perspective of the at least one child based on a context of the intended action and the parental requirement defined. The child prompt engineering module (85) is operatively coupled to the query treatment module (80), wherein the child prompt engineering module (85) is a Neural Network with attention based AI module that is pre-trained, fine-tuned and configured to monitor the query, review the response from the downstream AI module (95) for any predefined risk or risks defined by the parental requirement, and modify or block the response based on the policy defined. Further, the parental alert module (90) is operatively coupled to the child prompt engineering module (85), wherein the parental alert module (90) is configured to alert the parent in the presence of the risk, thereby ensuring parental control in the generative artificial intelligence governance platform.
[0049] The bus (320), as used herein, refers to internal memory channels or a computer network that is used to connect computer components and transfer data between them. The bus (320) includes a serial bus or a parallel bus, wherein the serial bus transmits data in bit-serial format and the parallel bus transmits data across multiple wires. The bus (320), as used herein, may include, but is not limited to, a system bus, an internal bus, an external bus, an expansion bus, a frontside bus, a backside bus and the like.
[0050] FIG. 4(a) illustrates a flow chart representing the steps involved in a method for parental control in a generative artificial intelligence governance platform in accordance with an embodiment of the present disclosure. FIG. 4(b) illustrates continued steps of method for parental control in a generative artificial intelligence governance platform of FIG. 4(a) in accordance with an embodiment of the present disclosure. The method (400) includes receiving, by a registration module of a processing subsystem, credential information from a parent to register at least one child in the generative artificial intelligence governance platform in step (405).
[0051] The method (400) includes receiving, by a parental consent module, authentication information based on the registration of the at least one child for an intended action in step (410).
[0052] In one embodiment, the method (400) includes receiving and processing verifiable consent, in the form of cryptographic signatures of parents, guardians or teens approved for consenting, for a foundational AI model based service, together with acceptable levels of data and model usage and the applicable regulatory requirements.
[0053] The method (400) includes allowing, by a parental requirement capture module, the parent to specify a plurality of prerequisite conditions and automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services wherein the plurality of prerequisite conditions are subjected to modification based on the age of the child over time in step (415).
[0054] In one embodiment, the method (400) includes adding, removing or updating applicable regulatory requirements, which may include various state, national or regional regulatory requirements, through a drag and drop action of regulatory requirement modification.
[0055] The method (400) includes understanding, by a context based query assessment module, the context of a prompt, comparing the context against the configured parental requirement and quantifying a risk by interacting with one or more subsequent questions in step (420).
[0056] In one embodiment, the method (400) includes identifying potentially harmful prompts or responses that are not suitable for consumption by children or are restricted by parents, including harmful words and CSAM content, which may be identified based on an assessment of various types of prompts, including text, images, audio and video, using a fine-tuned foundational AI module.
[0057] In yet another embodiment, the method (400) includes storing the history of prompts and responses along with identified risk.
[0058] The method (400) includes identifying, by a query treatment module, the presence of one or more risks associated with the prompt in step (425).
[0059] The method (400) includes mitigating, by the query treatment module, the one or more risks from the child perspective based on a context of the intended action and parental requirement defined in step (430).
[0060] In one embodiment, the method (400) includes identifying and mitigating various kinds of risks including CSAM risk, financial risk, privacy risk, security risk, health risk and other configured risks as per regulatory requirements.
[0061] The method (400) includes monitoring, by a child prompt engineering module, the query, reviewing the response from the downstream AI module for any predefined risk or risks defined by the parental requirement, and modifying or blocking the response based on the policy defined in step (435).
[0062] The method (400) includes alerting, by a parental alert module, the parent in the presence of the risk thereby ensuring parental control in the generative artificial intelligence governance platform in step (440).
[0063] In one embodiment, the method (400) includes notifying all the identified risks, mitigations and a periodic regulatory report as per the configuration and regulatory requirement.
[0064] In another embodiment, the method (400) includes locking the at least one child's access to the generative artificial intelligence governance platform upon identifying the presence of the risk.
[0065] In yet another embodiment, the method (400) includes receiving human feedback from a human in the loop who monitors the interaction between the child and the generative AI system. Based on the parental configuration, such cases are forwarded to seek feedback either from the parents or from an externally assigned entity, for unknown or pre-defined risks of certain types, or where the system's confidence in the risk classification is low.
[0066] In yet another embodiment, the method (400) includes storing the prompts entered by the at least one child, the outcome of the prompts and the one or more risks associated with the prompts and responses along with recommended mitigation.
[0067] Various embodiments of the present disclosure provide a system for parental control in a generative artificial intelligence governance platform. The parental requirement capture module (70) allows the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services, ensuring that accurate and effective parental control is established. Further, the parental requirement capture module (70) allows the parent to modify the boundaries, keeping them up to date as the child grows. Further, the context based query assessment module (75) interacts with the child through one or more subsequent questions, thereby ensuring that the prompt is precise. The query treatment module (80) identifies and mitigates one or more risks present in the prompt, thereby promoting responsible content creation and distribution that prioritizes the child's well-being. The child prompt engineering module (85) monitors the query and reviews the response from the downstream AI module (95) for any predefined risk or risks defined by the parental requirement. Further, the response is blocked or modified based on the policy defined, thereby preventing inappropriate content from reaching the child.
[0068] Further, the system provides better parental control and frictionless use of generative AI platforms by children for learning and improving their creativity.
[0069] The present invention discloses neural network with attention modules tailored to control prompts and information flow to ensure children's safety by capturing the requirements of parents and regulators and implementing them in the shared data, information flow and prompts for responsible AI deployment and usage. The disclosed neural network with attention mechanism based AI system for children's safety while using generative AI acts as a key decision making component in an AI proxy or firewall between children and the downstream generative AI models and data exchanges, governing the bidirectional information flow including prompts, responses and information exchange, thus providing a safe, secure and responsible AI experience and information exchange for children.
[0070] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
[0071] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0072] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
CLAIMS:
1. A system (10) for parental control in a generative artificial intelligence governance platform comprising:
at least one processor (20) in communication with a client processor (30); and
at least one memory (40) comprising a set of program instructions in the form of a processing subsystem (50), configured to be executed by the at least one processor (20), wherein the processing subsystem (50) is hosted on a server (55) and configured to execute on a network to control bidirectional communications among a plurality of modules comprising:
a registration module (60) configured to receive credential information from a parent to register at least one child in the generative artificial intelligence governance platform;
a parental consent module (65) operatively coupled to the registration module (60) wherein the parental consent module (65) is configured to receive authentication information based on the registration of the at least one child for an intended action;
a parental requirement capture module (70) operatively coupled to the parental consent module (65) wherein the parental requirement capture module (70) is configured to allow the parent to specify a plurality of prerequisite conditions, automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services wherein the plurality of prerequisite conditions are subjected to modification based on the age of the child over time;
a context based query assessment module (75) operatively coupled to the parental requirement capture module (70) wherein the context based query assessment module (75) is a Neural Network with attention based artificial intelligence module pre-trained, fine-tuned and configured to understand the context of a prompt, compare it against the configured parental requirement and quantify risk by interacting with one or more subsequent questions;
a query treatment module (80) operatively coupled to the context based query assessment module (75) wherein the query treatment module (80) is a Neural Network with attention based artificial intelligence module pre-trained, fine-tuned and configured to:
identify one or more risks associated with the prompt; and
mitigate the one or more risks from the at least one child perspective based on a context of the intended action and parental requirement defined;
a child prompt engineering module (85) operatively coupled to the query treatment module (80) wherein the child prompt engineering module (85) is a Neural Network with attention based artificial intelligence module pre-trained, fine-tuned and configured to monitor the query and review the response from a downstream artificial intelligence module (95) for any predefined risk or risks defined by parental requirement, modify or block the response based on the policy defined; and
a parental alert module (90) operatively coupled to the child prompt engineering module (85) wherein the parental alert module (90) is configured to alert the parent in the presence of the risk thereby ensuring parental control in the generative artificial intelligence governance platform.
2. The system (10) as claimed in claim 1, wherein the parental consent module (65) is configured to receive and process verifiable consent which are cryptographic signatures of parents or guardians or teens approved for consenting, for a foundational AI model based service, acceptable levels of data and model usage, along with applicable regulatory requirement.
3. The system (10) as claimed in claim 1, wherein the parental requirement capture module (70) is configured to add or remove or update applicable regulatory requirements which may include various state, national or regional regulatory requirements through drag and drop action of regulatory requirement modification.
4. The system (10) as claimed in claim 1, wherein the context based query assessment module (75) is configured to identify potentially harmful prompts or responses not suitable for consumption by children or restricted by parents, including harmful words and Child Sexual Abuse Material content, which may be identified based on assessment of various types of prompts including text, images, audio and video using fine-tuned foundational AI module.
5. The system (10) as claimed in claim 1, wherein the parental alert module (90) is configured to notify all the identified risks, mitigations and a periodic regulatory report as per the configuration and regulatory requirement.
6. The system (10) as claimed in claim 1, wherein the query treatment module (80) is configured to identify and mitigate various kinds of risks including, Child Sexual Abuse Material risk, financial risk, privacy risk, security risk, health risk and other configured risks as per regulatory requirements.
7. The system (10) as claimed in claim 1, wherein the parental alert module (90) is configured to:
lock the at least one child to access the generative artificial intelligence governance platform upon identifying the presence of the risk; and
notify the at least one child with regard to the alert directed to the parent.
8. The system (10) as claimed in claim 1, wherein the query treatment module (80) is configured to receive human feedback from the human in the loop, who is monitoring interaction between the child and the generative AI system, based on parental configuration, forwarded to seek feedback either from parents or an external assigned entity, for unknown or pre-defined risks of certain types or of lower confidence of risk classification by the system.
9. The system (10) as claimed in claim 1, comprising a database module (100) operatively coupled to the parental alert module (90) wherein the database module (100) is configured to store the prompts entered by the at least one child, the outcome of the prompts and the one or more risks associated with the prompts and responses along with recommended mitigation.
10. The system (10) as claimed in claim 1, comprising a storage module (105) operatively coupled to the context based query assessment module (75) wherein the storage module (105) is configured to store the history of prompts and responses along with identified risk.
11. A method (400) for parental control in a generative artificial intelligence governance platform comprising:
receiving, by a registration module of a processing subsystem, credential information from a parent to register at least one child in the generative artificial intelligence governance platform; (405)
receiving, by a parental consent module, authentication information based on the registration of the at least one child for an intended action; (410)
allowing, by a parental requirement capture module, the parent to specify a plurality of prerequisite conditions and automated child harm related risk identification for the intended use, corresponding regulatory requirements and acceptable automated risk mitigation and alert services wherein the plurality of prerequisite conditions are subjected to modification based on the age of the child over time; (415)
understanding, by a context based query assessment module, the context of a prompt, comparing the context against the configured parental requirement and quantifying a risk by interacting with one or more subsequent questions; (420)
identifying, by a query treatment module, presence of one or more risks associated with the prompt; (425)
mitigating, by the query treatment module, the one or more risks from the child perspective based on a context of the intended action and parental requirement defined; (430)
monitoring, by a child prompt engineering module, the query and review the response from the downstream AI module for any predefined risk or risks defined by parental requirement, modify or block the response based on the policy defined; (435) and
alerting, by a parental alert module, the parent in the presence of the risk thereby ensuring parental control in the generative artificial intelligence governance platform. (440)
Dated this 08th day of April, 2024
Signature
Jinsu Abraham
Patent Agent (IN/PA3267)
Agent for the Applicant