Abstract: A system (100) and method (300) for generating hyper-personalized responses are disclosed. The system (100) comprises a query receiving interface (108) configured to receive a query from an input unit (104) of a client device (102) and a processing unit (112). The processing unit (112) analyzes the query attributes using a gating agent (114) and determines a routing mechanism to select parallel processing using one or more router agents (116a-116m). Responses are generated using one or more functional agents (118a-118n), including a social action agent (200), an analytics agent (202), a creative agent (206), a performance agent (208), a query agent (210), a validation agent (216), a performance action agent (204), a search engine optimization agent (214), or a combination thereof. A linking agent (120) orchestrates the generated responses, and a final response is generated.
DESC:BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to an AI-based computational engine capable of delivering highly personalized responses, prescriptions, and executions, acting as Human Domain Expert Agents by integrating specialized expert models and using AI.
Description of Related Art
[002] In the digital age, businesses and individuals heavily rely on online platforms for marketing, advertising, and audience engagement. The ability to generate personalized responses and targeted campaigns is critical to ensuring effective communication and maximizing engagement. However, existing systems for digital marketing, advertisement placement, and campaign optimization lack the intelligence to deliver hyper-personalized responses efficiently.
[003] Traditional advertisement and campaigning methods rely on predefined templates, rule-based targeting, and basic audience segmentation. These methods often fail to capture real-time user intent, adapt to dynamic user interactions, or optimize response generation based on contextual factors. Moreover, existing solutions do not integrate multiple functional components, such as social engagement, analytics-driven insights, performance evaluation, and search engine optimization, into a single, cohesive system. Current systems also struggle with efficiently routing queries to the most relevant processing agents, leading to inefficient campaign execution and suboptimal advertisement placement. Without an intelligent mechanism to analyze user queries, determine the best course of action, and orchestrate multi-agent processing, businesses and advertisers face limitations in delivering personalized and contextually relevant responses.
[004] There is currently no comprehensive solution that seamlessly integrates multi-agent processing, reinforcement learning-based feedback mechanisms, and dynamic campaign execution to improve advertisement and marketing efficiency. Accordingly, there is a lack of a system and method that can intelligently analyze queries, route them through parallel processing, generate hyper-personalized responses, and continuously learn from user interactions to optimize future campaigns. There is thus a need for an improved and advanced AI-based computational engine that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[005] Embodiments in accordance with the present invention provide a system for delivering hyper-personalized responses, comprising a query receiving interface configured to receive a query from an input unit of a client device; a processing unit installed on a Docker environment coupled to the query receiving interface, and configured to analyze attributes of the received query using a gating agent; determine a routing mechanism to route the query based on the analyzed attributes, wherein the routing mechanism is configured to select parallel processings on the query using one or more router agents; generate responses using one or more functional agents based on the selected parallel processings on the query using the one or more router agents, wherein the one or more functional agents are selected from a social action agent, an analytics agent, a creative agent, a performance agent, a query agent, a validation agent, a performance action agent, a search engine optimization agent, or a combination thereof; orchestrate the generated responses using a linking agent; generate a final response based on the orchestrated responses; and continuously learn and improve future responses based on user feedback or interactions upon generation of the final response for the received query.
[006] Embodiments in accordance with the present invention provide a method for delivering hyper-personalized responses, comprising receiving a query from an input unit using a query receiving interface; analyzing attributes of the received query using a gating agent; determining a routing mechanism to route the query based on the analyzed attributes, wherein the routing mechanism is configured to select parallel processings on the query using one or more router agents; generating responses using one or more functional agents based on the selected parallel processings on the query using the one or more router agents, wherein the one or more functional agents are selected from a social action agent, an analytics agent, a creative agent, a performance agent, a query agent, a validation agent, a performance action agent, a search engine optimization agent, or a combination thereof; orchestrating the generated responses using a linking agent; and generating a final response based on the orchestrated responses.
[007] Embodiments in accordance with the present application may further provide a system leveraging the integration of multiple functional agents that work in parallel like Human Domain Expert Agents to generate highly contextual and hyper-personalized responses, thereby improving the efficiency and accuracy of response generation.
[008] Embodiments in accordance with the present application may further provide a system leveraging the use of reinforcement learning and user feedback to dynamically refine query routing, agent selection, and response quality for continuous improvement in system performance. Embodiments in accordance with the present application may further provide a system leveraging an intelligent gating mechanism to determine whether a query requires single-agent processing or multi-agent execution, thereby optimizing resource allocation and reducing processing overhead. Embodiments in accordance with the present application may further provide a system leveraging an orchestration engine that ranks, summarizes, and optimizes generated responses for coherent and high-quality final outputs.
[009] Embodiments in accordance with the present application may further provide a system leveraging real-time data analytics and performance tracking by incorporating Human Domain Expert Agents to enhance search engine optimization, advertisement targeting, and content engagement strategies. Embodiments in accordance with the present application may further provide a system leveraging automated performance-based actions by Human Domain Expert Agents, such as modifying advertisements, adjusting marketing bids, and optimizing online presence, to enhance digital marketing efficiency and effectiveness.
[0010] As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0012] FIG. 1 illustrates a system for generating hyper-personalized responses, according to an embodiment of the present invention;
[0013] FIG. 2 illustrates a Docker environment, according to an embodiment of the present invention;
[0014] FIG. 3 illustrates a flowchart of a method for generating hyper-personalized responses, according to an embodiment of the present invention; and
[0015] FIG. 4 illustrates a method for generating hyper-personalized responses by incorporating user feedback. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0016] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto.
[0017] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like. As used herein, the singular forms "a", "an", and "the" designate both the singular and the plural, unless expressly stated to designate the singular only.
[0018] Embodiments of the present invention provide a system 100 for generating hyper-personalized responses, prescriptions, and executions, acting as Human Domain Expert Agents by integrating specialized expert models and using Artificial Intelligence (AI). The system 100 may be configured to generate hyper-personalized responses based on a query received from an input unit 104 of a client device 102. The system 100 may be configured to execute human-like actions such as managing campaigns, posting automatically on social media, generating targeted advertisements, optimizing ad placements, performing A/B testing, dynamically adjusting content strategies, analyzing audience sentiment, predicting customer behavior, crafting personalized email campaigns, scheduling and automating email sequences, generating and optimizing landing pages, creating AI-generated video content, personalizing chat interactions, managing chatbot responses, auto-generating blog posts and articles, designing marketing visuals and infographics, running influencer marketing outreach, tracking campaign performance in real-time, adjusting budget allocations dynamically, automating lead generation workflows, segmenting audiences for targeted messaging, refining ad creatives based on engagement data, conducting automated competitive analysis, optimizing SEO strategies, automating press release generation, personalizing push notifications, managing affiliate marketing programs, automating loyalty reward distributions, generating multilingual campaign content, predicting viral content potential, integrating with CRM systems for enhanced targeting, detecting fraud and bot activity in digital ads, and executing sentiment-driven content modifications—based on queries received from the input unit 104 of the client device 102, without any human intervention.
[0019] According to an embodiment of the present invention, the client device 102 may be, but not limited to, a smartphone, a tablet, a laptop, a desktop computer, a smart speaker, a wearable device, an in-vehicle infotainment system, or any other computing device capable of transmitting queries and receiving responses over a network. The input unit 104 may be configured to allow a user to enter a query using various input modalities, including but not limited to, text input, voice commands, touch-based interactions, or gesture-based inputs. For example, a user may provide a query through the input unit 104 of the client device 102, such as "Create an advertisement for a new fitness tracker targeting young professionals."
[0020] In an embodiment of the present invention, the user may input the query into the system 100 through a user interface (UI) (not shown) displayed on the client device 102. The UI may be configured to provide an input field through which the user may enter text-based queries, select predefined query templates, or upload supporting files such as images or documents. In another embodiment, the system 100 may be configured to accept voice-based queries, wherein the input unit 104 of the client device 102 may include a microphone (not shown). The system 100 may further comprise a speech recognition module that may be configured to convert the voice input into text, which may then be processed similarly to text-based queries. In yet another embodiment, the system 100 may be configured to receive queries via an application programming interface (API) that allows third-party applications or platforms to communicate with the system 100. For example, a marketing automation tool may be configured to send advertisement-related queries to the system 100 for enabling seamless integration with existing workflows. In some embodiments, the system 100 may be configured to support multimodal input, wherein a user may combine different input methods, such as providing a text description along with an image reference or a sample video.
[0021] The system 100 may include a query receiving interface 108, and a processing unit 112. The processing unit 112 may be installed on a Docker environment 110. The system 100 may further include non-limiting components such as a gating agent 114, one or more router agents 116a-116m, one or more functional agents 118a-118n, a linking agent 120, and third-party platforms 122.
[0022] In an embodiment of the present invention, the query receiving interface 108 may be a software-based component integrated within the system 100 and may be configured to facilitate communication between the client device 102 and the processing unit 112. The query receiving interface 108 may be implemented as a web-based portal, a mobile application, or an embedded module within an enterprise system. The query receiving interface 108 may be hosted on cloud-based servers or deployed on dedicated hardware, such as an edge computing device, to ensure low-latency data processing and secure handling of user queries.
[0023] The query receiving interface 108 may be configured to receive queries from the input unit 104 of the client device 102 in various formats, including but not limited to text input, voice input, structured data, or multimedia content. The query receiving interface 108 may also support real-time query validation, and may analyze the received query for completeness, formatting errors, or missing information before forwarding it to the processing unit 112. The query receiving interface 108 may utilize a dedicated hardware accelerator, such as a digital signal processor (DSP) or graphics processing unit (GPU), to enhance processing speed, particularly for voice and multimedia-based queries.
[0024] In another embodiment of the present invention, the query receiving interface 108 may be configured to preprocess the received query by extracting key attributes, such as keywords, intent, and context, using natural language processing (NLP) techniques. This preprocessing may enable efficient routing of the query by the one or more router agents 116a-116m. The query receiving interface 108 may be deployed on high-performance computing infrastructure to ensure scalable query handling, particularly in scenarios in which large volumes of queries are processed simultaneously. Upon receiving the query from the input unit 104, the query receiving interface 108 may be configured to generate metadata associated with the query, including timestamp, user identification, query category, and priority level. This metadata may be utilized by the processing unit 112 to optimize response generation and ensure efficient handling of multiple concurrent queries. The query receiving interface 108 may also incorporate a network interface controller (NIC) to support high-speed data transmission between the client device 102 and the processing unit 112, ensuring minimal latency in query reception and response processing.
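By way of a non-limiting illustration, query metadata of the kind described above (timestamp, user identification, query category, and priority level) might be assembled as in the following sketch. The function name, field names, and the keyword-based categorization heuristic are assumptions introduced for illustration; an actual implementation would use the NLP-based attribute extraction described above.

```python
import time
import uuid

def build_query_metadata(query: str, user_id: str, priority: str = "normal") -> dict:
    """Attach routing metadata to an incoming query (illustrative sketch).

    The field names (timestamp, user_id, category, priority) mirror the
    metadata attributes described in the text; the categorization rule
    here is a hypothetical keyword heuristic standing in for NLP intent
    analysis.
    """
    category = "advertising" if any(
        k in query.lower() for k in ("ad", "advertisement", "campaign")
    ) else "general"
    return {
        "query_id": str(uuid.uuid4()),   # unique identifier per query
        "timestamp": time.time(),        # reception time
        "user_id": user_id,              # user identification
        "category": category,            # query category
        "priority": priority,            # priority level
    }
```

The processing unit 112 could then use such a record to prioritize and route concurrent queries.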
[0025] In some embodiments of the present invention, the query receiving interface 108 may be configured to interact with external systems via an application programming interface (API), allowing integration with third-party applications, customer relationship management (CRM) tools, or digital marketing platforms. Such integration may enable automated query submission and streamline response workflows. The query receiving interface 108 may further include an embedded security module (not shown), such as a hardware security module (HSM) or trusted platform module (TPM), to ensure encrypted data transmission and protect user queries from unauthorized access or tampering. In an embodiment of the present invention, the Docker environment 110 may be a containerized platform configured to provide an isolated and scalable execution environment for the system 100. The Docker environment 110 may enable efficient deployment, management, and execution of software components within lightweight containers.
[0026] In an embodiment of the present invention, the Docker environment 110 may be hosted on cloud servers, on-premises data centers, or edge computing devices, depending on system requirements. The Docker environment 110 may be configured to facilitate resource allocation for different microservices within the system 100 to optimize performance for query processing, response generation, and data handling. The Docker environment 110 may further include a container orchestration platform (not shown), for example Kubernetes, to manage multiple containers dynamically and ensure high availability and fault tolerance. The Docker environment 110 may be integrated with a virtual network interface to enable secure communication between different software components operating within separate containers.
[0027] In another embodiment of the present invention, the Docker environment 110 may support automated scaling mechanisms, wherein additional containers may be instantiated based on system load to maintain processing efficiency. The Docker environment 110 may also include built-in monitoring and logging tools to track system performance and detect potential failures. The Docker environment 110 may further provide compatibility with hardware accelerators such as GPUs or tensor processing units (TPUs) for enhanced computational efficiency, particularly in machine learning-based applications.
[0028] In an embodiment of the present invention, the processing unit 112 may be a high-performance computing module configured to execute various operations associated with query analysis, data processing, and response generation. The processing unit 112 may include one or more central processing units (CPUs), graphics processing units (GPUs), or tensor processing units (TPUs) to support high-speed data computation and inference tasks. The processing unit 112 may be configured to analyze incoming queries received via the query receiving interface 108 and execute appropriate algorithms for natural language processing (NLP), machine learning-based recommendation, or rule-based logic evaluation. The processing unit 112 may further be configured to access and retrieve relevant data from a database 126, process the retrieved information, and generate a hyper-personalized response. In another embodiment, the processing unit 112 may be configured to distribute computational tasks across multiple processing cores or nodes to enhance processing efficiency. The processing unit 112 may leverage parallel processing techniques to handle multiple queries concurrently. Additionally, the processing unit 112 may be integrated with dedicated memory modules, such as dynamic random-access memory (DRAM) or non-volatile memory express (NVMe) storage, to facilitate rapid data access and retrieval.
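As a non-limiting sketch of the concurrent query handling attributed to the processing unit 112, the following example distributes queries across a thread pool; the worker function is a hypothetical stand-in for the actual NLP or model-inference workload, and the pool size is an arbitrary assumption.

```python
from concurrent.futures import ThreadPoolExecutor

def process_query(query: str) -> str:
    # Stand-in for NLP analysis / model inference on a single query.
    return f"response:{query}"

def process_concurrently(queries: list[str], max_workers: int = 4) -> list[str]:
    # Handle multiple queries concurrently, as the processing unit 112
    # might when leveraging parallel processing techniques.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_query, queries))
```

Results are returned in submission order, so responses can be matched back to their originating client devices.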
[0029] The processing unit 112 may further be configured to apply machine learning models trained on historical data to enhance the accuracy and relevance of generated responses. The processing unit 112 may utilize deep learning frameworks such as TensorFlow, PyTorch, or ONNX for advanced predictive modeling. The processing unit 112 may also include built-in security features, such as secure enclave technology or hardware-based encryption modules, to ensure data integrity and protect user queries from unauthorized access. In some embodiments, the processing unit 112 may be connected to an external data analytics engine to derive insights from user interactions, enabling continuous system improvement. The processing unit 112 may further support integration with cloud-based services for dynamic workload management and real-time processing of large-scale data queries.
[0030] The gating agent 114 may be configured to act as an access control mechanism, regulating the flow of queries into the system 100. The gating agent 114 may perform authentication and authorization checks before allowing queries to be processed by downstream components. In some embodiments, the gating agent 114 may implement security protocols such as OAuth, token-based authentication, or biometric verification to prevent unauthorized access.
[0031] The one or more router agents 116a-116m may be configured to dynamically distribute queries among available functional agents 118a-118n based on predefined routing logic. Each of the router agents 116a-116m may evaluate incoming queries and determine an optimal one from the functional agents 118a-118n for processing, load balancing and minimizing response time. The router agents 116a-116m may further be configured to implement failover mechanisms, redirecting queries to alternative functional agents in the event of a failure or overload.
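The load-balancing and failover behavior described above can be sketched as follows; the agent record shape and the least-loaded selection rule are illustrative assumptions, not the disclosed routing logic, which may weigh query attributes as well.

```python
def route_query(query: str, agents: list[dict]) -> str:
    """Pick the least-loaded healthy agent; skip failed agents (sketch).

    Each agent record ({'name', 'load', 'healthy'}) is an assumed shape.
    Failover is modeled by filtering out unhealthy agents before selection.
    """
    healthy = [a for a in agents if a["healthy"]]
    if not healthy:
        raise RuntimeError("no functional agent available")
    # Load balancing: choose the agent with the smallest current load.
    best = min(healthy, key=lambda a: a["load"])
    best["load"] += 1  # account for the newly assigned query
    return best["name"]
```

Here an overloaded or failed agent is simply bypassed, mirroring the failover redirection described in the text.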
[0032] The one or more functional agents 118a-118n may be configured to perform domain-specific processing of the received queries. The functional agents 118a-118n may be Human Domain Expert Agents capable of integrating specialized expert models and AI for processing of the received queries. The functional agents 118a-118n may include action agents and analysis agents. The action agents may be configured to execute actions, while the analysis agents may be configured to analyze data and assist in decision making. These functional agents 118a-118n may be controlled by the one or more router agents 116a-116m, according to an embodiment of the present invention. Each of the functional agents 118a-118n may be specialized for handling specific query types, such as text analysis, image recognition, predictive modeling, or sentiment analysis. The functional agents 118a-118n may leverage artificial intelligence (AI) algorithms, natural language processing (NLP) models, and machine learning frameworks to generate relevant and contextualized responses.
[0033] The linking agent 120 may be configured to facilitate seamless integration between the system 100 and external resources. The linking agent 120 may establish secure connections with third-party platforms 122, enabling data retrieval, API-based interactions, or external content aggregation. In some embodiments, the linking agent 120 may support real-time synchronization with cloud-based services, external databases, or enterprise resource planning (ERP) systems.
[0034] The third-party platforms 122 may include external service providers, data repositories, or software-as-a-service (SaaS) applications that the system 100 may communicate with for enhanced functionality. The third-party platforms 122 may include social media platforms, advertising networks, analytics tools, or financial transaction gateways. The system 100 may utilize APIs, webhooks, or middleware interfaces to exchange data with third-party platforms 122 securely.
[0035] The graphical user interface (GUI) 124 may be configured to provide an interactive platform for users to input queries, view generated responses, and customize system settings. The GUI 124 may be designed with a user-friendly layout, supporting multiple interaction modes such as text input, voice commands, or gesture-based controls. In some embodiments, the GUI 124 may include dashboards, analytics visualization, and real-time feedback mechanisms to enhance user engagement.
[0036] The database 126 may be configured to store and manage structured and unstructured data used by the system 100. The database 126 may include relational database management systems (RDBMS), NoSQL databases, or distributed storage architectures. It may store query logs, response history, machine learning models, user preferences, and metadata required for processing. The database 126 may support indexing and caching mechanisms to optimize query retrieval speeds.
[0037] The communication network 128 may be configured to facilitate data exchange between various components of the system 100. The communication network 128 may include wired or wireless networks, such as the internet, local area networks (LAN), wide area networks (WAN), or 5G-enabled mobile networks. The communication network 128 may implement encryption protocols, secure socket layer (SSL) certificates, or virtual private network (VPN) tunneling to ensure secure transmission of sensitive data.
[0038] In some embodiments, the system 100 may implement a distributed architecture in which multiple instances of the functional agents 118a-118n may be operated in parallel to handle large-scale query processing efficiently. The integration of a linking agent 120, third-party platforms 122, and the database 126 may further enhance the adaptability and scalability of the system 100.
[0039] For example, the system 100 may be configured to generate responses for large-scale advertisement campaign management. In this exemplary scenario, multiple users may provide queries through different client devices 102 to generate targeted advertisements for various demographics. The system 100 may be designed to allow the gating agent 114 to authenticate each request efficiently. The system 100 may be configured to facilitate seamless data exchange through the communication network 128 between the router agents 116a-116m and the functional agents 118a-118n by processing the queries in real-time. The system 100 may be further configured to store the generated advertisements in the database 126, which may be accessible via the GUI 124 for user review and modification before deployment across multiple channels.
[0040] FIG. 2 illustrates the Docker environment 110, according to an embodiment of the present invention. The Docker environment 110 may be configured to generate the hyper-personalized responses based on specific user queries or inputs. The gating agent 114 may be configured to determine whether the received query requires single-agent processing or multi-agent parallel execution based on predefined heuristics or machine learning models. The gating agent 114 may analyze the metadata of the received query, assess computational requirements, and/or allocate processing resources accordingly.
[0041] In an embodiment of the present invention, the gating agent 114 may be configured to analyze the attributes of a received query and determine a routing mechanism to route the query accordingly. The routing mechanism may be configured to select parallel processing for the query using one or more router agents 116a-116m. The gating agent 114 may ensure that each query is optimally assigned for processing by evaluating factors such as query complexity, user intent, and system load conditions.
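A minimal sketch of the gating decision described above, assuming a hypothetical word-count heuristic and load threshold in place of the predefined heuristics or machine learning models mentioned in the disclosure:

```python
def gate(query: str, system_load: float) -> str:
    """Decide single-agent vs. multi-agent execution (illustrative heuristic).

    Query complexity is approximated here by word count and the presence of
    a conjunction; the 0.8 load threshold is an arbitrary assumption
    representing the system load conditions the gating agent 114 evaluates.
    """
    complex_query = len(query.split()) > 10 or " and " in query
    if complex_query and system_load < 0.8:
        return "multi-agent"
    return "single-agent"
```

Under this sketch, a compound request fans out to parallel router agents only when spare capacity exists; otherwise it falls back to single-agent processing to reduce overhead.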
[0042] The one or more router agents 116a-116m may be configured to dynamically allocate queries to the available functional agents 118a-118n based on predefined routing logic. Each router agent 116 may evaluate incoming queries and determine the optimal functional agent for processing while ensuring load balancing and minimizing response time. The router agents 116a-116m may further be configured to implement failover mechanisms, redirecting queries to alternative functional agents 118a-118n in the event of a failure or system overload.
[0043] In an embodiment of the present invention, the system 100 may include multiple parallel router agents 116a-116m operating concurrently to manage query distribution efficiently. These parallel router agents 116a-116m may process queries independently and/or collaboratively by utilizing distributed computing techniques to optimize system performance. The parallel router agents 116a-116m may execute the routing mechanism by selecting parallel processing pathways and generating responses using one or more functional agents 118a-118n.
[0044] The router agents 116a-116m may leverage reinforcement learning-based feedback mechanisms to dynamically adjust routing paths, ensuring efficient processing of high-priority or complex queries. In scenarios where a query requires multiple stages of processing, the parallel router agents 116a-116m may coordinate task segmentation and distribute query components to specialized functional agents 118a-118n for concurrent execution.
[0045] In an embodiment of the present invention, the router agents 116a-116m may be configured to execute the routing mechanism by selecting parallel processing pathways and generating responses using one or more functional agents 118a-118n. The router agents 116a-116m may leverage reinforcement learning-based feedback mechanisms to dynamically adjust routing paths, ensuring efficient processing of high-priority or complex queries.
[0046] The one or more functional agents 118a-118n may be, but not limited to, a social action agent 200, an analytics agent 202, a performance action agent 204, a creative agent 206, a performance agent 208, a query agent 210, a reward engine 212, a search engine optimization agent 214, a validation agent 216, or a combination thereof. The one or more functional agents 118a-118n may be configured to work in parallel or sequentially, depending on the complexity and nature of the received query.
[0047] The social action agent 200 may be configured to execute external interactions, including posting on social media platforms, responding to user engagement, or modifying online content based on a User-ask and obtaining intelligence from the analytics agent 202. The social action agent 200 may be configured to integrate with third-party APIs for real-time audience targeting and campaign optimization. The analytics agent 202 may be configured to analyze received queries and process data using statistical models, machine learning algorithms, or external data sources.
[0048] The analytics agent 202 may perform at least one of descriptive, diagnostic, prescriptive, or predictive analysis to extract insights and trends relevant to the generated response. The performance action agent 204 may be configured to take actions related to advertising and marketing strategies, such as creating, modifying, or bidding advertisements on third-party platforms 122. The performance action agent 204 may utilize campaign performance metrics to adjust advertising parameters dynamically.
[0049] The creative agent 206 may be configured to generate text, images, videos, or multimedia content based on user queries. The creative agent 206 may employ natural language processing (NLP), computer vision models, and generative AI techniques to develop high-quality creative assets tailored to specific audiences. The performance agent 208 may be configured to evaluate and optimize the generated responses for effectiveness and impact. The performance agent 208 may track key performance indicators (KPIs), such as engagement rates, conversion metrics, and user interactions, to improve response quality. The query agent 210 may be configured to interpret and classify incoming queries, extract relevant keywords, and determine the appropriate processing pathway. The query agent 210 may utilize semantic analysis, knowledge graphs, and intent recognition models to enhance query understanding.
[0050] The reward engine 212 may be configured to evaluate the performance of the router agents 116a-116m based on user feedback, query resolution efficiency, and system-defined performance metrics. The reward engine 212 may assign rewards to router agents based on their effectiveness in routing queries to the most suitable functional agents 118a-118n, optimizing processing time, and ensuring accurate responses. The reward engine 212 may categorize router agents as best-performing or underperforming based on accumulated rewards, dynamically adjusting their routing priorities. Poorly performing router agents may receive corrective adjustments, including revised routing logic, retraining, or reduced task assignments, ensuring continuous optimization of query processing. By leveraging reinforcement mechanisms, the reward engine 212 may enhance the adaptability and efficiency of the system 100, promoting intelligent decision-making and improving overall system performance.
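The reward-accumulation and categorization behavior of the reward engine can be sketched as follows. The class, its threshold value, and the agent identifiers are illustrative assumptions, not the disclosed implementation:

```python
# Hypothetical reward ledger for router agents; the threshold separating
# best-performing from underperforming agents is an illustrative default.
class RewardEngine:
    def __init__(self, threshold=0.0):
        self.rewards = {}
        self.threshold = threshold

    def record(self, router_id, reward):
        # Accumulate rewards from feedback and system-defined metrics.
        self.rewards[router_id] = self.rewards.get(router_id, 0.0) + reward

    def categorize(self, router_id):
        total = self.rewards.get(router_id, 0.0)
        return "best-performing" if total >= self.threshold else "underperforming"

    def routing_priority(self, router_ids):
        # Higher accumulated reward -> earlier position in the routing order.
        return sorted(router_ids, key=lambda r: self.rewards.get(r, 0.0), reverse=True)

engine = RewardEngine()
engine.record("router_a", 1.5)
engine.record("router_b", -0.5)
order = engine.routing_priority(["router_a", "router_b"])
```

Underperforming agents identified this way could then be flagged for the corrective adjustments the text describes, such as revised routing logic or reduced task assignments.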
[0051] The search engine optimization (SEO) agent 214 may be configured to enhance the visibility and ranking of generated responses on search engines. The SEO agent 214 may analyze keyword relevance, optimize metadata, and recommend structural improvements to improve search engine discoverability. The validation agent 216 may be configured to validate the generated responses from the functional agents 118a-118n for accuracy, coherence, and completeness. In an embodiment of the present invention, the validation agent 216 may be configured to request reprocessing from the respective functional agents if any response is incomplete or inconsistent. The validation agent 216 may apply rule-based validation techniques, machine learning-based fact-checking, and consistency checks before finalizing responses.
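The validate-and-reprocess loop attributed to the validation agent can be sketched as a simple retry pattern. The completeness check here is a placeholder standing in for the rule-based validation, fact-checking, and consistency checks described above:

```python
# Hypothetical completeness check standing in for rule-based validation.
def validate(response):
    return bool(response) and response.endswith(".")

def process_with_validation(agent, query, max_retries=2):
    # Run the functional agent, then request reprocessing on failure.
    response = agent(query)
    for _ in range(max_retries):
        if validate(response):
            return response
        response = agent(query)  # request reprocessing from the agent
    return response if validate(response) else None

# Simulated flaky agent: first output is incomplete, second passes validation.
flaky_outputs = iter(["incomplete", "A complete answer."])
agent = lambda q: next(flaky_outputs)
final = process_with_validation(agent, "optimize meta tags")
```

Bounding the retries keeps an inconsistent agent from stalling the pipeline while still giving it a chance to correct its output.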
[0052] The orchestration engine 218 may be configured to manage the workflow and coordination of the multiple agents within the system 100. The orchestration engine may dynamically allocate computational resources, prioritize tasks, and optimize execution sequences for efficient query processing. The orchestration engine may utilize the LLM for generating the final response based on the generated responses by the one or more functional agents. The generated final response may be textual, or may carry out a technical change in a database, a search engine, a printer, a display unit, a cloud storage system, an API, a network configuration, a robotic process automation system, an IoT device, a blockchain ledger, a virtual assistant, a customer relationship management (CRM) system, an enterprise resource planning (ERP) system, a cybersecurity module, an autonomous vehicle control system, an edge computing node, a smart contract execution system, an AI-based recommendation engine, an augmented reality interface, a digital twin system, a predictive maintenance system, or a combination thereof.
[0053] The technical change executed by the system 100 may be applied to solve real-world problems, such as automating marketing campaigns, optimizing advertisement strategies, improving business operations, enhancing customer engagement, increasing search engine rankings, enabling personalized recommendations, streamlining supply chain management, improving cybersecurity measures, optimizing resource allocation in cloud computing, assisting in medical diagnostics, facilitating predictive maintenance in industrial setups, or automating data-driven decision-making in enterprises. The system 100 ensures that these solutions are dynamically tailored based on real-time data inputs, contextual analysis, and adaptive learning, thereby enhancing efficiency and effectiveness across various domains.
[0054] The feedback module 220 may be configured to collect and process user feedback on the generated responses, including user feedback on the final response, and may apply reinforcement learning mechanisms to act on that feedback.
[0055] The feedback module 220 may further be configured to update routing and agent selection mechanisms using reinforcement learning to improve future response quality. In an embodiment, the feedback module 220 may be configured to dynamically adjust query routing by updating the weighting factors associated with the router agents 116a-116m based on past feedback data. The feedback module 220 may analyze response accuracy, coherence, and relevance to determine optimal functional agent 118a-118n assignments for similar future queries. In some embodiments, the feedback module 220 may be configured to communicate with the validation agent 216 to cross-check user feedback against system-generated accuracy metrics. If discrepancies are detected, the feedback module 220 may trigger automated retraining of underlying models within the generative AI tool 222 to enhance contextual understanding and reduce response errors.
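One minimal way to update router weighting factors from feedback is an exponential moving average, sketched below. The update rule and learning rate are illustrative assumptions standing in for the reinforcement-learning mechanism described above:

```python
# Hypothetical feedback-driven update of router weighting factors.
# An exponential moving average stands in for the reinforcement-learning
# update; learning_rate controls how quickly new feedback dominates.
def update_weight(current_weight, feedback_score, learning_rate=0.2):
    return (1 - learning_rate) * current_weight + learning_rate * feedback_score

weights = {"router_a": 0.5, "router_b": 0.5}
feedback = {"router_a": 1.0, "router_b": 0.0}  # 1.0 = positive feedback
weights = {r: update_weight(w, feedback[r]) for r, w in weights.items()}
```

Over repeated queries, routers that consistently receive positive feedback accumulate higher weights and would be preferred for similar future queries.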
[0056] The feedback module 220 may further be configured to store historical user interactions in the database 126, allowing the system 100 to personalize future responses based on past engagements. The feedback module 220 may leverage natural language processing (NLP) techniques to classify user sentiment and intent, enabling the system 100 to dynamically tailor content recommendations and engagement strategies. In an embodiment, the feedback module 220 may be configured to work in conjunction with the orchestration engine 218 to rank, summarize, and optimize generated responses before delivering them to the end user. The feedback module 220 may analyze engagement metrics, such as click-through rates, time spent on responses, and follow-up interactions, to determine the effectiveness of generated content. The feedback module 220 may also implement adaptive learning mechanisms, allowing the system 100 to refine routing logic, functional agent selection, and response structuring over time. By continuously analyzing feedback data, the feedback module 220 may enhance system performance and ensure higher user satisfaction. The generative AI tool 222 may be configured to leverage artificial intelligence models, including transformer-based architectures, to create text-based, visual, or multimedia content. The generative AI tool 222 may generate personalized content in response to user queries while considering context, style, and engagement factors. The generative AI tools may include, but are not limited to, ChatGPT, LLaMA, Gemini, Claude, Mistral, or other transformer-based models capable of generating high-quality content. These tools may be configured to analyze user intent, adapt to various contextual scenarios, and optimize content generation for engagement, coherence, and personalization.
[0057] In an exemplary embodiment of the present invention, the system 100 (as shown in FIG. 1) may improve the search engine optimization of a website by leveraging the search engine optimization agent 214 in coordination with the analytics agent 202, the performance agent 208, the generative AI tool 222, and the router agents 116a-116m. The analytics agent 202 may be configured to analyze website performance metrics, including page load time, bounce rate, and keyword rankings, while identifying optimization areas that may enhance search visibility. The search engine optimization agent 214 may determine underperforming keywords, refine meta tags, and implement structured data markup to improve discoverability. The performance agent 208 may optimize technical SEO factors, such as website speed, mobile responsiveness, and caching strategies, ensuring a seamless user experience.
[0058] The generative AI tool 222 may generate SEO-optimized content, including blogs, FAQs, and landing pages, based on evolving search trends and user intent. The query agent 210 may further assess search queries to align content with specific user needs. The router agents 116a-116m may dynamically allocate tasks to functional agents, enabling parallel processing for efficient execution of SEO enhancements while optimizing resource utilization. The validation agent 216 may be configured to validate the responses and changes made by the system 100 before finalizing and deploying them.
[0059] The validation agent 216 may assess the generated responses from the one or more functional agents 118a-118n for accuracy, coherence, and completeness, ensuring that they align with predefined quality parameters. In the context of search engine optimization, the validation agent 216 may verify whether the updated meta tags, structured data, and content optimizations comply with search engine guidelines and best practices.
[0060] The validation agent 216 may further cross-check the impact of implemented SEO enhancements by analyzing real-time data metrics, such as click-through rates, keyword performance, and page indexing status. If discrepancies or inefficiencies are detected, the validation agent 216 may request reprocessing from the respective functional agents, ensuring that the final output meets performance and compliance standards before execution. The feedback module 220 may collect real-time user engagement data and use reinforcement learning mechanisms to refine SEO strategies, ensuring adaptability to search engine algorithm updates. By integrating validation, adaptive feedback, and intelligent routing, the system 100 may continuously monitor and adjust optimization efforts, thereby enhancing organic traffic, improving search rankings, and increasing conversion rates over time.
[0061] FIG. 3 illustrates a flowchart of a method 300 for generating the hyper-personalized responses, according to an embodiment of the present invention.
[0062] At step 302, the system 100 may be configured to receive a query from the input unit 104 using the query receiving interface 108. The input unit 104 may be a user device, an application interface, or an external system transmitting the query to the system 100 for processing.
[0063] At step 304, the system 100 may be configured to analyze attributes of the received query using the gating agent 114. The gating agent 114 may perform authentication, authorization, and preliminary classification of the query based on its type, complexity, and intent.
[0064] At step 306, the system 100 may be configured to determine a routing mechanism to route the query based on the analyzed attributes. The routing mechanism may be configured to select parallel processing paths for the query using the one or more router agents 116a-116m.
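The parallel processing paths selected at this step can be sketched with a thread pool, where each selected functional agent handles the query concurrently. The function and agent names are hypothetical stand-ins, assumed only for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical parallel dispatch of one query along the routing paths
# selected by the router agents; the lambdas stand in for functional agents.
def route_parallel(query, selected_agents):
    with ThreadPoolExecutor(max_workers=len(selected_agents)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in selected_agents.items()}
        # Collect each agent's response once its path completes.
        return {name: f.result() for name, f in futures.items()}

agents = {
    "social": lambda q: f"social:{q}",
    "seo": lambda q: f"seo:{q}",
}
responses = route_parallel("improve ranking", agents)
```

Collecting the futures into a dictionary keyed by agent name keeps the per-agent responses addressable for the downstream linking step.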
[0065] At step 308, the system 100 may be configured to generate responses using the one or more functional agents 118a-118n based on the selected parallel processing paths determined by the router agents 116a-116m.
[0066] At step 310, the system 100 may be configured to orchestrate the generated responses using the linking agent 120. The linking agent 120 may aggregate, refine, and structure the responses generated by the functional agents 118a-118n, ensuring coherence, logical flow, and completeness of the final output. The linking agent 120 may also eliminate redundant or conflicting information to enhance the quality of the response.
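The aggregation-and-deduplication behavior of the linking step can be sketched as follows. The function and the sample responses are illustrative assumptions, not the disclosed linking logic:

```python
# Hypothetical linking step: aggregate agent responses in order,
# drop exact duplicates, and join them into one structured draft.
def link_responses(responses):
    seen, ordered = set(), []
    for agent, text in responses:
        if text not in seen:  # eliminate redundant information
            seen.add(text)
            ordered.append(f"[{agent}] {text}")
    return "\n".join(ordered)

draft = link_responses([
    ("analytics", "Bounce rate is high."),
    ("seo", "Refine meta tags."),
    ("creative", "Refine meta tags."),  # duplicate, dropped
])
```

A fuller linking agent would also need semantic deduplication and conflict resolution; exact-match filtering is the simplest base case.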
[0067] At step 312, the system 100 may be configured to generate the final response based on the orchestrated responses. The final response may be formatted, structured, and optimized for the intended recipient, ensuring relevance, clarity, and actionable insights. The final response may then be transmitted to the client device 102 for user access. The user may view the final response through a web-based dashboard, a mobile application, an API interface, and/or an integrated third-party system. The feedback module 220 may allow users to rate the final response, provide comments, or request refinements to enable continuous optimization of system outputs. Additionally, the validation agent 216 may perform a final quality check before displaying the response to ensure compliance with predefined accuracy and consistency standards.
[0068] FIG. 4 illustrates a method for generating hyper-personalized responses by incorporating user feedback.
[0069] At step 402, the system 100 may be configured to receive a user query. At step 404, the system 100 may be configured to process the received query using the gating agent 114 to determine the attributes of the query. At step 406, the system 100 may be configured to route the query to the appropriate router agents 116a-116m based on the analyzed attributes. At step 408, the system 100 may be configured to determine whether the query requires analytical processing. At step 410, if analytical processing is required, the system 100 may be configured to process the query using the analytics agent 202. At step 412, if analytical processing is not required, the system 100 may be configured to process the query using action agents, such as the social action agent 200 or the performance action agent 204. At step 414, the system 100 may be configured to link and refine the generated response using the linking agent 120. At step 416, the system 100 may be configured to generate the final response and transmit it to the client device 102 for user access. At step 418, the system 100 may be configured to receive user feedback on the final response and update the reward engine 212 accordingly to enhance future response accuracy and personalization.
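The gate-then-branch flow of these steps can be sketched end to end. The gating check, agent names, and response strings are hypothetical placeholders assumed only to show the control flow:

```python
# Hypothetical sketch of the FIG. 4 flow: gate the query, branch on
# whether analytical processing is needed, then link the responses.
def handle_query(query, needs_analytics):
    attributes = {"text": query, "analytical": needs_analytics}  # gating step
    if attributes["analytical"]:
        # Step 410: route to the analytics path.
        responses = [("analytics", f"insights for '{query}'")]
    else:
        # Step 412: route to an action-agent path instead.
        responses = [("social_action", f"posted about '{query}'")]
    # Step 414: link the responses into one final string.
    return "; ".join(f"{a}: {r}" for a, r in responses)

out = handle_query("summer sale", needs_analytics=True)
```

In the full system the analytics-or-action decision would come from the gating agent's classification rather than a boolean flag, and the feedback at step 418 would feed the reward engine.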
[0070] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

CLAIMS
I Claim:
1. A system (100) for delivering hyper-personalized responses, comprising:
a query receiving interface (108) configured to receive a query from an input unit (104) of a client device (102);
a processing unit (112) installed on a Docker environment (110) coupled to the query receiving interface (108), and configured to:
analyze attributes of the received query using a gating agent (114);
determine a routing mechanism to route the query based on the analyzed attributes, wherein the routing mechanism is configured to select parallel processing paths for the query using one or more router agents (116a-116m);
generate responses using one or more functional agents (118a-118n) based on the parallel processing paths selected by the one or more router agents (116a-116m), wherein the one or more functional agents (118a-118n) are selected from a social action agent (200), an analytics agent (202), a creative agent (206), a performance agent (208), a query agent (210), a validation agent (216), a performance action agent (204), a search engine optimization agent (214), or a combination thereof;
orchestrate the generated responses using a linking agent (120);
generate a final response based on the orchestrated responses; and
continuously learn and improve future responses based on user feedback or interactions upon generation of the final response for the received query.
2. The system (100) as claimed in claim 1, wherein the gating agent (114) is configured to determine whether the received query requires single-agent processing or multi-agent parallel execution based on predefined heuristics or machine learning models.
3. The system (100) as claimed in claim 1, wherein the one or more router agents (116a-116m) are configured to:
allocate queries to multiple functional agents (118a-118n) based on query type and complexity; and
dynamically adjust routing paths using reinforcement learning-based feedback mechanisms.
4. The system (100) as claimed in claim 1, wherein the analytics agent (202) is configured to perform at least one of descriptive, diagnostic, prescriptive, or predictive analysis using statistical models, machine learning algorithms, or external data sources.
5. The system (100) as claimed in claim 1, wherein the social action agent (200) is configured to execute external interactions, including posting on social media platforms, responding to user engagement, or modifying online content, based on a user request and intelligence obtained from the analytics agent (202).
6. The system (100) as claimed in claim 1, wherein the validation agent (216) is configured to validate the generated responses from the one or more functional agents for accuracy, completeness, and coherence; and request reprocessing from the respective functional agents if any response is incomplete or inconsistent.
7. The system (100) as claimed in claim 1, wherein the linking agent (120) is configured to rank, summarize, and optimize the generated responses from an orchestration engine (218).
8. The system (100) as claimed in claim 1, wherein the performance action agent (204) is configured to take actions selected from creating, modifying, or bidding on advertisements on third-party platforms (122).
9. The system (100) as claimed in claim 1, comprising a feedback module (220) configured to:
receive user feedback on the final response; and
update routing and agent selection mechanisms using reinforcement learning to improve future response quality.
10. A method (300) for delivering hyper-personalized responses, comprising:
receiving a query from an input unit (104) using a query receiving interface (108);
analyzing attributes of the received query using a gating agent (114);
determining a routing mechanism to route the query based on the analyzed attributes, wherein the routing mechanism is configured to select parallel processing paths for the query using one or more router agents (116a-116m);
generating responses using one or more functional agents (118a-118n) based on the parallel processing paths selected by the one or more router agents (116a-116m), wherein the one or more functional agents (118a-118n) are selected from a social action agent (200), an analytics agent (202), a creative agent (206), a performance agent (208), a query agent (210), a validation agent (216), a performance action agent (204), a search engine optimization agent (214), or a combination thereof;
orchestrating the generated responses using a linking agent (120); and
generating a final response based on the orchestrated responses.
| # | Name | Date |
|---|---|---|
| 1 | 202421013859-PROVISIONAL SPECIFICATION [26-02-2024(online)].pdf | 2024-02-26 |
| 2 | 202421013859-PROOF OF RIGHT [26-02-2024(online)].pdf | 2024-02-26 |
| 3 | 202421013859-FORM FOR STARTUP [26-02-2024(online)].pdf | 2024-02-26 |
| 4 | 202421013859-FORM FOR SMALL ENTITY(FORM-28) [26-02-2024(online)].pdf | 2024-02-26 |
| 5 | 202421013859-FORM 1 [26-02-2024(online)].pdf | 2024-02-26 |
| 6 | 202421013859-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-02-2024(online)].pdf | 2024-02-26 |
| 7 | 202421013859-EVIDENCE FOR REGISTRATION UNDER SSI [26-02-2024(online)].pdf | 2024-02-26 |
| 8 | 202421013859-DRAWINGS [26-02-2024(online)].pdf | 2024-02-26 |
| 9 | 202421013859-FORM-26 [26-02-2025(online)].pdf | 2025-02-26 |
| 10 | 202421013859-DRAWING [26-02-2025(online)].pdf | 2025-02-26 |
| 11 | 202421013859-COMPLETE SPECIFICATION [26-02-2025(online)].pdf | 2025-02-26 |
| 12 | Abstract.jpg | 2025-04-16 |