Abstract: SYSTEM AND METHOD OF EFFICIENT CONTEXT QUEUE FOR MANAGING COMMUNICATION WORKFLOWS. A system and method for managing communication workflows for clients (600), utilizing a dynamic context queue to optimize intent registration, event management, and task execution in distributed environments; wherein the system (10) comprises a user (100), an input unit (200), a server (300), a processing unit (400), an output unit (500) and a client (600), wherein the processing unit (400) further comprises a context queue module (410), a large language model (LLM) integration unit (420) for natural language processing, a clustering unit for scalability and fault tolerance, and a workflow (430) that involves dynamically registering intents in natural language by servers (300), enabling clients (600) to subscribe to these intents, and managing events to ensure no duplication and timely expiration. The system (10) allows seamless communication between clients (600) and servers (300), automatically adjusting to evolving workflows (430) and enabling efficient collaboration. It leverages generative AI models to structure and refine intents in real time, enhancing scalability, adaptability, and performance in complex, distributed systems.
Description:
FIELD OF INVENTION
The present invention relates to a system and method for managing communication workflows between Clients and servers. More particularly, the present invention relates to a system and method of efficient context queue for managing communication workflows; for dynamically identifying, registering, and managing intents and events in real-time workflows, utilizing LLMs for natural language processing and clustering techniques for scalability and fault tolerance, to facilitate autonomous and efficient operations of Clients in distributed environments.
BACKGROUND
The increasing adoption of generative AI systems in various industries has driven the need for efficient and scalable communication frameworks to manage interactions between Clients and servers. These systems rely on dynamically generated workflows and context-sensitive operations to perform tasks autonomously. However, existing approaches to managing communication and task distribution often depend on static workflows, predefined keywords, or rigid schemas. These limitations hinder scalability, adaptability, and real-time decision-making, particularly in distributed and dynamic environments.
Traditional approaches to managing AI agent workflows lack a systematic method to dynamically register and process intents in natural language. They often fail to address critical challenges such as handling event duplication, prioritizing time-sensitive tasks, and ensuring fault tolerance in clustered environments. Moreover, these systems do not effectively leverage advanced generative AI technologies, such as large language models, to extract, structure, and refine intents in real time.
As AI-driven systems grow more complex, there is a pressing need for robust, context-aware architectures that facilitate seamless collaboration among clients, enable dynamic intent registration, and manage events efficiently while ensuring scalability and reliability across distributed systems.
PRIOR ART
US10706450B1 discloses a system for managing workflows in distributed AI systems through dynamic task allocation and agent interactions. While this system facilitates improved communication between Clients, it lacks a context-aware queue architecture that supports dynamic intent registration and real-time event management, key features of the present invention that enable seamless integration of Clients into agentic workflows.
US20240256598A1 describes a workflow for event-driven workflows in AI systems, focusing on centralized event processing and distribution. Although effective in managing events, this system does not incorporate natural language-driven intent registration or lifecycle management features like deduplication and expiration, areas where the present invention provides a more robust and scalable solution.
US11373045B2 outlines a framework for handling clustered environments in distributed systems, emphasizing load balancing and failover mechanisms. While it enhances reliability, it does not address the dynamic subscription of intents or the role of large language models in refining context-sensitive workflows, innovations introduced by the present invention to elevate adaptability and precision.
The present invention addresses these challenges by introducing an innovative system and method for managing workflows using a dynamic context-aware queue architecture.
DEFINITIONS:
The “system” used hereinafter in this specification refers to the entire framework or architecture within which components like input units, servers, clients, and processing units interact. It is responsible for managing the flow of data and tasks, ensuring that the entire process of receiving input, processing it, and delivering output occurs efficiently.
The “user” used hereinafter in this specification refers to the individual or entity that interacts with the system. The user's actions or requests are the driving force behind the inputs provided to the system. The user might provide commands, tasks, or other types of data for the system to process and respond to.
The expression “input unit” used hereinafter in this specification refers to, but is not limited to, mobile, laptops, computers, PCs, keyboards, mouse, pen drives or drives.
The “server” used hereinafter in this specification is a central component of the system that processes tasks, generates events, and communicates with the context queue. It is responsible for registering intents and providing the necessary resources to execute tasks. Servers may manage resources like memory, computation power, and storage and deliver services to clients.
The “processing unit” used hereinafter in this specification refers to the computational hardware or software responsible for executing tasks and performing computations. It could include CPUs, GPUs, or cloud-based infrastructure. This unit handles the core analysis, decision-making, scoring, and generation of context graphs, based on data and inputs from the user and other components.
The “output unit” used hereinafter in this specification refers to, but is not limited to, an onboard output device, a user interface (UI), a display kit, a local display, a screen, a dashboard, or a visualization platform enabling the user to visualize, observe or analyse any data or scores provided by the system; whereby it is responsible for delivering results, feedback, or responses from the system to the user or other external systems.
The “clients” used hereinafter in this specification refers to entities or AI agents that interact with the system by subscribing to intents in the context queue and processing related events. They perform specific tasks based on these events and may operate autonomously or in coordination with other system components.
The “context queue module” used hereinafter in this specification refers to a dynamic data structure that acts as a central hub for managing and distributing tasks, events, and intents between servers and clients. It prioritizes, stores, and routes data efficiently to ensure that the system can handle requests and tasks in an organized manner, ensuring scalability and fault tolerance.
The “Large Language Models (LLMs)” used hereinafter in this specification refers to generative AI models designed to process, understand, and generate natural language. They are used to extract meaning from text, structure data, and generate refined intents or events based on user inputs or other sources. LLMs form the backbone of tasks involving natural language understanding and processing within the system.
A “workflow” used hereinafter in this specification refers to a method or set of procedures or steps that are carried out within the system to handle tasks, manage communication between components, and ensure efficient operation. This includes processes like registering tasks, interpreting intents, and ensuring the correct flow of data through the context queue, servers, and clients.
OBJECTS OF THE INVENTION:
The primary object of the present invention is to provide a system and method of efficient context queue for managing communication workflows for Clients in distributed environments.
Yet another object of the present invention is to provide a dynamic registration mechanism for intents, allowing servers to register and update tasks or features in natural language without relying on predefined schemas.
Yet another object of the present invention is to enable Clients to subscribe to intents in a natural language format, enhancing flexibility and adaptability in handling diverse workflows and tasks.
Yet another object of the present invention is to manage events within the context queue, ensuring no duplication, handling expiration, and supporting clustering for scalability and fault tolerance.
Further, the object of the present invention is to facilitate seamless integration of large language models (LLMs) to identify and structure intents dynamically, ensuring context-aware processing of tasks in real-time workflows.
SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The invention relates to a system and method for efficiently managing communication workflows between servers and clients using advanced technologies such as dynamic context queues, large language models (LLMs), and clustering mechanisms. The system overcomes limitations of traditional communication frameworks, which rely on static schemas, predefined keywords, or rigid workflows, making them unsuitable for scalable, dynamic, and real-time applications.
The system includes components such as a server module for intent registration, a client module for subscribing to intents, a context queue for event management, an integration module with LLMs for natural language processing, and clustering mechanisms for scalability and fault tolerance.
The system dynamically registers intents in natural language using LLMs, enabling servers to define tasks flexibly. Clients subscribe to these intents, and the context queue ensures efficient delivery of events by managing deduplication, expiration, and prioritization. The system also supports clustered environments to enhance fault tolerance and load distribution, ensuring high availability in distributed operations.
Thus, the invention addresses the challenges of managing real-time, dynamic workflows in distributed AI systems by providing a robust, context-aware communication framework. It facilitates efficient collaboration between servers and clients, enabling scalable, autonomous operations. This approach significantly enhances the adaptability, efficiency, and reliability of AI-driven communication workflows in complex environments.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 shows an overview of the system of the present invention.
FIG. 2 shows a detailed flowchart of the workflow (430) employed by the system of the present invention.
DETAILED DESCRIPTION OF INVENTION:
Before the present invention is described, it is to be understood that this invention is not limited to the methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
To understand the invention clearly, the various components of the system are referred as below:
| No. | Component |
|---|---|
| 10 | System |
| 100 | User |
| 200 | Input unit |
| 300 | Server |
| 400 | Processing unit |
| 500 | Output unit |
| 600 | Clients |
| 410 | Context Queue module |
| 420 | Large Language Model (LLM) integration unit |
| 430 | Workflow |
The present invention is directed to a system and method of efficient context queue for managing communication workflows for Clients (600) in distributed environments using a dynamic context queue, wherein the system (10) comprises a user (100), an input unit (200), a server (300), a processing unit (400), an output unit (500) and a client (600), such that the processing unit (400) further comprises a context queue module (410) and a large language model (LLM) integration unit (420); whereby the said system (10) employs a workflow (430) for managing communications between the servers (300) and clients (600). The workflow (430) comprises the steps of dynamic intent registration, client subscription, event management, clustering for scalability, and client workflow integration. The system (10) operates to streamline client collaboration by dynamically managing events, ensuring no duplication, and facilitating real-time task execution within distributed environments.
According to a preferred embodiment, the server (300) calls the context queue (410) API to register its supported intents, which are extracted through a call to the LLM (420) API. The LLM (420) processes natural language data, allowing the server (300) to register intents without predefined schemas. Once extracted, the intent definitions are stored in the context queue, providing a central repository for dynamic intent management.
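By way of illustration only, the intent-registration interaction described above may be sketched as follows. The class and function names (ContextQueue, extract_intents) are hypothetical, and the call to the LLM (420) API is stubbed with a trivial string split so the sketch is self-contained; an actual embodiment would prompt a generative model instead.

```python
def extract_intents(description: str) -> list[dict]:
    """Stand-in for the LLM (420) API call: in practice this would prompt a
    generative model to turn free-form server text into structured intents.
    Here it is faked with a simple split so the sketch is runnable."""
    return [{"name": part.strip(), "description": part.strip()}
            for part in description.split(";") if part.strip()]

class ContextQueue:
    def __init__(self):
        # Central repository for dynamically registered intent definitions.
        self.intents: dict[str, dict] = {}

    def register_intents(self, server_id: str, description: str) -> list[str]:
        """Called by a server (300) to register supported intents in natural
        language; no predefined schema is required."""
        extracted = extract_intents(description)
        for intent in extracted:
            intent["server"] = server_id
            self.intents[intent["name"]] = intent
        return [i["name"] for i in extracted]

queue = ContextQueue()
names = queue.register_intents("server-300", "Process Orders; Handle Customer Queries")
print(names)  # ['Process Orders', 'Handle Customer Queries']
```

The key point of the sketch is that the server supplies only free-form text; the structured intent definitions that end up in the queue are produced by the extraction step, not by a fixed schema.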
According to another embodiment, the client subscription allows clients (600) to subscribe to the registered intents in natural language such that the subscription mechanism enables multiple clients (600) to participate in workflows based on their specific roles or expertise.
According to yet another embodiment, the event management ensures that events produced by the server are delivered to the queue, and the queue subsequently delivers these events to the subscribed clients (600). The event management process includes techniques for deduplication, expiration, and prioritization to ensure timely and reliable processing.
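A minimal sketch of the event-management policies named above (deduplication, expiration, prioritization) follows; the design choices here, such as a priority heap, monotonic-clock TTLs and string event identifiers, are assumptions for illustration rather than the specification's implementation.

```python
import heapq
import time

class EventQueue:
    def __init__(self):
        self._heap = []      # (priority, seq, id, payload, expiry); lowest priority value first
        self._seen = set()   # event ids already accepted (deduplication)
        self._seq = 0        # tie-breaker so heap comparisons never reach payloads

    def publish(self, event_id, payload, priority=10, ttl=60.0):
        """Accept an event from the server unless it is a duplicate."""
        if event_id in self._seen:
            return False
        self._seen.add(event_id)
        expires = time.monotonic() + ttl  # expiration rule for unprocessed events
        heapq.heappush(self._heap, (priority, self._seq, event_id, payload, expires))
        self._seq += 1
        return True

    def deliver(self):
        """Pop the highest-priority unexpired event, silently dropping expired ones."""
        while self._heap:
            _, _, event_id, payload, expires = heapq.heappop(self._heap)
            if time.monotonic() < expires:
                return event_id, payload
        return None

q = EventQueue()
q.publish("e1", "customer query", priority=5)
q.publish("e1", "duplicate", priority=1)   # rejected: same event id
q.publish("e2", "order", priority=1)
print(q.deliver())  # ('e2', 'order') -- lower priority value is delivered first
```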
According to a further embodiment, clustering for scalability involves the deployment of the context queue in a clustered environment to ensure high availability and load distribution across multiple nodes. This step enhances fault tolerance and supports dynamic scaling based on workload demands.
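One way the clustered deployment could distribute load and tolerate node failure is deterministic key hashing across queue nodes, with a failed node's keys remapping to the survivors. The node names and the hashing scheme below are assumptions chosen for illustration, not features recited by the specification.

```python
import hashlib

class Cluster:
    def __init__(self, nodes):
        self.nodes = sorted(nodes)

    def node_for(self, event_key: str) -> str:
        """Pick a node deterministically from the event key (load distribution)."""
        digest = int(hashlib.sha256(event_key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

    def fail(self, node: str):
        """Remove a failed node; its keys remap to surviving nodes (fault tolerance)."""
        self.nodes.remove(node)

cluster = Cluster(["node-a", "node-b", "node-c"])
owner = cluster.node_for("order-123")
cluster.fail(owner)                   # simulate the owning node crashing
print(cluster.node_for("order-123"))  # the key now maps to a surviving node
```

A production deployment would more likely use consistent hashing or a replicated log so that only a fraction of keys move on failure, but the sketch captures the availability property the embodiment describes.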
In yet another embodiment, the client workflow integration serves as the backbone, allowing clients to operate autonomously based on dynamically defined intents and task instructions, thus improving the efficiency and effectiveness of real-time collaboration.
The said system (10) enables seamless communication and task execution among clients (600) by leveraging advanced generative AI techniques for dynamic intent management and event processing, facilitating scalable and efficient system implemented workflows (430).
In another preferred embodiment of the invention, a workflow (430) is provided for managing communication between servers (300) and clients (600) using a dynamic context queue. The workflow (430) comprises the following steps:
1. Server (300) calls context queue API to register intent:
The server (300), acting as the producer, interacts with the context queue to register the intents it supports, where the intent represents tasks or features that the server (300) is capable of executing. For instance, a server (300) in an e-commerce environment may register intents such as "Process Orders" or "Handle Customer Queries."
2. Context queue calls generative AI API to extract intents:
Upon receiving the registration request, the context queue interfaces with the generative AI API. The generative AI dynamically extracts and structures intents in natural language format, eliminating the need for static keywords or predefined definitions and ensuring that the intents are adaptable and understandable across various use cases.
3. Intent definitions are stored in the queue:
The context queue stores the extracted intent definitions, allowing the server (300) to track the available tasks and features. These definitions are maintained in a manner that ensures they are accessible for Clients (600) to subscribe to.
4. Clients (600) subscribe to intents:
Clients (600) subscribe to the intents they are capable of processing. For example, clients designed for customer support may subscribe to the "Handle Customer Queries" intent. These subscriptions are dynamic, allowing multiple Clients (600) to subscribe to the same intent.
5. Server (300) sends events to the queue:
Once the server (300) has registered its intents, it can send events related to these intents to the context queue. These events represent real-time tasks or actions that require processing by the clients (600). For instance, if an event occurs where a new customer query needs attention, the server (300) sends this event to the queue for processing.
6. Queue delivers events to subscribed clients (600):
The context queue ensures that events are delivered to the clients (600) that have subscribed to the relevant intents. The queue guarantees event management, ensuring no duplication, and applying expiration rules to events if they remain unprocessed within the defined timeframe. This process allows for efficient management and processing of tasks in accordance with set priorities.
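The six steps above can be sketched end to end as follows. This is an illustrative toy only: the class name, the semicolon-based intent extraction standing in for the generative AI call of step 2, and the callback-style client subscriptions are all hypothetical choices, not the claimed implementation.

```python
class ContextQueue:
    def __init__(self):
        self.intents = {}      # step 3: stored intent definitions
        self.subscribers = {}  # intent name -> list of client callbacks
        self.delivered = set() # event ids already delivered (deduplication)

    def register(self, server, text):
        # Steps 1-2: the server registers in natural language and the
        # (stubbed) generative AI extracts structured intent names.
        for name in (t.strip() for t in text.split(";") if t.strip()):
            self.intents[name] = {"server": server}

    def subscribe(self, intent, client):
        # Step 4: clients subscribe to the intents they can process.
        self.subscribers.setdefault(intent, []).append(client)

    def send_event(self, event_id, intent, payload):
        # Steps 5-6: the server sends an event; the queue fans it out once
        # to every subscribed client, dropping duplicates and unknown intents.
        if event_id in self.delivered or intent not in self.intents:
            return []
        self.delivered.add(event_id)
        return [client(payload) for client in self.subscribers.get(intent, [])]

cq = ContextQueue()
cq.register("server-300", "Process Orders; Handle Customer Queries")
cq.subscribe("Handle Customer Queries", lambda p: f"client-600 handled: {p}")
print(cq.send_event("evt-1", "Handle Customer Queries", "refund request"))
# ['client-600 handled: refund request']
print(cq.send_event("evt-1", "Handle Customer Queries", "refund request"))
# [] -- duplicate event id, suppressed by deduplication
```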
The present invention introduces an innovative approach to managing communication workflows for Clients (600) through an efficient context queue system (10). By integrating advanced mechanisms such as dynamic intent registration, interaction with generative AI models, and real-time event management, the invention optimizes the interaction between servers (300) and Clients (600). The context queue (410) ensures seamless delivery of events to subscribed Clients (600) while maintaining scalability and fault tolerance. Moreover, the system’s (10) ability to dynamically handle intents and events without relying on predefined workflows (430) allows for flexible, context-aware decision-making. This novel workflow (430) significantly improves the efficiency of generative AI-driven workflows (430), reducing complexity and enhancing the system's (10) ability to adapt to various operational needs. By offering a centralized, dynamic solution for managing intents and events, the invention addresses the limitations of traditional workflows, providing a highly effective and scalable communication framework that boosts performance across distributed AI systems.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in the preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
Claims:
CLAIMS
We claim,
1. A system and method of efficient context queue for managing communication workflows for clients (600); wherein the system (10) comprises a user (100), an input unit (200), a server (300), a processing unit (400), an output unit (500) and a client (600), wherein the processing unit (400) further comprises a context queue module (410), a large language model (LLM) integration unit (420) and a workflow (430);
characterized in that:
the processing unit (400) of the system (10) employs a stepwise workflow (430) comprising the steps of:
a. calling context queue API to register intent (410), which enables initiating the system by having the server (300) register intents with the context queue, thus preparing the system (10) for event-driven workflows;
b. delivering events to subscribed clients (600), which ensures that the context queue sends events to clients that are subscribed to particular intents, ensuring dynamic communication;
c. calling a large language model (420) API to extract intents (430), which involves the context queue calling a large language model (LLM) (420) API to dynamically extract intents from natural language, facilitating the flexibility of intent definition;
d. sending events to the queue (440), which allows the server (300) to dispatch generated events into the queue for further processing and delivery to the relevant clients (600);
e. storing intent definitions in the queue (450), which enables the context queue to retain the intent definitions and event metadata for tracking, management, and handling across workflows;
f. subscribing clients (600) to intents (460), where Clients (600) subscribe to intents, thus aligning with the server's (300) workflows (430) and receiving relevant events based on their subscriptions.
2. The system and method as claimed in claim 1, wherein the context queue (410) acts as the central hub, managing the dynamic registration of intents and event distribution to clients (600) based on their subscriptions.
3. The system and method as claimed in claim 1, wherein the server (300) is capable of invoking context queue (410) APIs for registering multiple intents, allowing the system (10) to scale with diverse and evolving workflows.
4. The system and method as claimed in claim 1, wherein the context queue (410) ensures the efficient delivery of events without duplication, handling expiration policies for each event to ensure data integrity and timely processing; and supports scalable and distributed architectures, ensuring high availability, load balancing, and fault tolerance for client interactions.
5. The system as claimed in claim 1, wherein the system (10) supports multi-client workflows, where clients (600) dynamically subscribe and unsubscribe from intents, allowing for flexibility and adaptation in real-time scenarios.
Dated this 6th day of January, 2025.