Abstract: A SYSTEM AND METHOD FOR CONSTRUCTION OF WORKFLOW GRAPH FOR TOOL ORCHESTRATION IN LARGE LANGUAGE MODEL (LLM) OPERATIONS: A system and method for construction of a workflow graph for tool orchestration in large language model (LLM) operations, using a deterministic tool invocation graph with recursive failover optimization (DTIG-RFO); wherein the system (100) comprises an input device (102) associated with a display unit (103), a set of tools (104) and metadata (105), an output device (107), and a processing unit (106) employing a stepwise method (200); wherein the system receives user tasks in natural language, aligns them with tool metadata (105) and constraints, and builds a fail-safe, stage-wise execution graph. The system maps each task phase—such as retrieval or analysis—to tools, sets up recursive failover chains, and ensures the graph is cycle-free using topological rules. The graph is further optimized using performance history, probabilistic scoring, and redundancy minimization to output a deterministic, directed acyclic graph (DAG) that ensures traceability, resilience, and efficiency in LLM workflows.
Description:FIELD OF INVENTION
The present invention relates to deterministic planning and orchestration of tools. More particularly, it relates to a system and method for construction of a workflow graph for tool orchestration in large language model (LLM) operations using a Deterministic Tool Invocation Graph with Recursive Failover Optimization (DTIG-RFO).
BACKGROUND
Large language models (LLMs) are increasingly used in complex workflows that involve interacting with various tools and systems to accomplish tasks such as search, analysis, summarization, and response generation. LLM operations involve specialized practices that manage the lifecycle of LLMs to ensure efficient and consistent task execution. While LLMs are powerful tools for various tasks, their inherent probabilistic nature can make the process of finding a sequence of actions to complete a task challenging. Deterministic planning addresses this limitation by generating a predefined sequence of actions that consistently leads to the desired outcome.
In existing LLM workflows, systems perform tool selection and invocation during LLM operations using dynamic, runtime logic that is context- or feedback-driven. Such systems are inherently non-deterministic, making them difficult to audit, reproduce, or optimize. Further, execution failures, such as those due to errors, timeouts, cost limits, or any other execution issue, are often managed reactively, without a standardized or traceable failover strategy. Thus, the current management of the lifecycle of LLMs based on context or feedback is neither deterministic nor easily auditable. Moreover, existing failover strategies, which are intended to ensure continuity when a tool or process fails, are neither standardized nor traceable.
Prior Arts:
US11343328B2 describes a method for detecting the state of a communication session between workloads and initiating a failover based on network metrics when one workload becomes unavailable. It emphasizes monitoring the health of a network path and transitioning from a standby workload to an active role in a high-availability system.
US11315014B2 focuses on optimizing the execution of computational workflows using a deep neural network trained on provenance and resource monitoring data. It handles resource allocation among interdependent tasks in sub-workflows to meet user-defined performance metrics.
US20230259705A1 discloses methods for enhancing interactions with large language models (LLMs) through the use of structured, machine-readable representations of data, such as a universal language format, to provide enriched context and enable post-processing of LLM-generated outputs. The invention focuses on improving the quality of LLM continuations by inserting and analyzing context data before and after model invocation, aiming to refine the output text presented to users.
Whereas the first aforementioned prior art discloses a system that shares the broader theme of failover handling, its method is situated in the network infrastructure domain and does not involve tool orchestration or deterministic workflow planning. The second prior art shares the objective of workflow optimization; however, its approach is based on learned models and dynamic scheduling, and it lacks a deterministic and structured approach for LLM tool orchestration. Yet another prior art operates in the domain of LLMs, but its emphasis is on semantic quality enhancement, representation techniques, and output conditioning. None of the prior arts address the technical layer distinctly so as to provide a deterministic tool orchestration system that governs how an LLM can plan tool invocation paths in advance, embed multi-level failovers, and ensure reliability, observability, and optimization during complex task execution.
To overcome these drawbacks, there is a need for a novel, deterministic, tool orchestration system that can plan tool invocation paths in advance, embed multi-level failovers, and ensure reliability, observability, and optimization during complex task execution.
DEFINITIONS:
The expression “system” used hereinafter in this specification refers to an ecosystem comprising, but not limited to, a system with a user, input and output devices, a processing unit, a plurality of mobile devices, a display unit and output; and is extended to computing systems like mobiles, laptops, computers, PCs, etc.
The expression “input unit” used hereinafter in this specification refers to, but is not limited to, mobile, laptops, computers, PCs, keyboards, mouse, pen drives or drives.
The expression “output unit” used hereinafter in this specification refers to, but is not limited to, an onboard output device, a user interface (UI), a display unit, a local display, a screen, a dashboard, or a visualization platform enabling the user to visualize the graphs provided as output by the system.
The expression “processing unit” refers to, but is not limited to, a processor of at least one computing device that optimizes the system, and acts as the functional unit of the system.
The expression “large language model (LLM)” used hereinafter in this specification refers to a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.
The expression “cycle-free graph” used hereinafter in this specification refers to a graph that contains no path starting and ending at the same vertex without repeating any node, ensuring an acyclic structure.
The expression “deterministic planning” used hereinafter in this specification refers to a type of planning where the outcome of each action is known with certainty, and the environment is predictable.
The expression “failover path” used hereinafter in this specification refers to a secondary execution route that is automatically activated when the primary tool or process fails, ensuring uninterrupted workflow completion.
The expression “full state traceability” used hereinafter in this specification refers to the ability of the system to record and trace the complete execution state of each decision node in the workflow graph, so that every decision, transition, and failover can be audited.
The expression “linked-list” used hereinafter in this specification refers to a linear data structure in which each element (node) contains a reference to the next node, allowing sequential traversal. In this invention, it represents chained failover steps.
The expression “nodes” used hereinafter in this specification refers to specific tasks or operations within the workflow; wherein they represent discrete units of operation (e.g., tool calls or decisions) within the workflow graph, such that each node executes a task or links to further failover options.
The expression “non-blocking execution” used hereinafter in this specification refers to the ability of the workflow system to continue task resolution by automatically invoking a failover tool when the primary tool fails, thereby avoiding interruptions or halts in execution.
The expression “predefined shortest path” used hereinafter in this specification refers to a calculated optimal route between two workflow stages or nodes, selected prior to execution using known metrics like latency or success rate.
The expression “recursive” used hereinafter in this specification refers to a process in which a function or structure repeatedly applies itself to its subcomponents, such as nested failover chains where each fallback can itself contain further fallbacks.
The expression “tool call” used hereinafter in this specification refers to the invocation of an external tool or service by the system to perform a specific function in the workflow, such as retrieval, translation, or summarization.
The expression “tool orchestration” used hereinafter in this specification refers to automating and managing complex workflows across multiple systems, applications, and services. It streamlines processes by coordinating tasks, ensuring they are executed in the correct sequence, and handling dependencies, ultimately improving efficiency and reducing manual effort.
The expression “what-if simulation” used hereinafter in this specification refers to a technique used to explore the potential outcomes of different scenarios by changing input variables in a model or system.
The expression “terminal node” used hereinafter in this specification refers to the final step in the graph, signifying task completion, where the node represents an instruction such as output or halt.
The expression “recursive failover optimization (RFO)” used hereinafter in this specification refers to a strategy that recursively builds backup paths for tool calls, up to a fixed depth, to ensure robustness.
The expression “directed acyclic graph (DAG)” used hereinafter in this specification refers to a graph with directed edges and no cycles, ensuring a one-way flow from start to end.
The expression “model context protocol (MCP)” used hereinafter in this specification refers to a framework that orchestrates the interaction between large language models, external tools, and modular agents to solve complex tasks through structured workflows; rather than relying on a single model to perform each task, the MCP breaks down tasks into smaller, reusable modules—each responsible for a specific function—and coordinates how these modules communicate, delegate work, and exchange information.
OBJECTS OF THE INVENTION:
The primary object of the invention is to provide a system and method for construction of workflow graph for tool orchestration in LLM operations.
Another object of the invention is to construct a complete and deterministic graph which always ends in a terminal operation.
Yet another object of the invention is to incorporate innovative recursive-linked failover structures to ensure non-blocking execution.
Yet another object of the invention is to incorporate 10-level failover paths in the optimal workflow graph.
Yet another object of the invention is to represent the graph with full state traceability for each decision node.
Yet another object of the invention is to optimize workflow selection using shortest predefined path, performance history, and statistical inference.
SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The invention discloses a system and method for constructing a deterministic tool orchestration workflow graph, known as DTIG-RFO, for Large Language Model (LLM) operations; wherein the system comprises an input device associated with a display unit, a set of tools and metadata, a processing unit that employs a stepwise method/workflow, and an output device.
In a preferred aspect, the system receives user tasks in natural or structured language, aligns them with tool metadata and constraints, and builds a fail-safe, stage-wise execution graph. It maps each task phase—such as retrieval or analysis—to tools, sets up recursive failover chains up to 10 levels, and ensures the graph is cycle-free using topological rules. The graph is further optimized using performance history, probabilistic scoring, and redundancy minimization to output a deterministic, directed acyclic graph (DAG) that ensures traceability, resilience, and efficiency in LLM workflows.
In yet another preferred aspect, the system and method of the present invention improve conventional tool orchestration in LLM workflows by ensuring the creation of a complete, deterministic graph that reliably terminates; introducing a recursive failover structure with up to 10 fallback levels, enabling uninterrupted execution during tool failures; assigning weights to tool paths based on historical success or failure; and using shortest-path algorithms to find the optimal execution path, thereby allowing the workflow to be optimized through a combination of shortest-path logic, performance history, and statistical inference.
BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be made by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated into and constitute a part of the specification, illustrate one or more embodiments of the present invention and, together with the detailed description, serve to explain the principles and implementations of the invention.
FIG.1. illustrates the structural and functional components of the system.
FIG.2. illustrates the schematic flow of the method employed by the processing unit.
FIG.3. illustrates a stepwise workflow followed by the system for construction of a workflow graph.
DETAILED DESCRIPTION OF INVENTION:
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention describes a system and method for construction of a workflow graph for tool orchestration in large language model (LLM) operations using a deterministic tool invocation graph with recursive failover optimization (DTIG-RFO); wherein the system (100) comprises an input device (102) associated with a display unit (103), a set of tools (104) and metadata (105), a processing unit (106) and an output device (107), wherein the processing unit (106) employs a stepwise method/workflow (200). The said display unit (103) enables the user to enter the task description using natural language or structured intent. The set of tools (104) includes a plurality of tools (104) selected from T1…..Tn including, but not limited to, a web-search tool, a calculator tool or a model context protocol (MCP), such that the MCP tools are building blocks that collectively power complex, multi-step systems through coordination and modularity, while the LLM tools enhance only a single model’s reasoning by enabling access to specialized functions; and the metadata (105) includes, but is not limited to, latency, type, cost and/or success rate.
For example: An exemplary way to represent how the system uses model context protocol (MCP) is given as follows:
Imagine a user asks: "Summarize this PDF, translate it to Japanese, and email it to my team."
The model context protocol or MCP uses a document parser tool to extract text from the PDF, then routes the content to an LLM-powered summarizer tool, further passes the summary to a translator module, and finally calls an email-sender tool with the final output. Each of these steps is handled by independent modules (or “tools”), coordinated intelligently by the MCP.
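The walkthrough above can be sketched as a chain of modular tool calls coordinated by an orchestrator. All function names and return values below are hypothetical stand-ins for the parser, summarizer, translator, and email modules; they are not part of the claimed system:

```python
# A minimal sketch of MCP-style modular chaining. Each function is a
# hypothetical placeholder for an independent tool module.

def parse_pdf(doc):
    # Stand-in for a document-parser tool: extracts plain text.
    return f"text-of({doc})"

def summarize(text):
    # Stand-in for an LLM-powered summarizer tool.
    return f"summary-of({text})"

def translate_ja(text):
    # Stand-in for a translator module (English -> Japanese).
    return f"ja({text})"

def send_email(recipients, body):
    # Stand-in for an email-sender tool; returns a status record.
    return {"to": recipients, "body": body, "status": "sent"}

def orchestrate(doc, recipients):
    # The orchestrator routes each module's output into the next,
    # mirroring the parse -> summarize -> translate -> email chain.
    text = parse_pdf(doc)
    summary = summarize(text)
    translated = translate_ja(summary)
    return send_email(recipients, translated)

result = orchestrate("report.pdf", ["team@example.com"])
```

Because each step is an independent callable, any module can be swapped or extended without altering the rest of the chain, which is the coordination-and-modularity property attributed to MCP above.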
In a preferred embodiment of the invention, the method (200) for construction of workflow graph for tool orchestration in large language model (LLM) operations includes the steps as follows:
Step I: Providing inputs to the system (100):
- The user inserts a task description in natural language or structured intent using at least one input device (102);
- The system (100) then aligns the set of available tools (104) (T1...Tn) with metadata including, but not limited to, latency, type, cost, success rate and the like; and user-defined constraints including, but not limited to, cost cap, time bound or reliability priority.
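The input-alignment step can be illustrated as follows; the tool records, metadata fields, and constraint values are assumptions chosen for this sketch:

```python
# Illustrative tool metadata records (latency in seconds, cost in
# arbitrary units, success_rate in [0, 1]); all values are assumed.
tools = [
    {"name": "T1", "type": "web-search", "latency": 1.2, "cost": 0.05, "success_rate": 0.97},
    {"name": "T2", "type": "web-search", "latency": 0.4, "cost": 0.20, "success_rate": 0.90},
    {"name": "T3", "type": "calculator", "latency": 0.1, "cost": 0.01, "success_rate": 0.99},
]

# Hypothetical user-defined constraints: cost cap, time bound,
# and reliability priority expressed as a minimum success rate.
constraints = {"cost_cap": 0.10, "time_bound": 2.0, "min_success_rate": 0.95}

def align(tools, c):
    # Keep only the tools whose metadata satisfies every constraint.
    return [t for t in tools
            if t["cost"] <= c["cost_cap"]
            and t["latency"] <= c["time_bound"]
            and t["success_rate"] >= c["min_success_rate"]]

eligible = align(tools, constraints)
# T2 is excluded here because its cost exceeds the cost cap.
```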
Step II: Workflow graph construction employed by the processing unit (106):
- The system (100) performs a stage identification, wherein the tasks are parsed into ordered phases such as retrieve → analyze → respond, to identify logical stages.
- The system (100) then enables tool assignment; thereby mapping appropriate tools (104) to each task stage based on their capabilities and performance.
- The system (100) further enables graph creation; wherein it creates a primary tool node per stage of operation, whereof the system (100) attaches a sequence of failover nodes arranged in linked-list style chains, which follows a pattern as follows:
Primary Tool → Failover 1 → Failover 2 → … → Terminal Node,
thereby ensuring redundancy or fallback during tool execution failures; further ensuring a cycle-free graph using topological constraints that restrict the formation of a cycle or loop. Further, the system (100) enforces a failover depth such that each failover chain can go up to 10 levels, until it reaches the terminal node, which always ends in a terminal operator such as a user prompt or halt.
- The system (100) further enables graph embedding, thereby representing the graph with full state traceability for each decision node.
For example:
Suppose a system is provided the task of building a graph (a network-like structure of nodes and connections) to represent tools and their failover paths. The system will then take into consideration:
1. Primary tool node per stage; where a process is divided into stages (like Step 1, Step 2, etc.) such that for each stage, the system creates a primary tool node — this is the main tool meant to perform the task at that stage.
2. Failover nodes refer to the backup tools in cases where the primary tool might fail or not work as expected. To handle such failures, the system sets up failover nodes arranged in a chain, like a linked list; such as:
If tool 1 (primary tool) fails → try failover 1,
If failover node 1 fails → try failover 2, and so on until it reaches a terminal node at the end of the chain.
3. In this context, the orchestrator can be thought of as a linked list in which each node represents a modular task such as summarization, retrieval, translation, or validation. The orchestrator traverses this chain step by step, passing the output of one node as the input to the next. Because each node points to the next, the orchestrator maintains a dynamic, ordered sequence of operations in which each module knows what follows it and operates independently. This structure makes the system both flexible and extendable: modules (nodes) can be inserted, removed, or reordered without breaking the overall flow, making the orchestrator a scalable and composable controller for building multi-step LLM workflows.
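The linked-list failover pattern of Step II can be sketched as follows; the `ToolNode` structure, the exception-based failure signal, and the tool names are illustrative assumptions rather than the claimed implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolNode:
    # One node in a linked-list failover chain: a callable tool plus
    # a pointer to the next fallback (None marks the terminal node).
    name: str
    run: Callable[[str], str]
    failover: Optional["ToolNode"] = None

MAX_FAILOVER_DEPTH = 10  # the failover depth limit described above

def execute_stage(node, task, depth=0):
    # Walk the chain: try the primary tool (depth 0), then each
    # failover in order (depths 1..10), until one succeeds or the
    # chain / depth limit is exhausted.
    while node is not None and depth <= MAX_FAILOVER_DEPTH:
        try:
            return node.run(task)
        except Exception:
            node = node.failover
            depth += 1
    # Terminal operation: surface the task back to the user (halt),
    # so execution is non-blocking even under total failure.
    return f"HALT: no tool succeeded for task '{task}'"

# Example chain: Primary -> Failover 1 -> Terminal
def failing_tool(task):
    raise RuntimeError("tool unavailable")

chain = ToolNode("primary", failing_tool,
                 ToolNode("failover-1", lambda t: f"done({t})"))
result = execute_stage(chain, "retrieve")  # falls through to failover-1
```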
Step III: Workflow optimization using the optimization layer:
- The system (100) weights tool edges by past success/failure metrics, thereby accounting for the historical performance of the workflow.
- The system (100) uses shortest-path algorithms (e.g., Dijkstra's or A*) for computing a shortest path using pre-defined instructions.
- The system (100) then enables probabilistic scoring to combine statistical likelihoods of success to rank failovers.
- Finally, the system (100) allows redundancy minimization, so as to avoid reusing expensive or slow tools unless required.
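The shortest-path optimization of Step III can be illustrated with Dijkstra's algorithm over edge weights derived from historical metrics; the stage/tool names and weight values below are hypothetical, with lower weight standing in for a historically cheaper or more reliable transition:

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbor, weight), ...]}. Weights are assumed to
    # be derived from historical success/failure, latency, or cost.
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the optimal path from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical stage-wise workflow with two candidate retrieval tools.
workflow = {
    "start":       [("retrieve:T1", 0.2), ("retrieve:T2", 0.5)],
    "retrieve:T1": [("analyze:T3", 0.3)],
    "retrieve:T2": [("analyze:T3", 0.1)],
    "analyze:T3":  [("respond:T4", 0.2)],
    "respond:T4":  [("terminal", 0.0)],
}
path, cost = dijkstra(workflow, "start", "terminal")
```

With these illustrative weights, the route through `retrieve:T1` (total 0.7) is preferred over the route through `retrieve:T2` (total 0.8), showing how historical edge weights steer the deterministic path selection.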
Step IV: Output generation:
The system (100) enables a deterministic execution graph G (V, E) where:
o V represents nodes referring to tool call nodes with nested failovers,
o E represents the edges referring to execution orders or the transitions between stages,
such that all paths terminate at terminal node, to form a directed acyclic graph (DAG) with tool call sequences and nested failovers (linked-list style) at each node; and where the graph includes metadata for scoring and optimization. The said output graph is a representation of the final deterministic tool workflow, displayed to the user using at least one output device (107).
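The cycle-free property of the output graph G(V, E) can be verified with a topological check such as Kahn's algorithm; the stage names below are illustrative:

```python
from collections import deque

def is_acyclic(edges, nodes):
    # Kahn's algorithm: the graph is a DAG iff every node can be
    # consumed in topological order (no node left with in-degree > 0).
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)

nodes = ["retrieve", "analyze", "respond", "terminal"]
edges = [("retrieve", "analyze"), ("analyze", "respond"),
         ("respond", "terminal")]
assert is_acyclic(edges, nodes)
# A back-edge would introduce a cycle, which the check rejects:
assert not is_acyclic(edges + [("respond", "retrieve")], nodes)
```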
According to a next embodiment, the optimization layer analyses certain conditions and executes the workflow; such as:
recursive failover linked chains; wherein each tool node is wrapped in a chain structure extending to terminal node;
composite scoring for optimization; wherein a weighted, pluggable scoring function enables balancing cost, success, latency, and dynamic load;
failover-aware DAG traversal; wherein the path resolution always yields a terminal end, even through the nested fallback layer;
telemetry-driven adaptation; wherein the graphs adjust over time using feedback loops from tool performance logs;
graph export schema; wherein the output graph is serializable into a reusable spec for multiple agent systems;
“what-if simulation” mode; wherein the system simulates tool unavailability and recalculates alternate optimal paths using the same graph.
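The “what-if simulation” mode can be sketched as pruning an unavailable tool node from the graph and re-checking path resolution on the same structure; the workflow graph and tool names below are hypothetical:

```python
def prune_unavailable(graph, down_tool):
    # Simulate tool unavailability: drop the node and every edge that
    # enters or leaves it, leaving the rest of the graph intact.
    return {u: [(v, w) for v, w in nbrs if v != down_tool]
            for u, nbrs in graph.items() if u != down_tool}

def reachable(graph, start, goal):
    # Depth-first reachability check on the (possibly pruned) graph.
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u == goal:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(v for v, _ in graph.get(u, []))
    return False

# Hypothetical workflow with a redundant retrieval stage.
workflow = {
    "start":       [("retrieve:T1", 0.2), ("retrieve:T2", 0.5)],
    "retrieve:T1": [("analyze:T3", 0.3)],
    "retrieve:T2": [("analyze:T3", 0.1)],
    "analyze:T3":  [("respond:T4", 0.2)],
    "respond:T4":  [("terminal", 0.0)],
}

# Taking retrieve:T1 offline still leaves a path to the terminal node,
# so an alternate optimal route can be recalculated on the pruned graph.
pruned = prune_unavailable(workflow, "retrieve:T1")
assert reachable(pruned, "start", "terminal")
```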
According to an embodiment, the system (100) and method (200) of the present invention is used to construct a deterministic, fail-safe tool orchestration graph for LLM workflows, wherein it emphasizes on failover chaining, deterministic resolution, and optimization using historical and real-time data. Further, the system (100) and method (200) may be used in workflows where LLM agents require high availability and deterministic recovery, or where compliance-focused workflows require full traceability, or in case of real-time system planners required for AI operations.
According to yet another embodiment, the present invention offers various advantages over conventional workflows in the domain of orchestration of tools in LLM operations. The invention ensures the construction of a complete and deterministic graph that always ends in a terminal operation. The invention incorporates a novel recursive-linked failover structure with up to 10 levels of failover paths, thereby enabling non-blocking execution under failure conditions. Furthermore, tool invocation paths are weighted based on past success and failure rates, and tools such as Dijkstra’s or A* are employed to determine the optimal execution route. This approach enables the invention to optimize workflow selection by leveraging predefined shortest paths, historical performance metrics, and statistical inference.
According to yet another embodiment, the invention provides full state traceability at each decision node within the orchestration graph. One of the significant advantages of the present invention is its ability to combine statistical likelihoods of success to rank failovers, thereby minimizing the reuse of expensive or slow tools unless required.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in the preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
CLAIMS:
We claim,
1. A system and method for construction of workflow graph for tool orchestration in large language model (LLM) operations, using deterministic tool invocation graph with recursive failover optimization (DTIG-RFO);
wherein the system (100) comprises an input device (102) associated with a display unit (103), a set of tools (104) and metadata (105); a processing unit (106) and an output device (107), wherein the processing unit (106) employs a stepwise method (200);
characterised in that:
the method (200) for construction of workflow graph for tool orchestration includes the steps of;
providing inputs to the system (100); comprising
- adding a task description in a natural language or structured intent by the user (101) using at least one input device (102),
- aligning the set of available tools (T1...Tn) (104) with metadata (105) by the system (100),
- aligning the user-defined constraints including cost cap, time bound or reliability priority;
constructing workflow graph by the processing unit (106); comprising
- enabling stage identification, wherein the tasks are parsed into ordered phases such as retrieve, analyze and respond, to identify logical stages,
- enabling tool assignment for mapping appropriate tools (104) to each task stage based on their capabilities and performance,
- enabling graph creation, for creating a primary tool node per stage of operation, whereof the system (100) attaches a sequence of failover nodes arranged in linked-list style chains, which follows a pattern:
Primary Tool → Failover 1 → Failover 2 → … → Terminal Node, thereby ensuring redundancy or fallback during tool execution failures,
- ensuring a cycle-free graph using topological constraints thereby restricting the formation of a cycle or loop,
- enabling a failover depth such that each failover chain can go up to 10 levels, until it reaches the terminal node,
- enabling graph embedding, thereby representing the graph with full state traceability for each decision node;
optimizing workflow using the optimization layer, comprising
- assigning weights to tool edges based on past success/failure metrics, thereby checking the historical performance of the workflow,
- using a plurality of tools for optimizing a shortest path using pre-defined instructions,
- enabling probabilistic scoring to combine statistical likelihoods of success to rank failovers,
- allowing redundancy minimization, so as to avoid reusing expensive or slow tools unless required;
generating output, comprising
- enabling the system (100) to provide a deterministic execution graph G (V, E) where:
V represents nodes referring to tool call nodes with nested failovers,
E represents the edges referring to execution orders or transitions between stages, such that all paths terminate at a terminal node, to form a directed acyclic graph (DAG) representing the final deterministic tool workflow, displayed to the user using at least one output device (107).
2. The system and method as claimed in claim 1, wherein the said display unit (103) enables the user to enter the task description using a natural language or structured intent.
3. The system and method as claimed in claim 1, wherein the set of tools (104) include a plurality of tools selected from T1…..Tn which includes a web-search tool, a calculator tool or a model context protocol (MCP) wherein the MCP tools are building blocks that collectively power complex, multi-step systems through coordination and modularity.
4. The system and method as claimed in claim 1; wherein the metadata (105) includes, but is not limited to latency, type, cost and/or success rate.
5. The system and method as claimed in claim 1, wherein the terminal node always ends in a terminal operator like user prompt or halt, to form a directed acyclic graph (DAG).
6. The system and method as claimed in claim 1, constructs a deterministic, fail-safe tool orchestration graph for LLM workflows, emphasizing on failover chaining, deterministic resolution, and optimization using historical and real-time data.
7. The system and method as claimed in claim 1, are used in workflows where LLM agents require high availability and deterministic recovery, where compliance-focused workflows require full traceability, or for real-time system planners required for AI operations.
Dated this 04th day of July, 2025.
| # | Name | Date |
|---|---|---|
| 1 | 202521064490-STATEMENT OF UNDERTAKING (FORM 3) [07-07-2025(online)].pdf | 2025-07-07 |
| 2 | 202521064490-POWER OF AUTHORITY [07-07-2025(online)].pdf | 2025-07-07 |
| 3 | 202521064490-FORM 1 [07-07-2025(online)].pdf | 2025-07-07 |
| 4 | 202521064490-FIGURE OF ABSTRACT [07-07-2025(online)].pdf | 2025-07-07 |
| 5 | 202521064490-DRAWINGS [07-07-2025(online)].pdf | 2025-07-07 |
| 6 | 202521064490-DECLARATION OF INVENTORSHIP (FORM 5) [07-07-2025(online)].pdf | 2025-07-07 |
| 7 | 202521064490-COMPLETE SPECIFICATION [07-07-2025(online)].pdf | 2025-07-07 |
| 8 | Abstract.jpg | 2025-07-29 |
| 9 | 202521064490-FORM-9 [26-09-2025(online)].pdf | 2025-09-26 |
| 10 | 202521064490-FORM 18 [01-10-2025(online)].pdf | 2025-10-01 |