
System And Method For Auto Prioritization Of User Stories Using Context Graph And Large Language Model

Abstract: A system and method for auto-prioritization of user stories using a context-graph and large language models (LLMs). The system (10) comprises an input unit (100), a processing unit (200) and an output unit (300). The processing unit comprises a data collection module (210) that systematically gathers and organizes data from multiple sources, including a code repository, project management tools, and documentation, ensuring compatibility with subsequent processing; a data expansion module (220) that employs machine learning techniques to extract and generate detailed attributes; a context-graph construction module (230) that creates a graph-based visualization of tasks and their interdependencies, dynamically updating relationships and hierarchies as new data is integrated; a scoring module (240) that evaluates tasks based on their attributes, dependencies, and impact, assigning priority scores to guide decision-making; and a prioritization module (250) that organizes tasks into an optimized sequence based on their scores, generating actionable schedules and execution plans.


Patent Information

Application #
Filing Date: 06 January 2025
Publication Number: 40/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status
Parent Application

Applicants

Persistent Systems
Bhageerath, 402, Senapati Bapat Rd, Shivaji Cooperative Housing Society, Gokhale Nagar, Pune - 411016, Maharashtra, India

Inventors

1. Mr. Nitish Shrivastava
10764 Farallone Dr, Cupertino, CA 95014-4453, United States
2. Mr. Pradeepkumar Sharma
20200 Lucille Ave Apt 62 Cupertino CA 95014, United States

Specification

Description:FIELD OF INVENTION
The present invention relates to a system and method for auto-prioritization of user stories using a context-graph and large language models (LLMs). More particularly, it focuses on leveraging context-graphs to map relationships and dependencies between tasks and using large language models (LLMs) to enhance task attributes and insights, ensuring efficient prioritization of user stories based on urgency, impact, and interdependencies.

BACKGROUND
Modern organizations rely on various hardware-driven systems and platforms, such as project management interfaces, data repositories, and integrated documentation tools, to manage user workflows and operational tasks effectively. These systems, while facilitating specific processes, often encounter challenges in identifying and addressing dependencies and relationships between interconnected tasks. The lack of a unified mechanism to understand these interdependencies hampers productivity and affects the ability to optimize operational workflows.
Traditionally, prioritizing tasks and workflows relied on manual intervention and human judgment. This process was time-intensive, inconsistent, and prone to errors, as it lacked a standardized approach to managing complex interrelations. Furthermore, the growing data influx and intricate hardware-based enterprise environments have highlighted the inadequacy of traditional methods in handling the dynamic and evolving needs of modern industries.

CN116821309B discloses methods for task dependency mapping through a hierarchical system of contextual tagging and relationship determination. While this system effectively categorizes tasks and highlights dependencies, it does not utilize advanced machine learning models, such as large language models (LLMs), to enrich raw data or integrate it into a context-graph for comprehensive prioritization. The present invention bridges this gap by employing LLMs and probabilistic scoring mechanisms to optimize workflows in diverse enterprise environments.

US20190050771A1 describes techniques for using rule-based algorithms to create interdependencies between tasks in project management workflows. Although this approach provides a structured framework for dependency resolution, it lacks the adaptability of machine learning models and the dynamic prioritization provided by a context-graph. The present invention surpasses these limitations by integrating LLMs to generate contextual insights and enhance the scalability of dependency analysis across various data sources.

CN118034661B focuses on leveraging data aggregation systems for identifying task dependencies, with a primary emphasis on efficiency in computational resource management. However, it does not incorporate the use of LLMs or context-graphs to dynamically score and rank tasks based on their relationships and enterprise relevance. The present invention innovates by combining these advanced methodologies to deliver a more comprehensive and efficient prioritization mechanism.

US20240256598A1 outlines methods for managing workflows using static hierarchical dependency trees. While this approach provides clarity in task prioritization, it lacks the flexibility and scalability of context-graphs that dynamically adapt to changing task attributes. Additionally, it does not employ machine learning for task enrichment or scoring. The present invention addresses these shortcomings by integrating LLMs with context-graphs to provide an adaptive and scalable solution for enterprise-level task management.
DEFINITIONS:
The expression “system” used hereinafter in this specification refers to an ecosystem comprising, but not limited to, a scoring system with a user, input and output devices, a processing unit, a plurality of mobile devices, a mobile device-based application to identify dependencies and relationships between diverse businesses, a visualization platform, and output; and is extended to computing systems like mobiles, laptops, computers, PCs, etc.
The expression “input unit” used hereinafter in this specification refers to, but is not limited to, mobile, laptops, computers, PCs, keyboards, mouse, pen drives or drives.
The expression “output unit” used hereinafter in this specification refers to, but is not limited to, an onboard output device, a user interface (UI), a display kit, a local display, a screen, a dashboard, or a visualization platform enabling the user to visualize, observe or analyse any data or scores provided by the system.
The expression “processing unit” refers to, but is not limited to, a processor of at least one computing device that optimizes the scoring system.

OBJECTS OF THE INVENTION:
The primary object of the present invention is to provide a system and method for auto-prioritization of user stories using context-graph and large language models (LLM).
Yet another object of the present invention is to provide a system and method for auto-prioritization of user stories that efficiently identifies and analyses dependencies and relationships between tasks to enhance workflow optimization.
Yet another object of the present invention is to provide a system and method for auto-prioritization of user stories that utilizes large language models (LLMs) to generate enriched task attributes, tags, and contextual insights from raw data.
Yet another object of the present invention is to provide a system and method for auto-prioritization of user stories that constructs a context-graph that visualizes tasks and their interdependencies for better prioritization and decision-making.
Further, the object of the present invention is to provide a system and method for auto-prioritization of user stories that automates the scoring and ranking of user stories based on urgency, impact, and interconnections, enabling the creation of optimized roadmaps and sprint plans.

SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The present invention relates to a system and method for the auto-prioritization of tasks and user stories, addressing the challenges of managing dependencies and relationships within interconnected systems. Utilizing advanced methodologies, such as machine learning, context-graph construction, and historical data analysis, the system automates the prioritization process, providing a more efficient and reliable solution compared to traditional manual approaches. The invention comprises components including a data collection module, data expansion module, context-graph construction module, scoring module, and prioritization module, all working in harmony to process and analyse data from diverse sources like project management tools, hardware task trackers, and archival systems. The system collects raw data, enriches it with detailed attributes, and constructs a graphical representation of tasks and their interdependencies. By assigning scores based on urgency, impact, and relationships, the system prioritizes tasks systematically, generating actionable schedules and optimized execution plans.
This invention overcomes the limitations of conventional methods, which relied on inconsistent manual effort and were unable to scale with modern organizational demands. By employing analytical and computational techniques, the system delivers detailed reports, visualizations, and actionable insights, enabling organizations to streamline workflows, improve resource allocation, and enhance operational efficiency. The invention offers a robust framework for managing complex dependencies and optimizing task execution, ultimately driving better decision-making and improved outcomes.

BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be made by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawing. The accompanying drawing, which is incorporated into and constitutes a part of the specification, illustrates one or more embodiments of the present invention and, together with the detailed description, serves to explain the principles and implementations of the invention.

FIG. 1 illustrates an overview of the system of the present invention.
FIG. 2 illustrates the method of the present invention.

DETAILED DESCRIPTION OF INVENTION:
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
To understand the invention clearly, the various components of the system are referred as below:

No. Component
10 System
100 Input unit
200 Processing unit
300 Output unit
210 Data Collection Module
220 Data Expansion Module
230 Context-Graph construction Module
240 Scoring Module
250 Prioritization Module

The present invention is directed to a system and method for auto-prioritization of user stories using context-graph and large language models (LLM). The system operates by integrating various data sources, including code repositories, project management tools, and documentation. It uses LLMs to enrich the raw data, generate contextual insights, and organize the information in a context-graph. The purpose of the system is to automate the prioritization of user stories by analysing relationships, dependencies, and the context in which they exist. FIG. 1 illustrates an overview of the system of the present invention. The system (10) comprises an input unit (100), a processing unit (200) and an output unit (300), further comprising a data collection module (210), a data expansion module (220) utilizing advanced machine learning techniques, a context-graph construction module (230), a scoring module (240), and a prioritization module (250). The system operates to collect data from various sources, enrich and expand the information, and construct a context-graph to analyse task dependencies and prioritize them effectively. By employing automated analysis and scoring mechanisms, the system (10) enables streamlined decision-making processes and enhanced workflow management.
According to an embodiment of the present invention, the data collection module (210) acts as the primary interface for aggregating information from diverse sources, such as a code repository, project management tools, and documentation. This module ensures the data is properly categorized and prepared for further analysis. The data expansion module (220) employs advanced machine learning techniques to generate detailed attributes, such as task type, intent, urgency, and related tags. This structured representation ensures consistency and facilitates accurate mapping of relationships and dependencies. The context-graph construction module (230) visualizes tasks and their interdependencies in a directed graph format. The nodes in this graph represent tasks or user stories, while the edges represent dependencies, similarities, or other contextual relationships.
According to an embodiment of the present invention, the scoring module (240) of the system analyses tasks using their attributes, tags, and identified dependencies to assign a priority score. This module ensures that tasks with high urgency or significant impact are scored higher, enabling informed prioritization. The prioritization module (250) organizes tasks into a priority sequence based on their scores, ensuring that critical tasks are addressed first. This module creates actionable schedules and roadmaps, optimizing resource allocation and task execution.
In a further embodiment, the output unit delivers comprehensive reports and visualizations, including prioritized task lists, dependency hierarchies, and actionable recommendations. These outputs enable administrators to make well-informed decisions and improve operational efficiency.
In another preferred embodiment of the invention, a method for auto-prioritization of tasks and user stories is disclosed, wherein the system (10) identifies and evaluates dependencies and relationships among tasks. This method enables the system (10) to analyse, interpret, and derive actionable insights into interdependencies and contextual relationships to optimize task prioritization and execution. The method comprises the following steps:
1. Data Collection Module (210):
The data collection module (210) is designed to aggregate task-related information from diverse sources, including a code repository, project management tools, and documentation. This module systematically gathers raw data, ensuring that essential details, such as task descriptions, metadata, and associated tags, are accurately captured. The collected data is organized into a structured format, such as datasets or tables, enabling seamless processing by downstream modules. This module ensures comprehensive data acquisition, forming the foundation for subsequent enrichment and analysis.
Example: The data collection module connects to a repository, extracting commit history, open pull requests, and issue tracking information. Additionally, it connects to a project management tool to retrieve details of user stories, task descriptions, and their statuses. It also pulls relevant documentation from a shared knowledge base to gather additional context for the tasks.
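The aggregation step above can be sketched as follows. This is a minimal illustration, not the patented implementation; the record shapes (dictionary keys such as "labels" and "tags") are assumptions standing in for whatever a real repository, project management tool, or knowledge base would return.

```python
def collect_tasks(repo_issues, pm_stories, documents):
    """Merge raw records from a code repository, a project management
    tool, and a documentation store into one structured dataset."""
    tasks = []
    for issue in repo_issues:
        tasks.append({"id": issue["id"], "title": issue["title"],
                      "source": "repository", "tags": list(issue.get("labels", []))})
    for story in pm_stories:
        tasks.append({"id": story["id"], "title": story["title"],
                      "source": "project_management", "tags": list(story.get("tags", []))})
    for doc in documents:
        tasks.append({"id": doc["id"], "title": doc["title"],
                      "source": "documentation", "tags": []})
    return tasks

tasks = collect_tasks(
    repo_issues=[{"id": "R1", "title": "Fix bug in feature X", "labels": ["bug"]}],
    pm_stories=[{"id": "P1", "title": "Improve UI responsiveness", "tags": ["frontend"]}],
    documents=[{"id": "D1", "title": "Feature X design notes"}],
)
```

The uniform dictionary shape is what makes the downstream enrichment and graph construction tractable, as the specification notes.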
2. Data expansion module (220):
The data expansion module (220) utilizes advanced machine learning techniques to enhance the raw data collected. This process involves deriving detailed attributes for each task, such as its type (e.g., enhancement, bug fix), intent (e.g., new feature, maintenance), urgency, and associated tags. By incorporating contextual insights, the module generates a structured representation of each task, facilitating accurate mapping of relationships. The enriched data enhances downstream processing by enabling precise analysis of task attributes and interdependencies. Once the raw data is pulled from these sources, the system uses a large language model (LLM) to expand the data. This involves parsing the data, generating detailed explanations of tasks, dependencies, or related work, and creating meaningful attributes and tags.
Example: A task pulled from the project management tool may be listed as “Fix bug in feature X.” Using an LLM, the system can generate additional attributes like “bug severity,” “frontend,” and “impact area,” and further explain the task with expanded details like “Address the UI bug in feature X where the page fails to load on mobile devices.”
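The expansion step can be sketched as below. The described system queries an LLM for these attributes; here a simple rule-based stand-in is used so the example is self-contained, and the attribute names ("type", "urgency") are illustrative assumptions.

```python
def expand_task(task):
    """Return a copy of the task enriched with derived attributes.
    A rule-based stand-in for the LLM enrichment described above."""
    title = task["title"].lower()
    enriched = dict(task)
    enriched["type"] = "bug fix" if ("bug" in title or "fix" in title) else "enhancement"
    enriched["urgency"] = "high" if "critical" in title else "normal"
    tags = set(task.get("tags", []))
    if "ui " in title or "frontend" in tags:
        tags.add("frontend")   # tag inferred from context, as the LLM would
    enriched["tags"] = sorted(tags)
    return enriched

enriched = expand_task({"id": "P7", "title": "Fix critical bug in feature X", "tags": []})
```

A real deployment would replace the keyword rules with a prompt to the LLM asking for the task's type, intent, urgency, and tags, then parse the structured response into the same fields.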
3. Context-Graph Construction Module (230):
The context-graph construction module (230) organizes tasks in a graph structure, with nodes representing tasks, user stories, or other units of work, and edges representing relationships between tasks, such as dependencies, similarity in intent, or common attributes. The system queries the LLM to derive relationships and dependencies between tasks. This helps in understanding how one task may affect others. The system then accesses backlog or requirement data and further enhances it using LLMs to expand on the tasks’ attributes, tags, and intent. It identifies tasks that are most crucial based on dependencies and relationships.
Example: In the context-graph, one node might represent the task “Fix bug in feature X,” and another node might represent the task “Improve UI responsiveness in feature X.” An edge will exist between the two nodes, indicating that the UI responsiveness improvement is dependent on fixing the bug.
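The directed-graph structure described above can be sketched with plain dictionaries; the class and method names here are illustrative assumptions, not the specification's API.

```python
class ContextGraph:
    """Nodes are tasks or user stories; edges are directed relationships."""

    def __init__(self):
        self.nodes = {}   # task id -> task attributes
        self.edges = []   # (dependent_id, blocker_id, relation) triples

    def add_task(self, task):
        self.nodes[task["id"]] = task

    def add_dependency(self, dependent_id, blocker_id, relation="depends_on"):
        # the dependent task cannot proceed until the blocker is done
        self.edges.append((dependent_id, blocker_id, relation))

    def blockers_of(self, task_id):
        return [b for d, b, _ in self.edges if d == task_id]

    def dependents_of(self, task_id):
        return [d for d, b, _ in self.edges if b == task_id]

g = ContextGraph()
g.add_task({"id": "T1", "title": "Fix bug in feature X"})
g.add_task({"id": "T2", "title": "Improve UI responsiveness in feature X"})
g.add_dependency("T2", "T1")   # the UI work depends on the bug fix
```

Querying `dependents_of` gives the scoring module a direct measure of how many tasks a given task blocks.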
4. Scoring Module (240):
The scoring module (240) calculates a score for each task based on its attributes, dependencies, and relationships, which can be derived from the context-graph. It assigns priority scores to tasks based on their attributes, interdependencies, and potential impact. The module evaluates urgency, task type, associated tags, and contextual relevance to calculate a weighted score for each task. Higher scores are assigned to tasks that are time-sensitive, highly dependent on others, or have a significant impact on workflow. The scoring process ensures that prioritization is systematic, objective, and aligned with organizational goals. Each task and user story is scored based on:
- The urgency (e.g., bug vs. new feature).
- Dependencies (tasks with critical dependencies may get higher priority).
- Tags and attributes (tags like “urgent,” “critical,” “frontend” can affect priority).
Example: Tasks that are highly dependent on other tasks (e.g., “Fix critical bug”) or have urgent attributes (e.g., “frontend,” “high severity”) would receive a higher score, indicating a higher priority in the backlog.
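A weighted score consistent with the bullet points above can be sketched as follows. The weight values and the set of priority-raising tags are assumptions for illustration, not values taken from the specification.

```python
URGENT_TAGS = {"urgent", "critical", "high severity", "frontend"}

def score_task(task, num_dependents, weights=None):
    """Weighted priority score from urgency, dependent count, and tags."""
    w = weights or {"urgency": 3.0, "dependent": 2.0, "tag": 1.0}
    score = 0.0
    if task.get("urgency") == "high":
        score += w["urgency"]
    # blocking many other tasks raises priority
    score += w["dependent"] * num_dependents
    score += w["tag"] * len(URGENT_TAGS & set(task.get("tags", [])))
    return score

bug = {"id": "T1", "urgency": "high", "tags": ["critical", "frontend"]}
feature = {"id": "T2", "urgency": "normal", "tags": []}
# T1 blocks two other tasks; T2 blocks none
assert score_task(bug, num_dependents=2) > score_task(feature, num_dependents=0)
```

The weights would in practice be tuned to organizational goals, which is what makes the scoring "systematic, objective, and aligned" as the text describes.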
5. Prioritization Module (250):
The prioritization module (250) ranks tasks based on their score, allowing for prioritization of user stories according to urgency, importance, and dependency. It organizes tasks into a ranked sequence based on their calculated scores. This module generates actionable task lists and schedules, ensuring that critical tasks are addressed first. By creating optimized roadmaps and sprint plans, the prioritization module (250) enables efficient resource allocation and workflow management. It serves as the central decision-making component, guiding the execution process. The system uses the scores derived from the tasks to prioritize the user stories. This helps in creating a roadmap or sprint plan, ensuring that the most important tasks are completed first.
Example: The prioritization module would ensure that tasks like “Fix critical bug in feature X” with a high score are placed at the top of the sprint backlog, while less urgent tasks like “Implement new feature Y” with a lower score are placed later.
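The ranking itself reduces to a descending sort on the computed scores; the field names below are illustrative assumptions.

```python
def prioritize(scored_tasks):
    """Return task ids ordered from highest to lowest priority score."""
    ranked = sorted(scored_tasks, key=lambda t: t["score"], reverse=True)
    return [t["id"] for t in ranked]

backlog = prioritize([
    {"id": "Implement new feature Y", "score": 2.0},
    {"id": "Fix critical bug in feature X", "score": 9.0},
])
```

Because Python's sort is stable, tasks with equal scores keep their original backlog order, which is a reasonable tie-breaking behaviour for a sprint plan.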
6. Output Generation Module (260):
The output generation module (260) produces detailed reports and visualizations, including prioritized task lists, dependency hierarchies, and actionable recommendations. These outputs provide administrators with a clear understanding of the task landscape and help in resolving bottlenecks. The reports are designed to support strategic decision-making, enhance system (10) integration, and improve overall operational efficiency.
According to the embodiment of the present invention, the context-graph construction module (230) uses nodes to represent tasks, user stories, or other units of work, and edges to represent relationships between tasks.
Example: In the context-graph construction module (230), each node is a task or user story, such as “Develop login page” or “Fix bug in feature Z.” Relationships are represented by edges, for instance, an edge might connect “Develop login page” to “Integrate login API” to show the dependency between the two tasks. The system uses this graph to identify which tasks can be worked on concurrently and which tasks are blocking others.
The context-graph construction module (230) can also include weighted edges that represent the strength of the relationship. For example, if Task A is highly dependent on Task B, the edge between them will have a higher weight, signifying that Task A cannot be completed until Task B is finished.
According to the embodiment of the present invention, the data expansion module (220) utilizes a large language model to generate meaningful tags and attributes for each task based on context, and the tags are used to link similar tasks.
Example: The data expansion module extracts tasks like “Improve search feature” and generates tags such as “search,” “backend,” and “performance.” These tags are used to link the task to other tasks in the context-graph that also share similar attributes or intent, such as “Optimize database queries” or “Refactor search algorithm.” Tasks with similar tags are grouped together to create a more coherent and organized backlog. The tags generated by the LLM can be multi-dimensional, encompassing aspects such as task type (bug, feature, improvement), team (frontend, backend, full-stack), or urgency (critical, low priority). These help the system not only classify tasks but also enable automated grouping and prioritization based on shared characteristics.
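The tag-based linking described above can be sketched as a grouping step: tasks sharing a tag are collected together, mirroring how shared tags create edges in the context-graph. The tag values are illustrative assumptions.

```python
from collections import defaultdict

def group_by_tag(tasks):
    """Map each tag to the list of task ids that carry it."""
    groups = defaultdict(list)
    for task in tasks:
        for tag in task.get("tags", []):
            groups[tag].append(task["id"])
    return dict(groups)

groups = group_by_tag([
    {"id": "Improve search feature", "tags": ["search", "backend", "performance"]},
    {"id": "Optimize database queries", "tags": ["backend", "performance"]},
])
```

Any tag shared by two or more tasks identifies a candidate edge (or cluster) in the context-graph, which is how the backlog becomes "more coherent and organized" as the text puts it.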
According to the embodiment of the present invention, FIG. 2 describes the method of the present invention. The method for prioritizing tasks using a context-graph comprises the following steps:
1. Collecting data from multiple sources, including code repositories, project management tools, and documentation.
Example: The system collects data such as task details, code commit history, and relevant documentation from different sources. This aggregation allows the system to have a holistic view of the project, ensuring that no task or context is overlooked.
2. Expanding the data using a large language model to generate detailed attributes, tags, and intent descriptions from the raw data.
Example: The task “Optimize API response time” might be expanded by the LLM to include details like the expected performance improvements, the impacted components, and the urgency due to its impact on user experience. The tags generated might include “performance,” “backend,” “critical,” and “API.”
3. Storing the expanded data in a context-graph, with nodes representing tasks and edges representing the relationships and dependencies between them.
Example: Once the tasks are expanded with attributes, tags, and intent, they are added to the context-graph. Tasks that have direct dependencies, like “Fix bug in feature A” and “Test feature A,” are linked with edges. Additionally, tasks that share tags like “frontend” might be grouped together.
4. Analysing the relationships and dependencies using the context-graph, where tasks are evaluated based on their connections with other tasks.
Example: The system might identify that a task with high priority, like “Fix critical bug,” has many dependent tasks. These dependent tasks are considered in the prioritization process to ensure that no critical blockers remain in the backlog.
5. Scoring each task based on its attributes, dependencies, and relationships, with higher scores being assigned to tasks that are urgent, high-impact, or have many dependencies.
Example: A task like “Fix security vulnerability in payment system” would receive a high score because it is critical for security and could block the deployment of other features.
6. Prioritizing tasks based on their score, ensuring that the most urgent and high-impact tasks are addressed first.
Example: The scoring system ensures that tasks like “Fix security vulnerability in payment system” and “Resolve critical bugs” are given higher priority in the backlog, while tasks like “Implement new feature Y” or “UI improvement” might be scheduled for later sprints.
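The six method steps above can be combined into one end-to-end sketch. The LLM expansion is again replaced by a rule-based stand-in, and all names, weights, and keyword rules are illustrative assumptions rather than the specification's implementation.

```python
def auto_prioritize(raw_tasks, dependencies):
    """raw_tasks: list of {"id", "title"} dicts;
    dependencies: (dependent_id, blocker_id) pairs."""
    # Steps 1-2: collect and expand (rule-based stand-in for the LLM)
    tasks = {}
    for t in raw_tasks:
        title = t["title"].lower()
        tasks[t["id"]] = {
            "id": t["id"], "title": t["title"],
            "urgency": "high" if ("critical" in title or "security" in title) else "normal",
        }
    # Steps 3-4: context-graph relationships -> count of dependents per task
    dependents = {tid: 0 for tid in tasks}
    for _dependent, blocker in dependencies:
        dependents[blocker] += 1
    # Step 5: score (urgent tasks and heavily-depended-on tasks rank higher)
    def score(task):
        return (3.0 if task["urgency"] == "high" else 0.0) + 2.0 * dependents[task["id"]]
    # Step 6: prioritize by descending score
    return [t["id"] for t in sorted(tasks.values(), key=score, reverse=True)]

order = auto_prioritize(
    [{"id": "A", "title": "Fix security vulnerability in payment system"},
     {"id": "B", "title": "Implement new feature Y"},
     {"id": "C", "title": "Test payment system"}],
    dependencies=[("C", "A")],   # testing depends on the security fix
)
```

The security fix ranks first both because its title marks it urgent and because another task depends on it, matching the behaviour the examples above describe.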
The present invention offers significant advantages by revolutionizing the process of prioritizing tasks and user stories through a comprehensive, data-driven approach. By incorporating advanced components such as the data collection module (210) for aggregating information, the data expansion module (220) for deriving detailed attributes, the scoring module (240) for evaluating urgency and impact, and the context-graph construction module (230) for mapping relationships and dependencies, the invention ensures accurate prioritization and task sequencing. Additionally, the dependency evaluation module identifies critical interrelations, while the output generation module (260) provides actionable insights and optimized schedules for seamless implementation. This approach addresses the limitations of traditional methods, offering a robust solution that enables efficient analysis, prioritization, and execution of tasks, ultimately enhancing workflow management and improving operational outcomes.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
Claims: We claim,
1. A system and method for auto-prioritization of user stories using context-graph and large language models
characterized in that:
the system (10) comprises an input unit (100), a processing unit (200) further comprising a data collection module (210), data expansion module (220), a context-graph construction module (230), a scoring module (240), a prioritization module (250), and an output unit (300);
such that the data collection module (210) systematically gathers and organizes data from multiple sources, including a code repository, project management tools, and documentation, ensuring compatibility with subsequent processing; the data expansion module (220) employs machine learning techniques to extract and generate detailed attributes, structuring data for precise analysis; the context-graph construction module (230) creates a graph-based visualization of tasks and their interdependencies, dynamically updating relationships and hierarchies as new data is integrated; the scoring module (240) evaluates tasks based on their attributes, dependencies, and impact, assigning priority scores to guide decision-making; the prioritization module (250) organizes tasks into an optimized sequence based on their scores, generating actionable schedules and execution plans; the output generation module produces detailed reports, including prioritized task lists, dependency hierarchies, and actionable insights, enabling improved workflow management and decision-making;
and the method for auto-prioritization of user stories comprises the steps of:
a) collecting data from multiple sources, including code repositories, project management tools, and documentation;
b) expanding the data using a large language model to generate detailed attributes, tags, and intent descriptions from the raw data;
c) storing the expanded data in a context-graph, with nodes representing tasks and edges representing the relationships and dependencies between them;
d) analysing the relationships and dependencies using the context-graph, where tasks are evaluated based on their connections with other tasks;
e) scoring each task based on its attributes, dependencies, and relationships, with higher scores being assigned to tasks that are urgent, high-impact, or have many dependencies;
f) prioritizing tasks based on their score, ensuring that the most urgent and high-impact tasks are addressed first.

2. The system and method as claimed in claim 1, wherein the data collection module (210) connects to a repository, extracting commit history, open pull requests, and issue tracking information; it connects to a project management tool to retrieve details of user stories, task descriptions, and their statuses and pulls relevant documentation from a shared knowledge base to gather additional context for the tasks.

3. The system and method as claimed in claim 1, wherein the data expansion module (220) utilizes a large language model to generate meaningful tags and attributes for each task based on context, and the tags are used to link similar tasks.

4. The system and method as claimed in claim 1, wherein the data expansion module (220) derives detailed attributes for each task, including its type, intent, urgency, and associated tags.

5. The system and method as claimed in claim 1, wherein the context-graph construction module (230) uses nodes to represent tasks, user stories, or other units of work, and edges to represent relationships between tasks.
6. The system and method as claimed in claim 1, wherein the context-graph construction module (230) includes weighted edges that represent the strength of the relationship.
7. The system and method as claimed in claim 1, wherein in the scoring module (240) each task and user story is scored based on the urgency, dependencies, tags and attributes.

Documents

Application Documents

# Name Date
1 202521001042-STATEMENT OF UNDERTAKING (FORM 3) [06-01-2025(online)].pdf 2025-01-06
2 202521001042-POWER OF AUTHORITY [06-01-2025(online)].pdf 2025-01-06
3 202521001042-FORM 1 [06-01-2025(online)].pdf 2025-01-06
4 202521001042-FIGURE OF ABSTRACT [06-01-2025(online)].pdf 2025-01-06
5 202521001042-DRAWINGS [06-01-2025(online)].pdf 2025-01-06
6 202521001042-DECLARATION OF INVENTORSHIP (FORM 5) [06-01-2025(online)].pdf 2025-01-06
7 202521001042-COMPLETE SPECIFICATION [06-01-2025(online)].pdf 2025-01-06
8 Abstract1.jpg 2025-02-21
9 202521001042-POA [22-02-2025(online)].pdf 2025-02-22
10 202521001042-MARKED COPIES OF AMENDEMENTS [22-02-2025(online)].pdf 2025-02-22
11 202521001042-FORM 13 [22-02-2025(online)].pdf 2025-02-22
12 202521001042-AMMENDED DOCUMENTS [22-02-2025(online)].pdf 2025-02-22
13 202521001042-FORM-9 [25-09-2025(online)].pdf 2025-09-25
14 202521001042-FORM 18 [01-10-2025(online)].pdf 2025-10-01