
System And Method For Goal Oriented Macro Automation Of Application Development

Abstract: A system and method for goal-oriented macro-automation of application development, translating high-level business objectives into actionable, traceable, and executable workflows; wherein the system (100) comprises a data ingestion layer (10), a knowledge web construction layer (20), a lineage and dependency workflow module (30), a historical similarity analyzer (40), an impact analysis engine (50), a macro generation engine (60), and an orchestrator execution module (70). The data ingestion layer (10) normalizes heterogeneous artifacts into canonical forms, while the knowledge web construction layer (20) organizes them into a multi-layer graph ensuring complete traceability. The lineage and dependency workflow module (30) builds dependency graphs, the similarity analyzer (40) accelerates execution via historical reuse, and the impact engine (50) ranks affected modules. The macro generation engine (60) compiles goals into directed acyclic graphs with governance gates, executed iteratively by the orchestrator (70) through deterministic, generative, and validation nodes. This hybrid approach enables efficient, compliant, and adaptive development automation.


Patent Information

Application #
Filing Date
02 September 2025
Publication Number
41/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Persistent Systems
Bhageerath, 402, Senapati Bapat Rd, Shivaji Cooperative Housing Society, Gokhale Nagar, Pune - 411016, Maharashtra, India.

Inventors

1. Mr. Nitish Shrivastava
10764 Farallone Dr, Cupertino, CA 95014-4453, United States.

Specification

Description: FIELD OF INVENTION
The present invention relates to an automation framework. More particularly, it relates to a system and method for goal-oriented macro-automation of application development for translating high-level business objectives into actionable, traceable, and executable application development workflows using generative artificial intelligence and knowledge-driven orchestration mechanisms.

BACKGROUND
In modern enterprises, computer-based application development is guided by business objectives such as reducing response time, enhancing application availability, ensuring regulatory compliance, or improving user experience. Translating such high-level objectives into actionable development tasks typically requires substantial manual intervention across different roles in the software lifecycle. Product managers draft epics and stories, developers identify impacted modules and implement code changes, quality engineers build test cases and benchmarks, operations teams configure monitoring dashboards, and compliance officers validate regulatory alignment.
Modern software engineering increasingly relies on automation to accelerate development cycles, improve quality, and ensure compliance with business objectives. While task-specific automation exists—for coding, testing, deployment, and monitoring—the emerging need is for end-to-end macro-automation that bridges high-level business goals with low-level development artifacts. Such macro-automation enables organizations to translate abstract objectives (e.g., reducing latency or improving compliance) into actionable workflows that span requirements, code, testing, and operations. Advances in graph lineage algorithms, historical similarity analysis, and generative AI models make it possible to map business intent to technical implementation with greater accuracy, transparency, and adaptability. This vision aligns with the broader trend of building intelligent, self-orchestrating development environments that minimize manual mediation and ensure traceability across the entire software lifecycle.
However, existing systems face significant drawbacks. Computer-based application organizations still rely heavily on human mediation to connect business objectives with concrete development artifacts. Product managers write epics and stories, developers interpret tasks and identify impacted modules, test engineers design performance suites, operations teams build dashboards and monitoring, and governance officers validate compliance. This fragmented and manual process is slow, error-prone, and lacks end-to-end traceability between intent and execution. Current tools such as Jira, GitHub, Jenkins, and Confluence operate in silos, offering automation only within narrow scopes but failing to provide an orchestrated framework that unifies objectives, code, and validation. As a result, organizations struggle to maintain alignment between business goals and application artifacts, leading to inefficiencies, compliance risks, and missed opportunities for optimization.
Thus, there is a need for a system and method that provides end-to-end macro-automation bridging high-level business goals with low-level development artifacts using graph lineage workflows, historical similarity analysis, and generative AI models.

DEFINITIONS
The expression “large language models” or “LLMs” used hereinafter in this specification refers to systems that use natural language understanding to interpret and generate text. In this system, they help extract features and suggest metrics.
The term “DAG (Directed Acyclic Graph)” used hereinafter in this specification refers to a data structure used in software engineering to represent workflows, where nodes represent tasks and directed edges indicate dependencies between them.
The term “JSON (JavaScript Object Notation)” used hereinafter in this specification refers to a lightweight, text-based format for storing and exchanging data, widely used in web development for communication between servers and clients due to its readability.
The term “artifact” used hereinafter in this specification refers to a byproduct of software development that helps describe the architecture, design, and function of software.
The term “macro” used hereinafter in this specification refers to a pre-defined sequence of automated inputs designed to streamline repetitive tasks, acting as a set of instructions that can be executed to perform those tasks automatically.
The term “orchestrator” used hereinafter in this specification refers to a tool or system that coordinates and manages multiple automated tasks or processes across different systems, ensuring they execute in the correct sequence to achieve a desired outcome.
The term “similarity analyser” used hereinafter in this specification refers to a component supporting a broad range of applications, including but not limited to code recommendation, duplicate code detection, plagiarism detection, malware detection, and code smell detection.
The term “metadata” used hereinafter in this specification refers to the spectrum of information, such as timestamps, file formats, authorship, etc., which is fundamental to understanding, managing, and extracting value from data assets.
The term “metadata tuple” used hereinafter in this specification is defined as the canonical representation of an artifact after ingestion and normalization. Essentially, the metadata tuple is a uniform schema that allows the system to treat heterogeneous artifacts (from Git, Jira, Confluence, CI/CD, monitoring, etc.) consistently, so they can all be stitched together into the knowledge web and dependency graph.
The term “request for comments or RFC document” used hereinafter in this specification refers to an internal (or sometimes external) document that captures proposed changes, architectural decisions, design patterns, or improvement ideas for an application system, serving as a collaborative specification that team members can review, comment on, and refine before or alongside implementation.

OBJECTS OF THE INVENTION
The primary object of the invention is to provide a system and method for goal-oriented macro-automation of application development.
Yet another object of the invention is to ingest and unify artifacts from multiple enterprise systems including source control repositories (Git), project management tools (Jira), documentation platforms (Confluence/SharePoint), CI/CD pipelines, and observability dashboards.
Yet another object of the invention is to construct a knowledge web of artifacts with explicit lineage and dependency relationships, thereby ensuring end-to-end traceability between goals, code, tests, and governance requirements.
Yet another object of the invention is to implement dependency workflows that combine embeddings, graph traversal techniques, and statistical co-change analysis for accurate linking of related artifacts.
Yet another object of the invention is to leverage similarity matching against historical releases and pull requests in order to accelerate solution discovery and reuse proven development patterns.
Yet another object of the invention is to enable hybrid navigation of codebases using large language models together with static analysers to precisely locate impact zones for a given business goal.
Yet another object of the invention is to automatically generate macros consisting of subtasks such as user stories, code changes, configuration manifests, test suites, and monitoring dashboards.
The final object of the invention is to validate execution outcomes against governance rules, compliance guidelines, and performance targets, thereby ensuring quality, security, and regulatory adherence.

SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The invention discloses a system and method for goal-oriented macro-automation of application development comprising a data ingestion layer, a knowledge web construction layer, a lineage and dependency workflow module, a historical similarity analyzer, an impact analysis engine, a macro generation engine, and an orchestrator execution module. These components together provide an end-to-end framework for translating high-level business objectives into actionable workflows by unifying data, constructing lineage-aware graphs, analyzing historical patterns, generating dependency-driven macros, and executing them through a hybrid orchestrator.
In an aspect of the invention, the data ingestion layer captures heterogeneous artifacts from repositories, project management tools, knowledge bases, observability platforms, and governance metadata, and normalizes them into canonical representations. The knowledge web layer builds a multi-layer lineage graph that links goals, stories, code, tests, deployment artifacts, and policies with semantic relationships for traceability. The dependency workflow module establishes a probabilistic graph of relationships using embeddings, structural analysis, co-change history, and governance overlays. The historical similarity analyzer matches current goals with prior implementations to enable reuse of validated solutions. The impact analysis engine identifies and ranks modules most likely to be affected by a goal using static analysis, LLM reasoning, and historical co-change. The macro generation engine translates goals into directed acyclic graphs of subtasks and dependencies, inserting governance checks. Finally, the orchestrator module executes these macro graphs through deterministic nodes, generative nodes, and validation nodes in an iterative loop.
In a preferred aspect of the invention, the method begins with the user entering a business goal, which is parsed and linked to KPIs and project management artifacts. Data ingestion fetches and normalizes all relevant artifacts, while the knowledge web constructs lineage relationships. Historical similarity analysis provides reusable exemplars, and impact analysis ranks affected modules. The macro generation engine decomposes the goal into subtasks, maps them to relevant artifacts, and builds a dependency-ordered macro graph with governance gates. The orchestrator executes the graph node by node, generating code, running tests, performing compliance checks, deploying to performance environments, and validating observability metrics. Failed executions trigger regeneration or retries, while successful executions update repositories, dashboards, project tools, and documentation, recording full lineage and saving patterns as reusable templates for future goals.

BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be gained by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawing. The accompanying drawing, which is incorporated into and constitutes a part of the specification, illustrates one or more embodiments of the present invention and, together with the detailed description, serves to explain the principles and implementations of the invention.
FIG.1. illustrates the structural and functional components of the system.
FIG.2. illustrates the stepwise workflow.

DETAILED DESCRIPTION OF INVENTION
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention discloses a system and method for goal-oriented macro-automation of application development for translating high-level business objectives into actionable, traceable, and executable application development workflows using generative artificial intelligence and knowledge-driven orchestration mechanisms. The system (100) comprises a data ingestion layer (10), a knowledge web construction layer (20), a lineage and dependency workflow module (30), a historical similarity analyzer (40), an impact analysis engine (50), a macro generation engine (60), and an orchestrator execution module (70); such that the structural and functional components work in combination to employ a systematic stepwise workflow for goal-oriented macro-automation.
In an embodiment of the invention, the data ingestion layer (10) serves as the entry point for capturing and unifying diverse sources of structured and unstructured data across the application ecosystem; such that it continuously ingests artifacts from repositories (commits, branches, diffs, pull requests), project management tools (epics, stories, sprints), knowledge bases (Confluence pages, SharePoint documents), observability systems (dashboards, Prometheus/Grafana alerts), and governance metadata (security policies, compliance checklists). Each artifact, regardless of its origin or format, is systematically transformed into a canonical representation defined as a metadata tuple wherein Artifact = {Type, Content, Metadata, Relationships}; wherein type represents the category of the artifact (e.g., commit, pull request, Jira story, Confluence doc, test case, dashboard, policy); content represents the main payload of the artifact (e.g., source code, document text, test script, YAML config); metadata represent descriptive attributes that give context (timestamps, author, version, file format, labels, tags, commit ID, etc.); and relationships represent the explicit or inferred links to other artifacts (e.g., “validated_by test case,” “depends_on module,” “documented_in wiki page,” “constrained_by compliance policy”). This canonicalization ensures uniform handling of heterogeneous inputs thereby normalizing all artifacts into a consistent structure.
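By way of illustration only, the canonical metadata tuple described above may be sketched as follows; the field names follow the specification, while the normalization function and its input record are hypothetical:

```python
from dataclasses import dataclass, field

# Canonical representation: Artifact = {Type, Content, Metadata, Relationships}
@dataclass
class Artifact:
    type: str                                           # e.g. "commit", "jira_story", "dashboard"
    content: str                                        # main payload (code, doc text, YAML, ...)
    metadata: dict = field(default_factory=dict)        # timestamps, author, version, tags
    relationships: list = field(default_factory=list)   # e.g. ("validated_by", "test_42")

def normalize_commit(raw: dict) -> Artifact:
    """Illustrative normalizer: map a raw Git commit record into the canonical form."""
    return Artifact(
        type="commit",
        content=raw.get("diff", ""),
        metadata={"author": raw.get("author"), "sha": raw.get("sha")},
        relationships=[("part_of", raw.get("branch", "main"))],
    )

a = normalize_commit({"sha": "abc123", "author": "dev", "diff": "+ fix", "branch": "main"})
```

Regardless of source system, every artifact ends up in this one shape, which is what allows the downstream layers to stitch commits, stories, and dashboards into a single graph.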
In yet another embodiment of the invention, the knowledge web construction layer (20) builds a multi-layer graph that unifies business objectives, development artifacts, and governance controls into a lineage-aware web wherein the graph is organized into distinct layers including:
1. layer 1: goals/epics,
2. layer 2: stories/tasks,
3. layer 3: code modules/files,
4. layer 4: tests & benchmarks,
5. layer 5: deployment/monitoring artifacts, and
6. layer 6: governance/policies.
Further, the web construction layer (20) provides rich semantic relationships established across the multiple layers through edges representing various parameters such as depends_on (e.g., task → module), validated_by (e.g., module → test), constrained_by (e.g., module → governance policy), and documented_in (e.g., module → confluence page); thereby enabling encoding both hierarchy and dependencies, complete traceability across artifacts, ensuring that a change in one layer automatically propagates its impact throughout the graph.
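The layered graph and its semantic edges may be sketched minimally as a typed edge list; the edge types follow the specification, while the node identifiers are hypothetical examples:

```python
# Edges across the six layers: (source, relationship, target).
edges = [
    ("story:checkout-speed", "depends_on",     "module:cart_service"),
    ("module:cart_service",  "validated_by",   "test:cart_latency_bench"),
    ("module:cart_service",  "constrained_by", "policy:pci_dss"),
    ("module:cart_service",  "documented_in",  "doc:cart-architecture"),
]

def impacted(node, edge_list):
    """Follow outgoing edges transitively: everything a change to `node` propagates to."""
    seen, frontier = set(), [node]
    while frontier:
        cur = frontier.pop()
        for src, _rel, dst in edge_list:
            if src == cur and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen
```

A change to the story propagates through the module to its tests, policies, and documentation, which is the traceability property the layer is built to provide.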
In a next embodiment of the invention, the lineage & dependency workflow module (30) establishes a probabilistic dependency graph that links heterogeneous application artifacts into a unified structure, enabling accurate impact analysis and traceability. The process proceeds through the following steps:
1. Vector embedding, wherein the content of each artifact is represented using modality-specific embeddings (e.g., text embeddings for documents and code embeddings for source files);
2. Similarity matching, which computes cosine similarity scores to detect semantic closeness between artifacts;
3. Structural analysis, wherein code is parsed into abstract syntax trees (ASTs) and call graphs to extract explicit dependencies such as imports and API calls;
4. Statistical co-change analysis, which refines the graph by leveraging Git history to compute co-change frequencies and assign edge weights proportional to the co-change probability;
5. Governance overlay, which links artifacts to compliance rules by combining keyword matching with embedding-based alignment; and
6. Edge pruning, which applies a confidence threshold (θ) to remove weak links, resulting in a high-confidence, multi-modal dependency graph that integrates technical, semantic, and governance relationships.
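Steps 2 and 6 above (similarity matching followed by θ-thresholded pruning) may be sketched as follows; the embedding vectors are illustrative toy values, not output of a real embedding model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_edges(embeddings, theta=0.7):
    """Pairwise similarity matching, then edge pruning at confidence threshold θ."""
    edges = {}
    ids = list(embeddings)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            w = cosine(embeddings[a], embeddings[b])
            if w >= theta:                 # keep only high-confidence links
                edges[(a, b)] = w
    return edges

embeddings = {"doc": [1.0, 0.0], "code": [0.9, 0.1], "other": [0.0, 1.0]}
pruned = build_edges(embeddings, theta=0.7)
```

Only the semantically close pair survives pruning; weak cross-links are dropped, which is what keeps the resulting dependency graph high-confidence.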
In yet a next embodiment of the invention, the historical similarity analyzer (40) employs a historical work matching process to accelerate goal execution by leveraging prior solutions. A given goal is first represented as a goal embedding (g), which is compared against a library of historical pull request embeddings {p₁, p₂, …}. Using similarity computations, the system evaluates the closeness between the current goal and past development work. The most relevant results are then ranked as the top-k historical pull requests, which serve as reference exemplars. From these matches, the system transfers associated code modules, test cases, and implementation patterns into the current workflow, enabling efficient reuse of validated solutions. This process not only reduces redundancy in engineering efforts but also enhances consistency, reliability, and compliance by grounding new implementations in historically proven work.
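The top-k ranking of historical pull requests against the goal embedding (g) may be sketched as follows; the PR identifiers and vectors are hypothetical, and the sketch assumes all embeddings are already L2-normalized so that dot product equals cosine similarity:

```python
import heapq

def top_k_matches(goal_emb, pr_library, k=3):
    """Rank historical PR embeddings {p1, p2, ...} by similarity to the goal embedding g."""
    scored = ((sum(g * p for g, p in zip(goal_emb, emb)), pr_id)
              for pr_id, emb in pr_library.items())
    return [pr for _score, pr in heapq.nlargest(k, scored)]

library = {
    "PR-101": [1.0, 0.0],   # e.g. a prior caching change
    "PR-202": [0.0, 1.0],   # unrelated work
    "PR-303": [0.8, 0.6],   # partially related refactor
}
goal = [1.0, 0.0]
exemplars = top_k_matches(goal, library, k=2)
```

The returned exemplars are the reference points from which associated code modules, tests, and patterns are transferred into the current workflow.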
In yet a next embodiment, the impact analysis engine (50) identifies and prioritizes application components likely to be affected by a given change or goal, enabling proactive risk assessment and efficient planning where it first applies static analysis techniques, including call hierarchy traversal, control flow inspection, and import resolution, to derive a structural view of dependencies. In parallel, it leverages LLM Navigation, where a large language model is prompted with a natural language goal (e.g., “Given this goal: reduce checkout latency, locate likely modules impacted in repo”) to produce a ranked list of candidate modules. The system then applies a hybrid voting mechanism, combining outputs from static analysis, LLM reasoning, and historical co-change statistics into a unified confidence score, computed as Score_final = α*StaticScore + β*LLMScore + γ*HistoricalScore.
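The hybrid voting formula Score_final = α·StaticScore + β·LLMScore + γ·HistoricalScore may be sketched as follows; the weights and the candidate modules with their per-source scores are purely illustrative (in practice the weights would be tuned):

```python
def score_final(static_s, llm_s, hist_s, alpha=0.4, beta=0.4, gamma=0.2):
    """Unified confidence score combining static analysis, LLM reasoning, and co-change history."""
    return alpha * static_s + beta * llm_s + gamma * hist_s

# Hypothetical candidates as (StaticScore, LLMScore, HistoricalScore) in [0, 1].
candidates = {
    "checkout_service": (0.9, 0.8, 0.7),
    "cart_service":     (0.4, 0.6, 0.9),
    "auth_service":     (0.1, 0.2, 0.1),
}
ranked = sorted(candidates, key=lambda m: score_final(*candidates[m]), reverse=True)
```

The ranked list is what the macro generation engine consumes as the set of impacted modules (M).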
In yet a next embodiment of the invention, the macro generation engine (60) translates high-level goals into executable workflows represented as macro graphs in the form of directed acyclic graphs (DAGs), wherein the nodes denote subtasks such as creating a Redis cache layer, refactoring a database query, or generating a test suite, while edges represent dependencies, ensuring proper sequencing (e.g., refactoring must precede testing). At its core, a macro compiler processes the dependency graph (G) and a set of impacted modules (M) to construct the macro graph using the steps of:
1. decomposing the goal into a set of subtasks {s₁, s₂, …} using a large language model;
2. mapping subtasks to corresponding modules, tests, or monitoring dashboards within the system; and
3. building the DAG, where dependencies are ordered topologically and governance checks are automatically injected as blocking nodes to enforce compliance. The result is a lineage-aware macro that orchestrates development, testing, deployment, and governance in a unified execution plan.
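The topological ordering with injected governance gates may be sketched with the standard-library `graphlib` module (Python 3.9+); the subtask names are hypothetical examples for a latency-reduction goal:

```python
from graphlib import TopologicalSorter

# Macro graph: each node maps to the set of subtasks it depends on.
# The governance gate is injected as a blocking node before deployment.
macro = {
    "refactor_query":  set(),
    "add_redis_cache": {"refactor_query"},
    "generate_tests":  {"add_redis_cache"},
    "governance_gate": {"generate_tests"},      # blocking compliance check
    "deploy_perf_env": {"governance_gate"},
}
order = list(TopologicalSorter(macro).static_order())
```

Topological ordering guarantees that refactoring precedes testing and that nothing deploys before the compliance gate has run.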
In yet a further embodiment of the invention, the orchestrator execution module (70) is configured for executing the macro graph (DAG) through hybrid node types; wherein the orchestrator module handles the deterministic nodes, such as test runners, linters, and CI pipelines, which perform repeatable and rule-based operations; the generative nodes, where large language model (LLM) agents dynamically produce artifacts including code snippets, documentation, or configuration files; or the validation nodes, which apply anomaly detection to performance metrics and invoke compliance validators to ensure adherence to organizational and regulatory standards. Execution proceeds in an iterative fashion, such that failed or low-confidence nodes automatically trigger regeneration, alternative path selection, or retry mechanisms. This hybrid orchestration ensures both reliability and adaptability, allowing deterministic processes to provide stability while generative and validation mechanisms introduce flexibility and intelligence into the workflow.
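The iterative execute-and-retry loop may be sketched as follows; the node names and the flaky generative node are hypothetical stand-ins for real test runners and LLM agents:

```python
def run_macro(order, executors, max_retries=2):
    """Execute nodes in dependency order; failed nodes are retried, mimicking the
    orchestrator's regenerate-or-retry loop. `executors` maps node -> callable
    returning True on success."""
    results = {}
    for node in order:
        for attempt in range(max_retries + 1):
            if executors[node]():
                results[node] = ("pass", attempt)
                break
        else:
            results[node] = ("fail", max_retries)
            break                       # downstream nodes stay blocked
    return results

# A flaky generative node that succeeds only on its second attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return attempts["n"] >= 2

results = run_macro(["lint", "codegen"], {"lint": lambda: True, "codegen": flaky})
```

Deterministic nodes pass on the first try while the generative node recovers on retry, which is the stability-plus-adaptability property the hybrid orchestration targets.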
In a preferred embodiment of the invention, the system of the present invention employs a method comprising the steps of:
- entering goal by the user in the system (e.g., “reduce checkout latency by 30%”),
- parsing goal and extracting KPIs/constraints by the system,
- creating/linking epic in PM tool by the system,
- fetching repos/documents from VCS by data ingestion,
- fetching epics/requirements from PM tool by data ingestion,
- fetching docs/RFCs from Confluence/SharePoint by data ingestion,
- fetching metrics/traces from observability by data ingestion,
- normalizing to canonical artifacts by data ingestion,
- upserting entities and metadata by knowledge web,
- embedding content by knowledge web,
- inferring initial lineage edges (asset→code-change) by knowledge web,
- computing goal embedding by historical similarity,
- ranking prior releases/PRs/tickets by similarity by historical similarity,
- selecting top-k templates by historical similarity,
- performing static analysis (AST/taint/implication graph) by impact analysis,
- performing LLM code navigation (prompts over repo) by impact analysis,
- looking up historical hotspots by impact analysis,
- computing score_final = α·static + β·LLM + γ·history by impact analysis,
- selecting impacted modules/apis/tests by impact analysis,
- decomposing goal into stories/subtasks by macro compilation,
- mapping subtasks to modules/services by macro compilation,
- inserting governance gates (policy/compliance/capex budgets) by macro compilation,
- building macro DAG (nodes + dependencies) by macro compilation,
- queuing DAG for execution by orchestrator,
- picking next ready node from DAG by orchestrator,
- checking whether node is deterministic by orchestrator,
- generating code/boilerplates with LLM for non-deterministic nodes by orchestrator,
- running scripts/runners/builds/tests for deterministic nodes by orchestrator,
- proposing patch set by orchestrator,
- opening/updating PR in VCS with lineage tags by orchestrator,
- validating with governance policy/compliance checks by orchestrator,
- proceeding when validations pass by orchestrator,
- issuing fix directives when validations fail by orchestrator,
- regenerating/modifying patch by orchestrator,
- updating PR by orchestrator,
- triggering CI/CD build→deploy to perf env by orchestrator,
- running performance/integration tests by orchestrator,
- collecting latency deltas (p95/p99), errors, coverage by orchestrator,
- checking observability satisfaction by orchestrator,
- creating/updating dashboards and SLO alerts by orchestrator,
- pushing panels with new metrics by orchestrator,
- evaluating whether remaining executable nodes exist by orchestrator,
- looping to pick next node when nodes remain by orchestrator,
- declaring all nodes executed (DAG complete) by orchestrator,
- triggering next δ-goals improvement by orchestrator,
- auto-completing stories and epic with evidence in PM tool by orchestrator,
- exiting refinement plan (diagnostics + next tactics) by orchestrator,
- publishing RFC/runbook updates in docs by orchestrator,
- updating macro-DAG in cache/query plan/indexing by orchestrator,
- recording full lineage (goal→stories→PRs→tests→dashboards) in knowledge web by orchestrator,
- re-running orchestrator loop for subsequent goals by orchestrator,
- saving successful patterns as reusable templates by orchestrator.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations and modifications can be made in the preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.

Claims:
We claim,
1. A system and method for goal-oriented macro-automation of application development for translating high-level business objectives into actionable, traceable, and executable application development workflows;
wherein the system (100) comprises a data ingestion layer (10), a knowledge web construction layer (20), a lineage and dependency workflow module (30), a historical similarity analyzer (40), an impact analysis engine (50), a macro generation engine (60), and an orchestrator execution module (70);

characterized in that:
in a preferred embodiment of the invention, the system employs a method comprising the steps of:
- entering goal by the user in the system (e.g., “reduce checkout latency by 30%”),
- parsing goal and extracting KPIs/constraints by the system,
- creating/linking epic in PM tool by the system,
- fetching repos/documents from VCS by data ingestion,
- fetching epics/requirements from PM tool by data ingestion,
- fetching docs/RFCs from Confluence/SharePoint by data ingestion,
- fetching metrics/traces from observability by data ingestion,
- normalizing to canonical artifacts by data ingestion,
- upserting entities and metadata by knowledge web,
- embedding content by knowledge web,
- inferring initial lineage edges (asset→code-change) by knowledge web,
- computing goal embedding by historical similarity,
- ranking prior releases/PRs/tickets by similarity by historical similarity,
- selecting top-k templates by historical similarity,
- performing static analysis (AST/taint/implication graph) by impact analysis,
- performing LLM code navigation (prompts over repo) by impact analysis,
- looking up historical hotspots by impact analysis,
- computing score_final = α·static + β·LLM + γ·history by impact analysis,
- selecting impacted modules/apis/tests by impact analysis,
- decomposing goal into stories/subtasks by macro compilation,
- mapping subtasks to modules/services by macro compilation,
- inserting governance gates (policy/compliance/capex budgets) by macro compilation,
- building macro DAG (nodes + dependencies) by macro compilation,
- queuing DAG for execution by orchestrator,
- picking next ready node from DAG by orchestrator,
- checking whether node is deterministic by orchestrator,
- generating code/boilerplates with LLM for non-deterministic nodes by orchestrator,
- running scripts/runners/builds/tests for deterministic nodes by orchestrator,
- proposing patch set by orchestrator,
- opening/updating PR in VCS with lineage tags by orchestrator,
- validating with governance policy/compliance checks by orchestrator,
- proceeding when validations pass by orchestrator,
- issuing fix directives when validations fail by orchestrator,
- regenerating/modifying patch by orchestrator,
- updating PR by orchestrator,
- triggering CI/CD build→deploy to perf env by orchestrator,
- running performance/integration tests by orchestrator,
- collecting latency deltas (p95/p99), errors, coverage by orchestrator,
- checking observability satisfaction by orchestrator,
- creating/updating dashboards and SLO alerts by orchestrator,
- pushing panels with new metrics by orchestrator,
- evaluating whether remaining executable nodes exist by orchestrator,
- looping to pick next node when nodes remain by orchestrator,
- declaring all nodes executed (DAG complete) by orchestrator,
- triggering next δ-goals improvement by orchestrator,
- auto-completing stories and epic with evidence in PM tool by orchestrator,
- exiting refinement plan (diagnostics + next tactics) by orchestrator,
- publishing RFC/runbook updates in docs by orchestrator,
- updating macro-DAG in cache/query plan/indexing by orchestrator,
- recording full lineage (goal→stories→PRs→tests→dashboards) in knowledge web by orchestrator,
- re-running orchestrator loop for subsequent goals by orchestrator,
- saving successful patterns as reusable templates by orchestrator.

2. The system and method as claimed in claim 1, wherein the data ingestion layer (10) captures and unifies diverse sources of structured and unstructured data across the application ecosystem; such that it continuously ingests artifacts from repositories, project management tools, knowledge bases, observability systems, and governance metadata; such that each artifact is systematically transformed into a canonical representation that ensures uniform handling of heterogeneous inputs.

3. The system and method as claimed in claim 1, wherein the knowledge web construction layer (20) builds a multi-layer graph that unifies business objectives, development artifacts, and governance controls into a lineage-aware web wherein the graph is organized into distinct layers including goals/epics; stories/tasks; code modules/files; tests & benchmarks; deployment/monitoring artifacts; and governance/policies; and provides rich semantic relationships established across the multiple layers through edges representing various parameters enabling complete traceability across artifacts.

4. The system and method as claimed in claim 1, wherein the lineage & dependency workflow module (30) establishes a probabilistic dependency graph that links heterogeneous application artifacts into a unified structure, enabling accurate impact analysis and traceability using vector embedding, wherein the content of each artifact is represented using modality-specific embeddings; similarity matching that computes cosine similarity scores to detect semantic closeness between artifacts; structural analysis, wherein code is parsed into abstract syntax trees (ASTs) and call graphs to extract explicit dependencies such as imports and API calls; statistical co-change analysis that enables refinement by leveraging git history to compute co-change frequencies and assigning edge weights proportional to the co-change probability; a governance overlay that links artifacts to compliance rules by combining keyword matching with embedding-based alignment; and edge pruning that applies a confidence threshold (θ) to remove weak links, resulting in a high-confidence, multi-modal dependency graph that integrates technical, semantic, and governance relationships.
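As an illustrative sketch only, the similarity-matching and edge-pruning steps of claim 4 — cosine similarity between artifact embeddings, with links below a confidence threshold θ discarded — could be written as follows; the embeddings and the value of θ are illustrative assumptions.

```python
# Hypothetical sketch of claim-4 similarity matching and edge pruning.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_edges(embeddings, theta=0.8):
    """Keep only high-confidence links between artifact pairs."""
    ids = list(embeddings)
    edges = {}
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            score = cosine(embeddings[u], embeddings[v])
            if score >= theta:          # prune weak links below theta
                edges[(u, v)] = score
    return edges
```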

5. The system and method as claimed in claim 1, wherein the historical similarity analyzer (40) employs a historical work matching process by comparing a goal embedding (g) against a library of historical pull request embeddings {p₁, p₂, …} using similarity computations to evaluate the closeness between the current goal and past development work, thereby ranking the results as the top-k historical pull requests, and finally transferring the associated code modules, test cases, and implementation patterns into the current workflow, enabling efficient reuse of validated solutions.
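As an illustrative sketch, the historical work matching of claim 5 — scoring the goal embedding g against each past pull request embedding and keeping the top-k — could be implemented along these lines; the PR identifiers and vectors are fabricated for illustration.

```python
# Hypothetical sketch of claim-5 historical work matching (top-k retrieval).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_matches(goal_embedding, pr_embeddings, k=3):
    """Rank historical pull requests by similarity to the current goal."""
    scored = [(pr_id, cosine(goal_embedding, emb))
              for pr_id, emb in pr_embeddings.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]
```

The modules, tests, and patterns attached to the returned PRs would then be transferred into the current workflow.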

6. The system and method as claimed in claim 1, wherein the impact analysis engine (50) identifies and prioritizes application components likely to be affected by a given change or goal, enabling proactive risk assessment and efficient planning by applying static analysis techniques, LLM-based navigation, and a hybrid voting mechanism that combines outputs from static analysis, LLM reasoning, and historical co-change statistics into a unified confidence score.
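As an illustrative sketch, the hybrid voting mechanism of claim 6 — merging static-analysis, LLM-reasoning, and co-change signals into one confidence score per module — could be a weighted vote; the weights below are assumptions, not values from the specification.

```python
# Hypothetical sketch of the claim-6 hybrid vote over three signal sources.
def impact_confidence(static_score, llm_score, cochange_score,
                      weights=(0.4, 0.3, 0.3)):
    """Weighted vote; each input score is assumed to lie in [0, 1]."""
    ws, wl, wc = weights
    return ws * static_score + wl * llm_score + wc * cochange_score

def rank_impacted(modules):
    """modules: {name: (static, llm, cochange)} -> names by confidence."""
    return sorted(modules,
                  key=lambda m: impact_confidence(*modules[m]),
                  reverse=True)
```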

7. The system and method as claimed in claim 1, wherein the macro generation engine (60) translates high-level goals into executable workflows represented as macro graphs in the form of directed acyclic graphs (DAGs) wherein the nodes denote subtasks such as creating a Redis cache layer, refactoring a database query, or generating a test suite, while edges represent dependencies, ensuring proper sequencing (e.g., refactoring must precede testing).

8. The system and method as claimed in claim 1, wherein the micro compiler of the macro generation engine (60) processes the dependency graph (G) and a set of impacted modules (M) to construct the macro graph using the steps of:
1. decomposing the goal into a set of subtasks {s₁, s₂, …} using a large language model;
2. mapping subtasks to corresponding modules, tests, or monitoring dashboards within the system; and
3. building the DAG, where dependencies are ordered topologically and governance checks are automatically injected as blocking nodes to enforce compliance.
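As an illustrative sketch of step 3 above — ordering dependencies topologically and injecting governance checks as blocking nodes — the following uses Python's standard-library topological sorter; the subtask names and gate-naming scheme are assumptions.

```python
# Hypothetical sketch of claim-8 step 3: topological ordering with
# governance checks injected as blocking nodes before guarded subtasks.
from graphlib import TopologicalSorter

def build_macro_dag(deps, guarded):
    """deps: {task: set(prerequisites)}; guarded: tasks needing a gate.

    Returns an execution order in which each guarded task is preceded
    by its injected governance node.
    """
    dag = {t: set(d) for t, d in deps.items()}
    for task in guarded:
        gate = f"governance_check:{task}"
        dag[gate] = set(dag.get(task, set()))  # gate inherits prerequisites
        dag.setdefault(task, set()).add(gate)  # gate blocks the task
    return list(TopologicalSorter(dag).static_order())
```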

9. The system and method as claimed in claim 1, wherein the orchestrator execution module (70) is configured for executing the macro graph (DAG) through hybrid node types; wherein the orchestrator module handles the deterministic nodes, the generative nodes, and the validation nodes to ensure adherence to organizational and regulatory standards; and wherein the execution is iterative such that failed or low-confidence nodes automatically trigger regeneration, alternative path selection, or retry mechanisms.
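As an illustrative sketch only, the iterative execution policy of claim 9 — a failed or low-confidence node triggering another attempt (standing in for regeneration or alternative-path selection) — could be expressed as follows; the confidence threshold and retry count are assumptions.

```python
# Hypothetical sketch of the claim-9 retry policy for hybrid nodes.
def execute_with_retry(node, run, confidence_threshold=0.7, max_retries=2):
    """run(node) -> (success: bool, confidence: float).

    Returns (accepted, attempts_used); an exhausted node would be
    escalated or re-planned by the orchestrator.
    """
    for attempt in range(max_retries + 1):
        success, confidence = run(node)
        if success and confidence >= confidence_threshold:
            return True, attempt + 1     # accepted on this attempt
    return False, max_retries + 1        # retries exhausted
```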

Documents

Application Documents

# Name Date
1 202521083408-STATEMENT OF UNDERTAKING (FORM 3) [02-09-2025(online)].pdf 2025-09-02
2 202521083408-POWER OF AUTHORITY [02-09-2025(online)].pdf 2025-09-02
3 202521083408-FORM 1 [02-09-2025(online)].pdf 2025-09-02
4 202521083408-FIGURE OF ABSTRACT [02-09-2025(online)].pdf 2025-09-02
5 202521083408-DRAWINGS [02-09-2025(online)].pdf 2025-09-02
6 202521083408-DECLARATION OF INVENTORSHIP (FORM 5) [02-09-2025(online)].pdf 2025-09-02
7 202521083408-COMPLETE SPECIFICATION [02-09-2025(online)].pdf 2025-09-02
8 202521083408-FORM-9 [26-09-2025(online)].pdf 2025-09-26
9 202521083408-FORM 18 [01-10-2025(online)].pdf 2025-10-01
10 Abstract.jpg 2025-10-08