
System And Method To Create A Timeline Of Events From Organizational Data

Abstract: The present invention provides a system and method to create a timeline of events from organizational data; wherein the system (100) comprises a data ingestion layer (111) with a plurality of data connectors and sources (101, 110), a parsing and preprocessing layer (102) for data normalization and time resolution, and an intent-based topic modelling module (103) utilizing large language models (LLMs) for semantic clustering and extraction of 5W1H elements. An event generator engine (104) synthesizes structured event objects, while a relationship and causality engine (105) establishes temporal and logical links among events. An external enrichment module (106) supplements events with contextual information from CRM and public sources. Events are rendered into graph-based timelines by a timeline renderer (107) and presented via a visualization UI (108). A security and governance layer (109) ensures role-based access, anonymization, and audit logging. The method involves sequentially processing, analyzing, linking, enriching, and visualizing data to produce insightful and traceable event timelines.


Patent Information

Application #
Filing Date
11 July 2025
Publication Number
40/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Persistent Systems
Bhageerath, 402, Senapati Bapat Rd, Shivaji Cooperative Housing Society, Gokhale Nagar, Pune - 411016, Maharashtra, India.

Inventors

1. Mr. Nitish Shrivastava
10764 Farallone Dr, Cupertino, CA 95014-4453, United States.

Specification

Description:
FIELD OF INVENTION
The present invention relates to organizational data such as market intelligence, workflow management, and orchestration of tools. More specifically, it relates to a system and method for constructing a timeline of events from organizational data, transforming it into a time-based event map using internal data sources, machine learning models, and deterministic tools.

BACKGROUND
Organizational data are increasingly used in various business domains, such as workflow management, sales and marketing, market intelligence, and employee management. Such data has many use cases, most notably market intelligence and the segmentation of sales and marketing prospects. Existing systems use structured organizational data to support operations such as task routing and customer segmentation. In these systems, domain-specific structured data is often utilized to provide operational insights, such as process assignment, market targeting, competitive analysis, and HR performance tracking. While these systems handle routine operations and analyse data within specific areas well, they fall short when it comes to connecting scattered information from different parts of the organization into one clear, easy-to-understand timeline that shows how decisions were made and how things developed over time.
Prior Arts:
US11816596B2 describes methods, systems, and computer-readable media to generate an alert or recommendation based on an activity log. In some implementations, a computer-implemented method includes receiving, at a processor, an activity log associated with one or more enterprise sub-processes for a first time period; determining, at the processor, change context data based on a comparison of the entity data and the associated interaction data with corresponding entity data and associated interaction data of a second time period; analyzing the change context data to determine atomic event data; providing the atomic event data associated with the enterprise process as input to a trained machine learning model; determining, using the trained machine learning model and the atomic event data, a predicted future state of the enterprise process; and generating and transmitting an alert or recommendation to one or more data sinks associated with the enterprise process.
202541060334 discloses a blockchain-based transparent workflow management system tailored for enterprise-level operations involving multiple stakeholders and complex business processes. A modular workflow engine allows the graphical configuration of processes, while a middleware layer facilitates integration with existing ERP, CRM, and legacy systems. A real-time dashboard offers process analytics and alerts, improving operational efficiency and compliance. The system supports multi-tenancy and AI-powered workflow optimization. By combining decentralization, cryptographic security, and enterprise-grade interoperability, the invention enhances trust, accountability, and scalability in managing enterprise workflows across distributed environments.
While the first aforementioned prior art discloses a system that analyzes structured activity logs to detect changes and predict future states of enterprise processes using machine learning, it is primarily focused on forward-looking alert generation based on predefined sub-process data. The second prior art discloses a system that enhances workflow transparency and efficiency through blockchain, modular engines, and system integration, but it primarily addresses process execution and traceability within predefined workflows. Neither prior art addresses the need to reconstruct time-based, cross-domain narratives from both structured and unstructured enterprise data.
Furthermore, though enterprises generate massive volumes of fragmented data through systems like CRMs, project trackers, internal wikis, support logs, source code repositories, and emails, collecting a wealth of information, organizations still struggle to derive coherent, time-based narratives that reveal the evolution of decisions, projects, failures, or successes. There is a need for an intelligent system that connects disparate datasets, understands their context, and transforms them into an interactive, high-resolution timeline to support auditability, root-cause analysis, strategy validation, and forecasting.
To overcome these drawbacks, there is a need for a novel, deterministic, tool orchestration system that can intelligently integrate and interpret scattered enterprise data, whether structured like spreadsheets or unstructured like emails, and organize it into a clear timeline.

DEFINITIONS
The expression “system” used hereinafter in this specification refers to an ecosystem comprising, but not limited to, a system with a user, input and output devices, a processing unit, a plurality of mobile devices, a display unit and output; and extends to computing systems like mobiles, laptops, computers, PCs, etc.
The expression “data connectors or sources” used hereinafter in this specification refers to internal as well as external sources that enable data collection or retrieval using, but not limited to, mobiles, laptops, computers, PCs, keyboards, mice, pen drives, the web, or stored databases.
The expression “visualization user interface (UI)” used hereinafter in this specification refers to, but is not limited to, an onboard output device, a user interface (UI), a display kit, a local display, a screen, a dashboard, or a visualization platform enabling the user to visualize, observe or analyse any data or scores provided by the system.
The expression “processing unit” refers to, but is not limited to, a processor of at least one computing device that optimizes the system.
The expression “large language model (LLM)” used hereinafter in this specification refers to a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.
The expression “semantic embeddings” used hereinafter in this specification refers to numerical representations of text, images, or other data types that capture their meaning in a way that allows computers to understand relationships between concepts.
The expression “deterministic tool” used hereinafter in this specification refers to a tool that, given the same input, will always produce the same output and follow the same execution path. This predictability is crucial for many applications, including testing, debugging, and achieving reliable performance in critical systems.
The expression “API” used hereinafter in this specification refers to an Application Programming Interface, a set of rules and specifications that allows different software systems to communicate and interact with each other.
The expression “SQL” used hereinafter in this specification refers to Structured Query Language, a programming language specifically designed for managing and manipulating data in relational database management systems.
The expression “AWS S3” used hereinafter in this specification refers to a cloud-based object storage service offered by Amazon Web Services (AWS).
The expression “GCS” used hereinafter in this specification refers to Google Cloud Storage, a cloud-based object storage service offered by Google Cloud; a scalable, secure, and fully managed service for storing and accessing large amounts of unstructured data.
The expression “CRM” used hereinafter in this specification refers to customer relationship management, a technology that helps businesses manage and analyse customer interactions and data throughout the customer lifecycle.
The expression “parsing” used hereinafter in this specification refers to the process of analysing a sequence of tokens (e.g., characters in a string, words in a sentence) to determine its grammatical structure with respect to a given formal grammar.

OBJECTS OF THE INVENTION:
The primary object of the invention is to provide a system and method to create a timeline of events from organizational data.
Another object of the invention is to connect multiple internal data sources via APIs, agents, or database access layers.
Yet another object of the invention is to utilize tuned LLMs to analyse ingested data and identify high-level topics, subtopics, and entities based on intent, using contextual decomposition.
Yet another object of the invention is to apply schema parsers and static analysis tools to understand data formats, record types, and known field mappings.
Yet another object of the invention is to provide a system that converts semantic clusters and entities into timestamped event objects.
Yet another object of the invention is to provide a system and method that enriches the internal timeline.
Yet another object of the invention is to provide a feedback loop using relationship mapping based on historical corrections, feedback, or newly ingested data.

SUMMARY
Before the present invention is described, it is to be understood that the present invention is not limited to specific methodologies and materials described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention.
The system comprises a multi-layered architecture for converting organizational data into timelines of events. It includes a data ingestion layer with connectors to internal systems and external sources, a parsing and preprocessing layer for data normalization and schema alignment; an intent-based topic modelling module powered by LLMs for semantic understanding; an event generator engine for constructing structured event objects; a relationship and causality engine for establishing links and causal flows; an external enrichment module for pulling relevant external data; a timeline renderer and visualization UI for interactive display; a security and governance layer for access control, anonymization, and audit logs.
The method begins by collecting raw data through connectors from internal and external systems. This data is parsed and normalized for time alignment and tagging. LLM-driven topic modelling and semantic clustering extract key insights, including 5W1H elements. Structured event objects are then created using both semantic and deterministic cues, tagged with identifiers and metadata. These events are enriched with contextual data from external systems, followed by causal analysis to infer parent-child, temporal, or feedback relationships. The events are rendered into a graph-based timeline with options for threaded views and cause-effect animations. A visual UI presents these timelines to users, while the governance layer ensures secure, role-based access and privacy compliance.
The invention enables automatic construction of detailed, interactive timelines from disparate organizational data sources, offering granular and contextual visibility into events. It integrates LLMs and deterministic models to enhance semantic understanding and causality mapping. The system supports intuitive visualizations for pattern detection and root cause analysis. Its modularity ensures easy integration across enterprise tools, while robust governance ensures compliance, security, and privacy, making it suitable for audit, diagnostics, and strategic analysis.

BRIEF DESCRIPTION OF DRAWINGS
A complete understanding of the present invention may be made by reference to the following detailed description, which is to be taken in conjunction with the accompanying drawing. The accompanying drawing, which is incorporated into and constitutes a part of the specification, illustrates one or more embodiments of the present invention and, together with the detailed description, serves to explain the principles and implementations of the invention.
FIG.1. illustrates the structural and functional components of the system.
FIG.2. illustrates the stepwise method to create a timeline of events from organizational data.

DETAILED DESCRIPTION OF INVENTION:
Before the present invention is described, it is to be understood that this invention is not limited to methodologies described, as these may vary as per the person skilled in the art. It is also to be understood that the terminology used in the description is for the purpose of describing the particular embodiments only and is not intended to limit the scope of the present invention. Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results. Various embodiments of the present invention are described below. It is, however, noted that the present invention is not limited to these embodiments, but rather the intention is that modifications that are apparent are also included.
The present invention describes a system and method to create a timeline of events from organizational data; it pertains to transforming unstructured and structured organizational data into a time-based event map using machine learning models and deterministic tools; wherein the system (100) comprises a data ingestion layer (111) with a plurality of data connectors and sources (101) and a plurality of external sources (110), a parsing and preprocessing layer (102), an intent-based topic modelling module (via LLM) (103), an event generator engine (104), a relationship and causality engine (105), an external event enrichment module (106), a timeline renderer (107), a visualisation user interface (UI) (108), and a security and governance layer (109).
According to an embodiment of the invention, the data connectors and sources (101) interface with project management tools, CRMs, code repositories, documentation systems, support ticketing systems, and web data sources (101); comprising standardized APIs for connecting to internal systems such as Jira, GitHub, Notion, Zendesk, and CRM platforms, to databases such as SQL/NoSQL, and to cloud storages such as AWS S3 and GCS; and to external sources (110) such as Salesforce, public news APIs, and partner APIs.
According to another embodiment, the parsing and preprocessing layer (102) is configured for data normalization, time field extraction and resolution, metadata tagging, log format converters, and schema detectors, as exemplified elsewhere in the description. These preprocessing steps ensure that data is clean, structured, and uniformly formatted for subsequent analysis. The parsing and preprocessing layer (102) enables data normalization, which refers to the process of transforming data from different sources into a consistent, standardized format to enable uniform analysis and comparison, as exemplified below.
Example: Data normalization can be understood through the following hypothetical situations, where the system is configured to:
1. Unify date formats: converting 07/10/25, 10th July 2025, and 2025-07-10 into the YYYY-MM-DD format.
2. Standardize field names and values: mapping different field names like cust_name, client, and customerName to a unified schema, say customer_name.
3. Normalize case: making all text lowercase or applying title case as needed.
4. Remove noise: trimming whitespace, removing escape characters, cleaning HTML tags, etc.
5. Normalize units: converting 10k, 10,000, and ten thousand into the same numeric value.
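By way of a non-limiting illustration, these normalization steps may be sketched in Python; the date formats, field aliases, and unit rules below are assumptions chosen for the example (slash dates are assumed MM/DD/YY, and word forms such as "ten thousand" would need an additional lookup table):

```python
import re
from datetime import datetime

# Field-name aliases mapped onto a unified schema (names are illustrative).
FIELD_ALIASES = {"cust_name": "customer_name", "client": "customer_name",
                 "customerName": "customer_name"}

def normalize_date(value: str) -> str:
    """Convert common date spellings into YYYY-MM-DD."""
    value = re.sub(r"(\d)(st|nd|rd|th)\b", r"\1", value.strip())  # "10th" -> "10"
    for fmt in ("%m/%d/%y", "%Y-%m-%d", "%d %B %Y"):
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {value!r}")

def normalize_quantity(value: str) -> int:
    """Map 10k and 10,000 onto the same numeric value."""
    v = value.strip().lower().replace(",", "")
    return int(float(v[:-1]) * 1000) if v.endswith("k") else int(v)

def normalize_record(record: dict) -> dict:
    """Unify field names, trim whitespace, and lowercase text values."""
    out = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip().lower()   # case normalization + noise removal
        out[FIELD_ALIASES.get(key, key)] = value
    return out
```

In practice the alias table and accepted date formats would be configured per source system.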
According to an embodiment, the parsing and preprocessing layer (102) enables time field extraction and resolution for identifying and resolving all references to time (explicit and implicit) in structured and unstructured data, as exemplified below.
Example: The system extracts time references:
From structured fields like created_at, last_updated, etc.
From unstructured text like “on July 10, the deployment failed.”
The system enables resolution, converting relative time expressions like "yesterday", "last Friday", and "2 days ago" into absolute timestamps using the document’s context (e.g., creation time), and handling time zones and differences between systems (e.g., UTC vs. PST).
Further, it allows normalization, where all times are converted to a standard format like ISO 8601 (2025-07-10T14:30:00Z).
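A minimal sketch of such relative-time resolution, assuming the document's creation time is available as the reference; only two expression shapes are handled here, and phrases like "last Friday" would require a richer parser:

```python
import re
from datetime import datetime, timedelta, timezone

def resolve_time(expr: str, reference: datetime) -> str:
    """Resolve a relative time expression against a reference timestamp
    (e.g. document creation time) and emit an ISO 8601 UTC string."""
    expr = expr.strip().lower()
    if expr == "yesterday":
        resolved = reference - timedelta(days=1)
    else:
        m = re.fullmatch(r"(\d+) days? ago", expr)
        if not m:
            raise ValueError(f"unsupported expression: {expr!r}")
        resolved = reference - timedelta(days=int(m.group(1)))
    # Normalize to UTC so differences between systems (UTC vs. PST) disappear.
    return resolved.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# Illustrative reference: the document was created 2025-07-12 14:30 UTC.
ref = datetime(2025, 7, 12, 14, 30, tzinfo=timezone.utc)
print(resolve_time("2 days ago", ref))   # 2025-07-10T14:30:00Z
```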
According to a next embodiment, the metadata tagging by the parsing and preprocessing layer (102) refers to adding contextual information (metadata) to each data item to make it easier to search, analyze, and correlate, as exemplified below.
Example: Tags can include the source system (e.g., Jira, GitHub, Salesforce); the author or owner; a data type such as ticket, commit, incident, or meeting; entities such as customer name, team, or impacted service; and priority, status, or confidentiality level.
The metadata can be auto-tagged using parsing rules for structured sources and LLM/NLP techniques for unstructured content.
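A non-limiting sketch of the parsing-rule side of auto-tagging; the patterns and tag names are illustrative assumptions, and an LLM/NLP pass would handle the unstructured content these rules miss:

```python
import re

# Parsing rules for structured hints (patterns and tag names are illustrative).
RULES = [
    (re.compile(r"\b[A-Z]{2,}-\d+\b"), "ticket"),    # issue keys like PROJ-42
    (re.compile(r"\b[0-9a-f]{7,40}\b"), "commit"),   # git SHA fragments
]

def auto_tag(record: dict) -> dict:
    """Attach contextual metadata to one data item using simple parsing rules."""
    tags = {"source": record.get("source", "unknown")}
    text = record.get("text", "")
    for pattern, kind in RULES:
        if pattern.search(text):
            tags["type"] = kind
            break
    return tags
```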
According to a next embodiment, the log format converters of the parsing and preprocessing layer (102) are configured to convert diverse and proprietary log formats into a common schema or readable structure for analysis; wherein the parsing and preprocessing layer (102) ingests logs from different systems including, but not limited to, server logs, CRM logs, and error logs, which may have:
Different delimiters (e.g. CSV, JSON, Syslog, Apache logs); or
Different fields and ordering
The conversion includes the steps of;
parsing logic for each format (regex, custom parsers, or tools like Logstash and Fluentd)
mapping fields into a common schema like: timestamp, source, message, severity, etc.
merging the logs after converting and processing them consistently.
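These conversion steps may be illustrated as follows; the JSON keys (ts/level/msg) and the Apache-style pattern are assumptions chosen for the example:

```python
import json
import re

# Apache-style error-log line, e.g. "[2025-07-10T08:55:00Z] [warn] high latency"
APACHE_RE = re.compile(r"\[(?P<timestamp>[^\]]+)\] \[(?P<severity>\w+)\] (?P<message>.*)")

def from_json(line: str) -> dict:
    """Map a JSON log record onto the common schema."""
    rec = json.loads(line)
    return {"timestamp": rec["ts"], "source": "app",
            "severity": rec["level"], "message": rec["msg"]}

def from_apache(line: str) -> dict:
    """Map an Apache-style log line onto the same common schema."""
    m = APACHE_RE.match(line)
    return {"timestamp": m.group("timestamp"), "source": "apache",
            "severity": m.group("severity"), "message": m.group("message")}

# Merge after conversion: one consistent schema, ordered by timestamp.
merged = sorted(
    [from_json('{"ts": "2025-07-10T09:00:00Z", "level": "error", "msg": "deploy failed"}'),
     from_apache("[2025-07-10T08:55:00Z] [warn] high latency on /api")],
    key=lambda r: r["timestamp"])
```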
According to a next embodiment, the schema detectors of the parsing and preprocessing layer (102) refer to tools or techniques that automatically detect the structure (schema) of incoming data; wherein, while ingesting new data sources, the system scans a sample of the data; detects columns/fields, data types (string, number, date), and value patterns; and recognizes keys like IDs, timestamps, or status codes. It employs techniques such as heuristics (e.g., treating anything with a colon as a key:value pair), ML-based inference enabling pattern-based field classification, and predefined templates for known sources (like Salesforce export formats). This allows the system to auto-adapt to new data sources without requiring manual schema definitions.
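The heuristic portion of schema detection may be sketched as follows; the type patterns are deliberately simple and purely illustrative:

```python
import re

def infer_type(value: str) -> str:
    """Classify one value by pattern (simple illustrative heuristics)."""
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}([T ].*)?", value):
        return "date"
    if re.fullmatch(r"-?\d+(\.\d+)?", value):
        return "number"
    return "string"

def detect_schema(sample_rows: list) -> dict:
    """Scan a sample of rows and emit field -> type;
    fields with mixed types fall back to string."""
    seen = {}
    for row in sample_rows:
        for field, value in row.items():
            seen.setdefault(field, set()).add(infer_type(str(value)))
    return {f: (types.pop() if len(types) == 1 else "string")
            for f, types in seen.items()}
```

A production detector would additionally recognize key fields (IDs, status codes) and consult source-specific templates.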
According to another embodiment, the intent-based topic modelling module (103) uses LLMs with prompts designed to detect themes and intents in communication and tickets, generating semantic embeddings, clustering them using techniques like KMeans or HDBSCAN, and extracting 5W1H entities including who, what, why, when, where, and how; and the event generator engine (104) takes semantic and deterministic signals to create structured “event” objects, where each event object is a machine-readable, self-contained data unit that captures:
What happened (the action or outcome)
When it happened (timestamp)
Who or what was involved (entities)
Why or how it happened (if known)
Where it came from (source system and reference)
How it links to other events (e.g., caused by X, followed by Y).
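Such an event object may be sketched, for illustration only, as a Python data class; the field names are assumptions mirroring the 5W1H elements above:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Event:
    """A self-contained, machine-readable event object (field names illustrative)."""
    what: str                    # the action or outcome
    when: str                    # ISO 8601 timestamp
    who: list                    # involved entities
    why: str = ""                # cause or mechanism, if known
    source: str = ""             # originating system and reference (e.g. ticket ID)
    links: list = field(default_factory=list)   # IDs of related events
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
```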
According to yet another embodiment, the relationship and causality engine (105) uses LLM chains and rule-based systems to establish parent-child relationships (e.g., a code fix following an incident), temporal causality, and/or feedback loops and reversals including rollback events. The external event enrichment module (106) pulls related information from Salesforce, including, but not limited to, opportunities, accounts, and campaigns; and from the web, including, but not limited to, news APIs, press releases, or social sentiments.
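The rule-based portion of such linking may be sketched as follows; the event types, the 72-hour window, and the nearest-preceding-incident rule are illustrative assumptions, and an LLM chain would refine or veto these candidate links:

```python
from datetime import datetime, timedelta

def link_fix_to_incident(events: list, window_hours: int = 72) -> list:
    """Attach each 'fix' event to the nearest preceding 'incident'
    within the time window; returns (parent_id, child_id) pairs."""
    links = []
    window = timedelta(hours=window_hours)
    for fix in (e for e in events if e["type"] == "fix"):
        candidates = [e for e in events
                      if e["type"] == "incident"
                      and e["when"] < fix["when"] <= e["when"] + window]
        if candidates:
            parent = max(candidates, key=lambda e: e["when"])
            links.append((parent["id"], fix["id"]))
    return links
```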
According to yet another embodiment, the timeline renderer (107) creates a graph-based timeline with options for threaded views (by intent or owner), cross-correlation maps, and cause-effect animations, which are displayed to the user using a visualization UI (108). Further, the security and governance layer (109) provides role-based access to sensitive events, and enables an anonymization module and audit logs.
According to yet another embodiment, the anonymization module enables the process of altering data so that individuals or entities cannot be identified, either directly (e.g., names, email addresses) or indirectly (e.g., unique combinations of fields), such that the data still remains useful for analysis, visualization, or sharing. When the system is configured to create timelines from organizational data, it might ingest fields including, but not limited to, customer names, employee emails, ticket submitter names, IP addresses, internal system names, or project codes. The anonymization module protects privacy and complies with regulations (like GDPR, HIPAA, or internal enterprise policies) by automatically detecting and anonymizing PII or sensitive fields, and supports role-based views, by using techniques like:
- Masking (e.g., replacing names with placeholders)
- Pseudonymization (e.g., replacing names with consistent but fake tokens)
- Tokenization or Hashing (irreversible obfuscation)
- Generalization (e.g., showing “Senior Engineer” instead of exact title).
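The first three techniques may be sketched, purely for illustration, as follows; the secret key, placeholder, and token format are assumptions:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-this-key"   # pseudonymization key (illustrative)

def mask(value: str) -> str:
    """Masking: replace the value with a placeholder."""
    return "[REDACTED]"

def pseudonymize(value: str) -> str:
    """Pseudonymization: the same input always yields the same fake token,
    so events about one person remain correlatable without revealing identity."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:10]
    return f"person_{digest}"

def hash_value(value: str) -> str:
    """Tokenization/hashing: irreversible obfuscation of the raw value."""
    return hashlib.sha256(value.encode()).hexdigest()
```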
In a preferred embodiment of the invention, the stepwise method to create a timeline of events from organizational data includes the following steps:
- collecting the raw data originating from various external (110) or internal data sources (101),
- connecting data connectors (101) to internal systems such as CRM platforms and databases, or to external sources (110) such as Salesforce, public news, and partner APIs,
- normalizing and parsing data by the parsing and preprocessing layer (102) using steps like data normalization, time field extraction and resolution, metadata tagging, and conversion of various log formats into standardized schema structures,
- using LLMs by the intent-based topic modelling module (103) and deterministic structural analysis, where custom-designed prompts and natural language processing techniques help detect themes and intents from communication logs and service tickets,
- generating semantic embeddings and clustering which are used to group related topics thereby extracting 5W1H elements such as who, what, why, when, where, and how, enabling a granular and contextual understanding of each data segment,
- synthesizing both semantic and deterministic signals to construct structured event objects by the event generator engine (104), where each event is tagged with a unique ID, timestamp, topic, relevant entities (e.g., customer name, system affected, impact level, and cause), and linked references such as ticket IDs or pull request numbers, ensuring traceability and contextual relevance,
- enriching the events with external contextual data from external sources (110) to make them more comprehensive and actionable,
- processing the enriched events by the relationship and causality engine (105), which uses both LLM chains and rule-based logic to infer connections between events, detecting parent-child relationships, temporal causality, and even feedback loops like rollbacks or system reversals, enabling the users to understand what happened, and why and how the events are interrelated,
- feeding the causally linked events to the timeline renderer (107), which creates interactive, graph-based timelines,
- exploring the threaded views by the users, based on intent or ownership enabling the users to visualize cross-correlation maps, and view animations that illustrate cause-and-effect flows using the visualization UI (108) such that the user interface transforms complex event data into intuitive and insightful timelines,
- governing the entire system (100) by a robust security and governance layer (109) which enforces role-based access control to protect sensitive event information, includes anonymization modules for privacy preservation, and maintains detailed audit logs to ensure accountability and compliance with data governance policies.
According to a next embodiment of the invention, the system (100) and method of the invention enables automatic construction of detailed, interactive timelines from disparate organizational data sources, offering granular and contextual visibility into events. It integrates LLMs and deterministic models to enhance semantic understanding and causality mapping. The system (100) supports intuitive visualizations for pattern detection and root cause analysis. Its modularity ensures easy integration across enterprise tools, while robust governance ensures compliance, security, and privacy, making it suitable for audit, diagnostics, and strategic analysis.
While considerable emphasis has been placed herein on the specific elements of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
CLAIMS:
We claim,
1. A system and method to create a timeline of events from organizational data, thereby transforming unstructured and structured organizational data into a time-based event map;
wherein the system (100) comprises a data ingestion layer (111) with a plurality of data connectors or sources (101) and external sources (110), a parsing and preprocessing layer (102), an intent-based topic modelling module (via LLM) (103), an event generator engine (104), a relationship and causality engine (105), an external event enrichment module (106), a timeline renderer (107), a visualisation user interface (UI) (108), and a security and governance layer (109);
characterised in that:
the method to create a timeline of events from organizational data includes the steps of:
- collecting the raw data originating from various external (110) or internal data sources (101);
- connecting data connectors (101) to internal systems such as CRM platforms and databases, or to external sources (110) such as Salesforce, public news, and partner APIs;
- normalizing and parsing data by the parsing and preprocessing layer (102) using steps like data normalization, time field extraction and resolution, metadata tagging, and conversion of various log formats into standardized schema structures;
- using LLMs by the intent-based topic modelling module (103) for deterministic structural analysis;
- generating semantic embeddings and clustering which are used to group related topics thereby extracting 5W1H elements such as who, what, why, when, where, and how, enabling a granular and contextual understanding of each data segment;
- synthesizing both semantic and deterministic signals to construct structured event objects by the event generator engine (104);
- enriching the events with external contextual data from external sources (110) to make them more comprehensive and actionable;
- processing the enriched events by the relationship and causality engine (105), which uses both LLM chains and rule-based logic to infer connections between events, detecting parent-child relationships, temporal causality, and even feedback loops like rollbacks or system reversals;
- feeding the causally linked events to the timeline renderer (107), which creates interactive, graph-based timelines;
- exploring the threaded views by the users, based on intent or ownership enabling the users to visualize cross-correlation maps, and view animations that illustrate cause-and-effect flows using the visualization UI (108) such that the user interface transforms complex event data into intuitive and insightful timelines;
- governing the entire system (100) by a robust security and governance layer (109) which enforces role-based access control to protect sensitive event information, includes anonymization modules for privacy preservation, and maintains detailed audit logs to ensure accountability and compliance with data governance policies.

2. The system as claimed in claim 1, wherein the data connectors and sources (101) interface with project management tools, CRMs, code repositories, documentation systems, support ticketing systems, and web data sources (101) including, but not limited to, standardized APIs for connecting to internal systems such as Jira, GitHub, Notion, Zendesk, and CRM platforms, to databases such as SQL/NoSQL, or to cloud storages such as AWS S3 and GCS; and to external sources (110) such as Salesforce, public news APIs, and partner APIs.

3. The method as claimed in claim 1, wherein the deterministic structural analysis enables custom-designed prompts and natural language processing techniques that help detect themes and intents from communication logs and service tickets.

4. The method as claimed in claim 1, wherein the structured event objects synthesized by the event generator engine (104) are tagged with a unique ID, timestamp, topic, relevant entities such as customer name, system affected, impact level, and cause, and linked references such as ticket IDs or pull request numbers, ensuring traceability and contextual relevance.

5. The system as claimed in claim 1, wherein the external event enrichment module (106) pulls related information from Salesforce, including, but not limited to, opportunities, accounts, and campaigns; and from the web, including, but not limited to, news APIs, press releases, or social sentiments.

6. The system and method as claimed in claim 1, wherein the anonymization module enables the process of altering data so that individuals or entities cannot be identified, either directly or indirectly, such that the data still remains useful for analysis, visualization, or sharing; protecting privacy and complying with regulations or internal enterprise policies by automatically detecting and anonymizing PII or sensitive fields, and supporting role-based views, using techniques like masking, pseudonymization, tokenization or hashing, or generalization.

7. The system and method as claimed in claim 1, wherein the system is configured to create timelines from organizational data, and might ingest fields including, but not limited to, customer names, employee emails, ticket submitter names, IP addresses, internal system names, or project codes.

Dated this 11th day of July, 2025.

Documents

Application Documents

# Name Date
1 202521066229-STATEMENT OF UNDERTAKING (FORM 3) [11-07-2025(online)].pdf 2025-07-11
2 202521066229-POWER OF AUTHORITY [11-07-2025(online)].pdf 2025-07-11
3 202521066229-FORM 1 [11-07-2025(online)].pdf 2025-07-11
4 202521066229-FIGURE OF ABSTRACT [11-07-2025(online)].pdf 2025-07-11
5 202521066229-DRAWINGS [11-07-2025(online)].pdf 2025-07-11
6 202521066229-DECLARATION OF INVENTORSHIP (FORM 5) [11-07-2025(online)].pdf 2025-07-11
7 202521066229-COMPLETE SPECIFICATION [11-07-2025(online)].pdf 2025-07-11
8 Abstract.jpg 2025-07-30
9 202521066229-FORM-9 [26-09-2025(online)].pdf 2025-09-26
10 202521066229-FORM 18 [01-10-2025(online)].pdf 2025-10-01