
Computing System Based Data Platform And Method For Structured Data Package Analysis And Evolution

Abstract: The present invention provides a data platform (500) and a method (1000) for transforming conversational outputs (620) into structured, reusable Adaptive Data Packages (ADPs) (570). The data platform (500), configured within a network-enabled computing device (510), comprises a user interface (550), catalogue (560), and machine learning modules (555) containing data processing routines (610) such as APIs (700), cloud engines (710), and microservices (720). The method (1000) includes selecting ADPs (570) and machine learning modules (600), applying the data processing routines (610) externally without accessing original data sources (540), and generating the conversational outputs (620) using external knowledge bases (830) and ontology modules (840). The conversational outputs (620) are converted into new ADPs (570) and stored for iterative reuse. The data platform (500) ensures modularity, traceability, and scalability while preserving analytical lineage and minimizing computational overhead.


Patent Information

Application Number: 202541080259
Filing Date: 25 August 2025
Publication Number: 36/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

TRIANZ DIGITAL CONSULTING PRIVATE LIMITED
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076

Inventors

1. Srikanth Rao Manchala
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
2. Anil Kumar Gupta
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
3. Gaurav Mittal
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
4. Meenal Singh
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
5. Sumit Kumar
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
6. Abhishek Rao
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
7. Rahul Pant
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076

Specification

Description:
FIELD OF THE INVENTION:
[0001] The present invention relates to the field of data science and enterprise data platforms, specifically to a platform and a method for analytical processing and management of adaptive data packages derived from structured data. The invention further pertains to data platforms configured for contextual analysis, reuse, and transformation of structured data assets/data products/data packages in network-enabled environments.
BACKGROUND OF THE INVENTION:
[0002] A data platform is a computing system designed to manage, process, and analyze data efficiently across various sources. The data platform operates on a network-enabled computing device that includes key hardware components such as a processor for executing analytical tasks, memory for handling active data and computations, and persistent storage for maintaining datasets and system files. High-speed network interfaces enable seamless connectivity with external data sources and services. These hardware elements work together to support real-time data operations, secure access control, and scalable analytics, forming the foundation for advanced data-driven applications.
[0003] In conventional data platforms or methods, conversational outputs, such as insights derived from user queries or interactions, are typically transient and not preserved in a structured, reusable format. This results in a loss of analytical context, making it difficult to trace, replicate, or build upon previous analyses. Users often need to reprocess raw data to regenerate similar insights, leading to inefficiencies and redundant computation.
[0004] Traditional systems and methods lack mechanisms to encapsulate intermediate analytical or conversational results as formal data entities, which hinders iterative and layered analytics. Analysts (data analysts/users) must repeatedly access and process source data to refine or extend insights, which introduces latency and computational burden.
[0005] US2017330086A describes a data platform that includes a user interface for interacting with structured data, supports the creation and reuse of data packages, and integrates external analytical routines. Analytical outputs can be stored within the platform. However, the platform does not support the generation of new structured data assets derived from analytical or conversational outputs. The absence of mechanisms to transform derived insights into reusable, queryable data packages limits the platform’s ability to autonomously expand its data repository based on user interactions.
[0006] IN20194104402 describes a data platform with structured data handling, a user interface, and integration of external analytical routines. The platform enables the creation and reuse of data packages and supports storing analytical outputs. The platform (system) lacks functionality to convert analytical or conversational results into new structured data entities that can be independently queried and reused. This limitation restricts the platform’s capability to support iterative enrichment of the catalogue based on dynamic analytical outcomes.
[0007] IN202421046935 presents a basic data platform that includes a user interface and support for structured data. The data platform offers features for creating new data packages but lacks integration with external analytical routines and does not support reusable data assets. The platform does not provide a mechanism to derive new structured data packages from analytical or conversational outputs, thereby preventing the platform from supporting continuous refinement and expansion of its data assets through user-driven insights.
[0008] WO2020405434A introduces a data platform with structured data support, a user interface, and capabilities for adding and selecting data packages. The platform integrates external analytical routines and supports reusable data packages. However, the platform does not include functionality to transform analytical or conversational outputs into new structured data packages that can be stored, queried, and reused. This limitation affects the platform’s ability to organically grow its catalogue in response to evolving analytical contexts and user interactions.
[0009] Therefore, there is a need for a data platform and a method which overcome the problems of the prior art.
OBJECTS OF THE INVENTION:
[0010] An object of the present invention is to provide a data platform and method that enables the transformation of conversational outputs into structured, reusable data assets, thereby preserving analytical context and enhancing continuity in data-driven decision-making.
[0011] One more object of the present invention is to eliminate the need for repeated access to raw data by allowing derived insights to be stored and reused, thus reducing computational overhead and improving operational efficiency.
[0012] Another object of the present invention is to support iterative and compound analytics by enabling intermediate analytical results to be encapsulated as formal data entities, facilitating layered and scalable analytical workflows.
[0013] Yet another object of the present invention is to enable dynamic expansion of the data repository based on user interactions and analytical outcomes, thereby improving adaptability and responsiveness to evolving analytical requirements.
[0014] A further object of the present invention is to offer a flexible and extensible data platform architecture that supports multi-modal interfaces, remote access, and integration with external analytical services, enhancing usability across diverse environments and user roles.
SUMMARY OF THE INVENTION:
[0015] The present invention discloses a computing system-based data platform and method for managing, analyzing, and evolving structured data assets derived from conversational and analytical outputs. The invention addresses the problem of analytical context loss and inefficiency in iterative data workflows, where conventional platforms fail to preserve conversational insights as reusable data entities. This results in redundant data processing, lack of traceability, and limited support for compound analytics. The present invention overcomes these limitations by enabling the transformation of conversational outputs into structured, queryable Adaptive Data Packages (ADPs), thereby preserving analytical lineage and supporting continuous enrichment of the data repository.
[0016] The data platform comprises a network-enabled computing device configured with a user interface, a catalogue for storing ADPs, and one or more machine learning modules. The ADPs are derived from structured data sources and encapsulate real-time data, embedded queries, and access control mechanisms. The data platform supports multimodal interaction through graphical, voice, AI-generated, and service-based interfaces. The machine learning modules execute analytical routines using APIs, cloud engines, and containerized microservices. The data platform includes dynamic (ad-hoc) and scheduled (non-ad-hoc) query execution modules, a natural language processing module for interpreting user queries, and integration with external knowledge bases and ontologies for semantic enrichment. The conversational output is converted into a new ADP and stored in the catalogue for future reuse.
[0017] The method includes configuring platform components, connecting to structured data sources, generating and storing ADPs, and presenting them via the user interface. The ADPs are loaded into structured in-memory representations and stored in shared memory for real-time access. Upon receiving a conversation, the machine learning model generates a contextual output based on the loaded ADP and predefined instructions. The contextual output is transformed into a new ADP and added to the catalogue. The data platform supports SaaS deployment, remote access, multi-tenant usage, and performance monitoring. Technical advantages of the present invention include reduced computational overhead, enhanced traceability, support for iterative analytics, and the ability to formalize and reuse conversational insights without reprocessing raw data.
BRIEF DESCRIPTION OF DRAWINGS:
[0018] Figure 1 shows a block diagram of a data platform in accordance with the present invention;
[0019] Figures 2 to 7 show schematic views of various components of the data platform shown in Figure 1; and
[0020] Figure 8 shows a flow chart of a method for managing adaptive data packages using the data platform shown in Figure 1.
DETAILED DESCRIPTION OF DRAWINGS:
[0021] In a preferable embodiment of the present invention (Figure 1), a data platform (500) is provided. The data platform (500) is configured in a network-enabled computing device (510), which encompasses various hardware and software components. The data platform (500) may include a cloud server (501), an edge computing node (502), or an IoT gateway (503), each equipped with a network interface module (504) to enable seamless data transmission. The data platform (500) leverages a processing unit (505) for executing analytics workloads, a storage subsystem (506) for managing structured and unstructured data, and a security module (507) for enforcing data protection protocols. For example, a cloud server (501) may ingest telemetry data from IoT sensors via the gateway (503), process it using the analytics engine (not shown), and store the results in the data repository (506a) for further access and visualization. The computing device (510) includes at least one memory (520) and one processor (530), which together enable the execution of platform functionalities. The memory (520) may include volatile memory such as RAM for temporary data storage and non-volatile memory such as flash storage for persistent data retention. The processor (530), which may be a multi-core CPU or a specialized GPU, handles computational tasks including data ingestion, transformation, and model inference, ensuring efficient operation of the data platform (500).
[0022] The data platform (500) is connected to one or more data sources (540), which may include structured databases (541), unstructured data lakes (542), real-time data streams (543), or external APIs (544). Said data sources provide raw input that the data platform (500) ingests, processes, and analyzes to generate actionable insights. For example, a real-time data stream (543) from IoT sensors can be continuously monitored, while historical data from a relational database (541) may be used for trend analysis and model training.
[0023] The data platform (500) includes a user interface (550), a catalogue (560), and one or more machine learning modules (555).
[0024] The user interface (550) (Figure 2) is configured within the memory (520) and executed by the processor (530) to facilitate interaction between the user and the data platform (500). The user interface (550) may include graphical components (551) such as dashboards, charts, and control panels, as well as input modules (552) like forms and command prompts. The dashboards, charts, and control panels are rendered using display logic stored in the memory (520), such as UI templates (523), and are dynamically updated based on real-time data processed by the processor (530). For example, a dashboard serving as a graphical component (551) may visualize sensor metrics retrieved from the data source (540), allowing users to monitor system performance and trigger actions.
[0025] The user interface (550) of the data platform (500) is designed to support multimodal interaction, enabling users to engage with data and analytics through various input and output mechanisms.
[0026] The graphical display (660) provides a visual interface for users to interact with the platform. The graphical display (660) includes dashboards, charts, tables, and interactive elements rendered on screens such as monitors, tablets, or mobile devices. The graphical display allows users to select adaptive data packages (ADPs) (570), configure the machine learning modules, and view conversational outputs in structured formats. For example, a user may view a time-series chart showing monthly revenue trends across regions.
[0027] A voice-based interface (670) enables voice-driven interaction using speech recognition and synthesis technologies. A microphone is required to capture spoken queries, while speakers or headphones are used to deliver audio responses. The voice-based interface (670) supports hands-free operation and is particularly useful in environments where visual interaction is limited. For instance, a user may ask, “What is the forecast for next quarter’s sales?” and receive a spoken summary generated by the platform.
[0028] The AI-generated output interface (680) presents insights and recommendations produced by AI models. The insights may include predictive analytics, anomaly detection, or automated summaries, delivered through text, visuals, or voice. The AI-generated output interface (680) enhances decision-making by providing intelligent, context-aware responses. For example, after analyzing customer sentiment data, the platform may generate a recommendation to adjust marketing strategies in specific regions.
[0029] The service-based input/output interface (690) facilitates integration with external systems and services via APIs, webhooks, or other communication protocols. The service-based input/output interface (690) enables automated data exchange, workflow orchestration, and service triggers. A network connection and appropriate service credentials are required to operate the service-based input/output interface (690). For example, a generated report may be automatically sent to a customer relationship management (CRM) system or trigger a notification in a project management tool.
[0030] The catalogue (560) (Figure 3) is instantiated within the memory (520) and executed by the processor (530) to serve as a centralized repository for managing Adaptive Data Packages (ADPs) (570, 570a, 570b). The ADP (570, 570a, 570b) is derived from the structured data (580) obtained exclusively from the structured databases (540), such as relational database management systems (RDBMS) including SQL-based platforms. The derivation process involves extracting relevant data subsets from structured tables using predefined queries, applying a transformation logic (not shown) to normalize, aggregate, or enrich the data, and encapsulating the result into a self-contained data asset.
[0031] Each ADP (570) (Figure 4) includes three integral components: (i) the derived structured dataset (574), (ii) embedded query logic (571) that defines how the data was extracted and can be re-queried or filtered, and (iii) one or more access control mechanisms (590), such as role-based access control (RBAC) (591), attribute-based access control (ABAC) (592), or token-based authentication (593). These mechanisms (591, 592, and 593) ensure that access to the ADP (570) is governed by user roles, data sensitivity, and contextual policies.
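For illustration only, the three ADP components described above could be modelled as in the following Python sketch; the class and field names (AdaptiveDataPackage, AccessPolicy, embedded_query, and so on) are hypothetical and are not defined by this specification.

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class AccessPolicy:
        # Stand-in for the access control mechanisms (590): RBAC (591), ABAC (592), tokens (593)
        allowed_roles: set[str] = field(default_factory=set)
        required_attributes: dict[str, str] = field(default_factory=dict)
        token_required: bool = True

    @dataclass
    class AdaptiveDataPackage:
        name: str
        dataset: list[dict[str, Any]]   # (i) derived structured dataset (574)
        embedded_query: str             # (ii) embedded query logic (571), e.g. SQL text
        access_policy: AccessPolicy     # (iii) access control mechanisms (590)

        def can_access(self, user_roles: set[str]) -> bool:
            # Role-based check: the user needs at least one permitted role
            return bool(self.access_policy.allowed_roles & user_roles)

    # Example: an ADP derived from a quarterly sales table
    adp = AdaptiveDataPackage(
        name="quarterly_sales",
        dataset=[{"region": "APAC", "revenue": 1200}, {"region": "EMEA", "revenue": 950}],
        embedded_query="SELECT region, SUM(revenue) AS revenue FROM sales GROUP BY region",
        access_policy=AccessPolicy(allowed_roles={"analyst", "admin"}),
    )
    print(adp.can_access({"analyst"}))  # True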
[0032] The catalogue (560) maintains metadata (561) for each ADP (570), including source lineage, transformation history, versioning information, and access logs. The metadata (561) supports traceability, compliance, and auditability. The memory (520) hosts both the ADP content and associated metadata, while the processor (530) manages the execution of data derivation workflows, access control enforcement, and query resolution.
[0033] The user interface (550) (figure 2), configured in the memory (520) and executed by the processor (530), provides visibility into the stored ADPs (570). Through interactive dashboards (551), search utilities (552), and visualization components (553), users can discover, explore, and interact with ADPs (570) based on their access privileges. For example, a user with appropriate permissions may view the ADP (570) representing quarterly sales data derived from a structured database (540), apply filters using the embedded query logic (571), and visualize trends through the user interface (550).
[0034] This architecture ensures that the data platform (500) supports modular, secure, and reusable data assets derived from structured sources, enabling efficient data consumption across analytical and operational workflows.
[0035] The data platform (500) incorporates one or more machine learning modules (600), which function as modular containers for executing analytical operations on structured data assets. Each machine learning module (555) is configured within the computing device (510), utilizing memory (520) for temporary and persistent storage and processor (530) resources for execution. The machine learning modules are designed to operate on Adaptive Data Packages (ADPs) (570) stored in the catalogue (560), enabling targeted analysis and transformation of derived structured data (580) from structured databases (540).
[0036] Contained within each machine learning module (555) are one or more data processing routines (610), which define specific computational workflows. The data processing routines (610) may include operations such as a data cleansing (611), a normalization (612), an aggregation (613), statistical computations (614), a predictive modeling (615), and a transformation logic (616). Each of data processing routines (610) is implemented as a configurable and reusable module, capable of being triggered manually or automatically based on system events or user-defined parameters. Execution of the data processing routines (610) is managed by the processor (530), which allocates computational resources and ensures efficient processing across multiple machine learning modules (555).
[0037] Metadata (601) associated with each machine learning module (555) includes routine definitions, execution history, input-output mappings, and performance metrics. The metadata (601) is stored in the memory (520) and used to monitor, audit, and optimize analytical workflows. Results generated by the data processing routines (610) are made accessible through the user interface (550), allowing users to visualize outputs, compare analytical scenarios, and export insights for further use. This architecture supports scalable, modular, and automated analytics within the data platform (500), enhancing its ability to derive actionable intelligence from the structured data sources (540) in a secure and efficient manner.
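As a simplified, non-limiting sketch of how such configurable routines (610) and their execution metadata (601) could be organized, the following Python code uses an assumed RoutineRegistry class that is not part of the specification.

    import time
    from typing import Callable

    class RoutineRegistry:
        # Holds reusable data processing routines (610) and records execution metadata (601).
        def __init__(self) -> None:
            self._routines: dict[str, Callable[[list[dict]], list[dict]]] = {}
            self.metadata: list[dict] = []  # execution history, row counts, timings

        def register(self, name: str, fn: Callable[[list[dict]], list[dict]]) -> None:
            self._routines[name] = fn

        def run(self, name: str, rows: list[dict]) -> list[dict]:
            start = time.perf_counter()
            result = self._routines[name](rows)
            self.metadata.append({
                "routine": name,
                "rows_in": len(rows),
                "rows_out": len(result),
                "seconds": round(time.perf_counter() - start, 6),
            })
            return result

    registry = RoutineRegistry()
    # normalization (612): lower-case the region names
    registry.register("normalize", lambda rows: [{**r, "region": r["region"].lower()} for r in rows])
    # aggregation (613): total revenue across rows
    registry.register("aggregate", lambda rows: [{"total_revenue": sum(r["revenue"] for r in rows)}])

    rows = [{"region": "APAC", "revenue": 1200}, {"region": "EMEA", "revenue": 950}]
    print(registry.run("aggregate", registry.run("normalize", rows)))  # [{'total_revenue': 2150}]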
[0038] The data processing routines (610) of the data platform (500) are designed to execute analytical operations on adaptive data packages (ADPs) (570) using a modular and scalable architecture. The data processing routines (610) include third-party APIs (700), cloud-based analytics engines (710), and containerized microservices (720), each contributing distinct capabilities to the platform’s analytical framework.
[0039] The third-party APIs (700) enable integration with external analytical services, machine learning models, or domain-specific engines. The third-party APIs (700) may include financial forecasting tools, sentiment analysis services, or geospatial processors. API keys, authentication protocols, and network connectivity are required to access and execute these routines. For example, the third-party API (700) may be used to enrich customer data with external demographic insights before generating a conversational output.
[0040] The cloud-based analytics engines (710) (figure 5) provide scalable computing resources for executing complex data transformations, statistical modeling, and predictive analytics. The cloud-based analytics engines (710) operate on cloud infrastructure and support distributed processing, enabling high-performance analysis without local resource constraints. Examples include using a cloud-based engine to run time-series forecasting on sales data or clustering customer segments based on behavioral patterns.
[0041] The containerized microservices (720) offer isolated, portable, and version-controlled environments for executing specific data routines. The containerized microservices (720) are deployed in containers, allowing consistent execution across different environments and easy orchestration. Each of the containerized microservices (720) may perform a specialized task such as data normalization, anomaly detection, or report generation. Container orchestration platforms like Kubernetes may be used to manage these services.
[0042] All of the data processing routines (610) are version-controlled, ensuring traceability and reproducibility of analytical operations. Version control allows tracking of changes, testing of updates, and comparison of outputs across different routine versions. Additionally, rollback support enables restoration to previous states in case of errors, performance issues, or undesired outcomes.
[0043] The stored ADPs (570) are visible through the user interface (550). Each ADP (570) is derived from the structured data (580) of the data sources (540). The structured data (580) includes organized datasets such as tables, key-value pairs, or JSON objects that can be directly consumed by other systems or used for further analysis. Said format is ideal for numerical summaries, filtered records, or query results. For example, a query requesting “monthly revenue by region” may return a structured table with region-wise figures.
[0044] The ADP (570) is a reusable, self-contained data asset integrating real-time data with a query (q) and one or more access control mechanisms (590).
[0045] Through the user interface (550), one or more ADPs (570) of the listed ADPs (570) in the catalogue (560) are selected. This selection can be made by a user, by an AI agent, or by a service request. The selected ADP (570) is loaded into a structured in-memory data representation (not shown) within the computing device (510). The structured in-memory data representation is a format in which data is organized and stored directly in the memory (520), rather than on disk. The structured in-memory data representation includes tabular or columnar structures for efficient querying and computation, schema definitions to maintain data consistency, and indexing or metadata for fast lookup and filtering.
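By way of example only, the following sketch loads such an ADP into a tabular in-memory representation; the use of SQLite as a stand-in structured data source (540) and pandas as the in-memory table library is an assumption, as the specification does not name particular technologies.

    import sqlite3
    import pandas as pd

    # Stand-in structured data source (540): an in-memory SQLite table
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                     [("APAC", "2025-01", 1200.0), ("EMEA", "2025-01", 950.0)])

    # The ADP's embedded query logic (571), re-executed to build the in-memory table
    embedded_query = "SELECT region, month, SUM(revenue) AS revenue FROM sales GROUP BY region, month"
    adp_frame = pd.read_sql_query(embedded_query, conn)  # columnar, schema-aware representation

    adp_frame.set_index("region", inplace=True)          # indexing for fast lookup and filtering
    print(adp_frame.loc["APAC"])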
[0046] Further, the resulting data is stored into a shared memory (not shown) through the user interface (550). More specifically, transformed Adaptive Data Packages (ADPs) (570) are stored in a centralized shared memory. The user interface (550) acts as the interaction layer between the user and the data platform (500), allowing seamless data flow and control.
[0047] The shared memory is a volatile, high-speed memory region within the computing device (510) that is accessible by multiple components, including the processor (530), the machine learning model (555), and other modules. The shared memory typically includes structured data containers, indexing mechanisms, and access control protocols to ensure efficient and secure data exchange. By storing data in shared memory, the data platform (500) enables real-time collaboration between components, reduces latency, and supports dynamic updates to the catalogue (560).
[0048] Upon receiving a conversation related to the loaded ADP, the MLM (555) generates a conversational output based on the conversation and predefined instructions. When a user initiates a conversation through the user interface (550), and that conversation pertains to a previously loaded Adaptive Data Package (ADP) (570), the Machine Learning Model (MLM) (555) processes the input. The MLM (555), configured in the memory (520) and executed by the processor (530), uses both the semantic context of the conversation and a set of predefined instructions or prompts to interpret the user's intent and generate a relevant output.
[0049] The conversational output (620) is produced by leveraging the structured in-memory representation of the ADP (570), which includes real-time data, queries, and access control mechanisms (590). The MLM (555) applies natural language understanding techniques to extract meaning from the user's input, aligns it with the ADP's context, and uses predefined logic or templates to formulate a coherent and contextually accurate response. The conversational output (620) may include insights, summaries, recommendations, or transformed data, which can then be converted into a new ADP (570) and stored back into the catalogue (560) for future reuse.
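The following sketch only illustrates how the in-memory ADP context and a set of predefined instructions might be combined into a prompt before invoking a machine learning model; generate_with_model is a hypothetical placeholder and not an interface defined by this specification.

    PREDEFINED_INSTRUCTIONS = (
        "You are answering questions about the loaded Adaptive Data Package. "
        "Use only the rows provided and quote figures exactly."
    )

    def build_prompt(adp_rows: list, conversation: str) -> str:
        # Align the user's conversation with the ADP's structured context
        context_lines = "\n".join(str(row) for row in adp_rows)
        return f"{PREDEFINED_INSTRUCTIONS}\n\nADP context:\n{context_lines}\n\nUser: {conversation}"

    def generate_with_model(prompt: str) -> str:
        # Placeholder for the machine learning model (555); a real deployment would
        # call the configured model here instead of returning a canned answer.
        return "Summary: APAC leads with revenue 1200; EMEA follows with 950."

    rows = [{"region": "APAC", "revenue": 1200}, {"region": "EMEA", "revenue": 950}]
    conversational_output = generate_with_model(build_prompt(rows, "Which region leads on revenue?"))
    print(conversational_output)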
[0050] Unstructured content (780) refers to free-form text, such as summaries, explanations, or narrative insights. This format is useful for descriptive outputs, contextual interpretations, or natural language responses. For instance, a sentiment analysis routine may produce a paragraph summarizing customer feedback trends.
[0051] Visual display elements (790) include charts, graphs, heatmaps, and other graphical representations that enhance interpretability and engagement. A display screen or graphical rendering engine is required to present these elements. For example, a line chart showing sales growth over time may be generated as part of the conversational output.
[0052] Voice-based responses (800) are enabled through speech synthesis technologies and delivered via audio output devices such as speakers or headphones. These responses are particularly useful in hands-free environments or accessibility-focused applications. For example, a voice summary of operational metrics may be provided in a warehouse setting.
[0053] Executable code snippets (810) consist of small blocks of code, scripts, or queries that can be run in compatible environments to reproduce or extend the analysis. These snippets may include SQL queries, Python scripts, or configuration templates. For example, a data transformation routine may output a Python snippet for custom filtering, as illustrated below.
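Purely as an illustration of the kind of Python snippet such a routine might emit (the row structure and threshold below are hypothetical):

    # Hypothetical executable code snippet (810) emitted by a data transformation
    # routine for custom filtering of an ADP's rows.
    rows = [
        {"region": "APAC", "revenue": 1200, "month": "2025-01"},
        {"region": "EMEA", "revenue": 950,  "month": "2025-01"},
    ]
    high_revenue = [r for r in rows if r["revenue"] >= 1000]  # custom filter
    print(high_revenue)  # [{'region': 'APAC', 'revenue': 1200, 'month': '2025-01'}]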
[0054] The contextuality of the conversational output (620) within the data platform (500) is enriched through semantic augmentation mechanisms that integrate the output with one or more external knowledge bases (830) or ontology modules (840) (Figure 6). These components are operatively connected to the platform and serve to enhance the relevance, depth, and interpretability of the generated insights before transformation into the adaptive data package (570).
[0055] External knowledge bases (830) may include domain-specific repositories, curated datasets, or public semantic resources such as industry taxonomies, regulatory frameworks, or encyclopedic databases. The external knowledge bases (830) provide factual grounding and contextual references that help refine the conversational output. For example, when generating insights related to pharmaceutical sales, integration with a medical knowledge base can ensure that drug classifications, therapeutic categories, and compliance guidelines are accurately reflected in the output.
[0056] Ontology modules (840) contribute structured semantic relationships between concepts, entities, and attributes relevant to the data domain. The ontology modules (840) define hierarchies, associations, and constraints that guide the interpretation of data and queries. For instance, in a retail analytics context, an ontology module may define relationships between product categories, seasonal trends, and customer segments, allowing the platform to generate more meaningful and context-aware responses.
[0057] The enrichment process occurs prior to the transformation of the conversational output (620) into the adaptive data package (570), ensuring that the resulting ADP (570) is semantically aligned with the domain context and user intent. This process enhances the quality and usability of the ADP (570) for downstream analytics, sharing, or reuse.
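A minimal sketch of such pre-transformation enrichment is given below, assuming a simple dictionary-backed knowledge base (830) and an ontology (840) of parent concepts; both structures and the enrich function are hypothetical stand-ins.

    KNOWLEDGE_BASE = {  # external knowledge base (830): curated reference facts
        "ibuprofen": {"therapeutic_category": "NSAID", "schedule": "OTC"},
    }
    ONTOLOGY = {  # ontology module (840): concept -> broader concept
        "NSAID": "analgesic",
        "analgesic": "pharmaceutical product",
    }

    def enrich(conversational_output: dict) -> dict:
        # Attach factual grounding and semantic lineage before conversion into an ADP (570)
        entity = conversational_output.get("entity", "").lower()
        facts = KNOWLEDGE_BASE.get(entity, {})
        lineage, concept = [], facts.get("therapeutic_category")
        while concept:  # walk the ontology hierarchy upwards
            lineage.append(concept)
            concept = ONTOLOGY.get(concept)
        return {**conversational_output, "knowledge": facts, "semantic_lineage": lineage}

    output = {"entity": "Ibuprofen", "insight": "Sales rose 12% quarter over quarter."}
    print(enrich(output))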
[0058] The conversational output (620) (Figure 7), generated by applying data processing routines (610) from the selected machine learning module (555) on the selected Adaptive Data Package (ADP) (570), is converted into a new ADP (570) and added to the catalogue (560) through the user interface (550). This conversion process is initiated via the user interface (550), which provides an option to persist the output as a reusable data asset. The user interface (550) may include a "Save as ADP" function (551), allowing users to define metadata such as ADP name, description, tags, and access permissions before committing the output to the catalogue (560).
[0059] The conversion mechanism involves a transformation module (621), executed by the processor (530), which serializes the conversational output (620) into a structured format compatible with the ADP schema. The conversion mechanism includes encapsulating the output data (622), embedding the query logic (623) that generated the output, and associating access control mechanisms (590) such as role-based access control (591) or token-based authentication (593). The resulting ADP (570) is then stored in the memory (520) and indexed within the catalogue (560) alongside existing ADPs, with the metadata (561) capturing the lineage, source ADP reference, applied routines, and timestamp.
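For illustration, a conversion of this kind might look like the following sketch; the convert_to_adp helper and the dictionary layout are assumptions rather than the actual transformation module (621).

    from datetime import datetime, timezone

    def convert_to_adp(conversational_output: dict, source_adp_name: str,
                       applied_routines: list, query_logic: str) -> dict:
        # Serialize a conversational output (620) into an ADP-shaped record with lineage metadata (561)
        return {
            "name": conversational_output.get("title", "derived_adp"),
            "dataset": conversational_output["data"],      # encapsulated output data (622)
            "embedded_query": query_logic,                  # embedded query logic (623)
            "access_control": {"rbac_roles": ["analyst"]},  # e.g. role-based access control (591)
            "metadata": {                                   # lineage recorded in the catalogue (560)
                "source_adp": source_adp_name,
                "applied_routines": applied_routines,
                "created_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    catalogue = []  # stand-in for the catalogue (560)
    output = {"title": "churn_summary", "data": [{"segment": "enterprise", "churn_risk": 0.18}]}
    catalogue.append(convert_to_adp(output, "customer_activity",
                                    ["predictive_modeling"], "SELECT * FROM churn_scores"))
    print(catalogue[0]["metadata"]["source_adp"])  # customer_activity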
[0060] The newly created ADP (570) becomes immediately available for selection and reuse through the user interface (550). Users can further apply analytical routines (610) from other machine learning modules (555), chain multiple outputs, or export the ADP (570) for external use. This recursive capability enables the data platform (500) to support iterative analytics, where conversational insights evolve into structured data assets. For example, a conversational output (620) summarizing customer churn predictions can be saved as the ADP (570), and later used to run segmentation routines (610) to identify high-risk customer groups.
[0061] This architecture promotes modularity, traceability, and reusability within the data platform (500). By allowing conversational outputs (620) to be formalized as ADPs (570), the data platform (500) supports a closed-loop analytical workflow where insights are not only consumed but also preserved and extended. All operations occur without re-ingesting data from the original structured databases (540), ensuring that the platform remains self-contained, secure, and efficient.
[0062] In embodiment (500a) (Figure 1) of the data platform (500), the user interface (550a) includes a natural language processing (NLP) module (630) configured to receive and interpret natural language queries. The NLP module (630) converts user-entered text into structured commands and supports contextual understanding for dynamic selection of adaptive data packages (ADPs) (570) and the analytical routines (610). For example, a query like “Summarize last quarter’s revenue by region” is parsed and routed to relevant data assets. The user interface (550a) includes all configurations of the user interface (550).
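The mapping from a natural language query to a structured command can be pictured with the trivial rule-based stand-in below; a production NLP module (630) would use far richer language understanding, and the keyword rules here are purely illustrative.

    import re

    def parse_query(text: str) -> dict:
        # Map a natural language query onto a structured command naming an ADP and a routine
        command = {"adp": None, "routine": "summarize", "group_by": None, "period": None}
        lowered = text.lower()
        if "revenue" in lowered:
            command["adp"] = "quarterly_sales"
        if match := re.search(r"by (\w+)", lowered):
            command["group_by"] = match.group(1)
        if "last quarter" in lowered:
            command["period"] = "previous_quarter"
        return command

    print(parse_query("Summarize last quarter's revenue by region"))
    # {'adp': 'quarterly_sales', 'routine': 'summarize', 'group_by': 'region', 'period': 'previous_quarter'}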
[0063] In embodiment (500b) (Figure 7) of the data platform (500), the conversational output (620), generated from analytical routines applied to ADPs (570), is optionally published to the data marketplace (640) via the user interface (550). The data marketplace (640) serves as a centralized repository for sharing insights across users or tenants. Upon publication, the conversational output (620) is saved in the memory (520) for persistent access.
[0064] In embodiment (500c) (Figure 1), the data platform (500c) is deployed as Software-as-a-Service (SaaS) (650), enabling remote access and multi-tenant usage. The SaaS deployment (650) utilizes cloud infrastructure such as AWS, Azure, or GCP, with backend services hosted on scalable compute instances (e.g., EC2, Azure VMs) and managed via Kubernetes or Docker Swarm. The user interface (550) is built using frameworks like React, Angular, or Vue.js, and backend services are developed in Python, Java, or Node.js. The catalogue (560) and ADPs (570) are managed using cloud-native databases like Amazon RDS, MongoDB Atlas, or Google BigQuery, with the access control mechanisms (590) enforced via IAM services. Multi-tenancy is achieved through logical isolation, and secure access is provided via HTTPS endpoints with authentication protocols such as OAuth 2.0 or SAML.
[0065] In embodiment (500d) (Figure 3) of the data platform (500), the data platform (500d) includes a monitoring module (820) configured to track usage metrics of ADPs (570) and the machine learning modules (555) for performance optimization. Implemented as a background microservice, the monitoring module (820) collects telemetry data such as ADP access frequency, routine execution time, and resource utilization using tools like Prometheus, Grafana, or Datadog. Logs and events are captured using the ELK Stack (Elasticsearch, Logstash, Kibana) or Fluentd and stored in cloud-based storage systems like Amazon S3 or Azure Blob Storage. The data is analyzed to identify bottlenecks, optimize resource allocation, and recommend caching or pre-processing strategies. The monitoring module (820) operates on virtualized infrastructure and integrates with the SaaS deployment (650), ensuring tenant-level isolation and compliance.
[0066] The data platform (500) includes a dynamic query execution module (DQEM) (not shown) configured to process user-defined queries interactively and in real time. The dynamic query execution module is integrated within the memory (520) and executed by the processor (530), enabling users to issue custom queries through the user interface (550) without requiring pre-defined templates or static query structures. The DQEM interprets the query context, accesses relevant ADPs (570) from the catalogue (560), and executes the query against the structured in-memory representation or shared memory to deliver immediate results. For example, a user might ask, “Show me the top 5 regions with the highest migration success rates,” and the module dynamically parses and executes this request using the real-time data embedded in the ADP (570). This capability supports ad-hoc conversation, where queries are spontaneous, context-driven, and tailored to the user's immediate analytical needs.
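As a rough illustration of ad-hoc execution against data already held in memory (with no re-ingestion from the data sources (540)), the execute_adhoc helper below is a hypothetical simplification of the DQEM.

    def execute_adhoc(adp_rows: list, metric: str, top_n: int = 5) -> list:
        # Rank rows from the in-memory ADP representation by a user-chosen metric
        return sorted(adp_rows, key=lambda r: r[metric], reverse=True)[:top_n]

    regions = [
        {"region": "North",   "migration_success_rate": 0.91},
        {"region": "South",   "migration_success_rate": 0.84},
        {"region": "East",    "migration_success_rate": 0.95},
        {"region": "West",    "migration_success_rate": 0.78},
        {"region": "Central", "migration_success_rate": 0.88},
        {"region": "Coastal", "migration_success_rate": 0.80},
    ]
    # "Show me the top 5 regions with the highest migration success rates"
    for row in execute_adhoc(regions, "migration_success_rate", top_n=5):
        print(row["region"], row["migration_success_rate"])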
[0067] The platform (500) includes a scheduled query execution module (SQEM) (not shown) that enables the system to run predefined queries automatically without user intervention. The scheduled query execution module (SQEM) is configured in the memory (520) and executed by the processor (530), and supports both time-based triggers (e.g., hourly, daily, or weekly schedules) and event-based triggers (e.g., when a new ADP (570) is added or updated). For instance, the data platform (500) may be configured to run a data quality check every night at midnight or to generate a compliance report whenever a new data source (540) is connected. Unlike ad-hoc querying, which is spontaneous and user-driven, this module supports non-ad-hoc operations that are predefined, repeatable, and automated, ensuring consistency and reliability in data monitoring and reporting.
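The dispatch decision of such a scheduler can be sketched as follows; the job list, trigger encoding, and due_jobs helper are assumptions used only to illustrate time-based versus event-based triggers.

    import datetime

    SCHEDULED_JOBS = [
        # time-based trigger: nightly data quality check at 00:00
        {"name": "data_quality_check", "trigger": ("time", "00:00")},
        # event-based trigger: compliance report when a new data source (540) is connected
        {"name": "compliance_report", "trigger": ("event", "data_source_connected")},
    ]

    def due_jobs(now: datetime.datetime, events: set) -> list:
        # Return predefined jobs whose time-based or event-based trigger has fired
        fired = []
        for job in SCHEDULED_JOBS:
            kind, value = job["trigger"]
            if kind == "time" and now.strftime("%H:%M") == value:
                fired.append(job["name"])
            elif kind == "event" and value in events:
                fired.append(job["name"])
        return fired

    midnight = datetime.datetime(2025, 8, 25, 0, 0)
    print(due_jobs(midnight, events={"data_source_connected"}))
    # ['data_quality_check', 'compliance_report']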
[0068] In one more embodiment of the present invention, a method (1000) for managing adaptive data packages (ADPs) (570) using the data platform (500) is provided.
[0069] The method (1000) starts at step 1000a.
[0070] At step 1010, the method (1000) begins by configuring the user interface (550), the catalogue (560), and the machine learning models (555) within the memory (520) of the network-enabled computing device (510). Said components are executed by the processor (530) and form the core functional modules of the data platform (500), enabling user interaction, data storage, and intelligent processing.
[0071] At step 1020, the computing device (510) is connected to one or more external data sources (540) via the network interface. The connection of step 1020 allows the data platform (500) to access the structured data (580) in real time, which serves as the input for generating adaptive data packages (ADPs) (570).
[0072] At step 1030, one or more ADPs (570) are generated from the structured data (580) obtained from the data sources (540). Each ADP (570) is designed as a reusable, self-contained data asset that integrates real-time data, a query, and one or more access control mechanisms (590), ensuring secure and contextual data encapsulation.
[0073] At step 1040, the generated ADPs (570) are stored in the catalogue (560), which is configured in the memory (520). The catalogue (560) acts as a persistent repository, allowing the data platform (500) to manage and retrieve ADPs efficiently for future operations.
[0074] At step 1050, the stored ADPs (570) are presented to the user via the user interface (550). The step 1050 enables users to browse, select, and interact with available ADPs, facilitating intuitive access to complex data assets.
[0075] At step 1060, one or more selected ADPs (570) are loaded into the structured in-memory data representation through the user interface (550). This representation of step 1060 organizes the ADP (570) in a format optimized for fast, in-memory processing, typically using tabular or columnar structures with schema definitions and indexing.
[0076] At step 1070, the resulting data from the in-memory representation is stored into the shared memory store through the user interface (550). The shared memory (not shown) provides a volatile, system-wide accessible space that allows multiple components to access and manipulate the data concurrently with minimal latency.
[0077] At step 1080, a conversation related to the loaded ADP (570) is received through the user interface (550). This input of step 1080 may include natural language queries, instructions, or feedback from the user, which are contextually linked to the ADP content.
[0078] At step 1090, the machine learning model (555) processes the received conversation using natural language understanding techniques. The step 1090 references the loaded ADP (570) and applies predefined instructions or prompts to generate a conversational output (620), which may include insights, summaries, or transformed data relevant to the user's query.
[0079] At step 1100, the conversational output (620) generated by the MLM (555) is converted into a new ADP (570). The conversion of step 1100 encapsulates the output into the same reusable, self-contained format, maintaining consistency with existing ADPs.
[0080] At step 1110, the newly converted ADP (570) is added to the catalogue (560) through the user interface (550), making the newly converted ADP (570) available for future access, reuse, or further interaction within the data platform (500).
[0081] The method (1000) ends at step (1000b).
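Purely to make the sequence of steps 1010 to 1110 concrete, the sketch below walks through the method (1000) with hypothetical in-memory stand-ins for the platform components; it is illustrative and not the claimed implementation.

    def method_1000() -> None:
        catalogue = []                                                 # step 1010: configure components
        source_rows = [{"region": "APAC", "revenue": 1200},            # step 1020: connect to a data source (540)
                       {"region": "EMEA", "revenue": 950}]
        adp = {"name": "sales", "dataset": source_rows,                # step 1030: generate an ADP (570)
               "query": "SELECT region, revenue FROM sales", "access": ["analyst"]}
        catalogue.append(adp)                                          # step 1040: store in the catalogue (560)
        print("Available ADPs:", [a["name"] for a in catalogue])       # step 1050: present via the UI (550)

        in_memory = list(adp["dataset"])                               # step 1060: load in-memory representation
        shared_memory = {"sales": in_memory}                           # step 1070: place into shared memory

        conversation = "Which region has the highest revenue?"         # step 1080: receive a conversation
        top = max(shared_memory["sales"], key=lambda r: r["revenue"])  # step 1090: generate conversational output
        output = {"question": conversation,
                  "insight": f"{top['region']} leads with revenue {top['revenue']}."}

        new_adp = {"name": "top_region", "dataset": [top],             # step 1100: convert output into a new ADP
                   "query": "derived", "access": adp["access"], "lineage": adp["name"]}
        catalogue.append(new_adp)                                      # step 1110: add the new ADP to the catalogue
        print(output["insight"], "| catalogue size:", len(catalogue))

    method_1000()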
[0082] The data platform (500), deployed in a network-enabled computing device (510), enables users to interact via the multi-modal user interface (550) configured in the memory (520). Through the user interface (550), users access the catalogue (560) including the adaptive data packages (ADPs) (570), each derived from the structured data (580) of the connected data sources (540) and embedded with the access control mechanisms (590). The user interface (550) supports the graphical displays, the voice inputs, the AI-generated outputs, and the service-based I/O, allowing flexible interaction.
[0083] Users select the one or more ADPs (570) and the machine learning modules (600) through the interface (550). Each module (555) includes the data processing routines (610), including the third-party APIs (700), the cloud-based analytics engines (710), or containerized microservices (720), which are version-controlled (730) and support rollback (740). Selection may be manual, or automated via AI modules (not shown), machine learning models (555), or service triggers (not shown). The data platform (500) applies routines externally to ADPs (570), without ingesting new data from data sources (540), to generate the conversational output (620).
[0084] The conversational output (620), which may include the structured data (580), unstructured content (780), visual display elements (790), voice-based responses (800), or executable snippets (810), is enriched using external knowledge bases (830) or ontology modules (840). The conversational output (620) is converted into a new ADP (570) and added to the catalogue (560) via the user interface (550). The conversational output (620) is stored in the memory (520), optionally published to the data marketplace (640), and monitored by the monitoring module (820) for usage metrics. The data platform (500) is deployed as Software-as-a-Service (SaaS) (650), supporting remote access and multi-tenant usage.
[0085] The data platform (500) configured in the network-enabled computing device (510) provides a technically efficient framework for managing and reusing analytical context. By transforming the conversational outputs (620) into the structured adaptive data packages (ADPs) (570), which integrate real-time data, queries, and the access control mechanisms (590), the data platform (500) eliminates the need to repeatedly access and process raw data from data sources (540). This approach ensures continuity, traceability, and formalization of derived insights, while maintaining the data platform’s (500) efficiency through the memory (520) and the processor (530) operations without requiring fresh data ingestion.
[0086] The data platform (500) supports iterative and compound analytics by allowing selection of the existing ADPs (570) and external application of the data processing routines (610) from the machine learning modules (555). The data processing routines (610), including third-party APIs (700), cloud-based analytics engines (710), and containerized microservices (720), are version-controlled (730) and support rollback (740), enabling reproducibility and controlled evolution of analytical logic. The enriched conversational outputs (620), integrated with the external knowledge bases (830) or ontology modules (840), can be reused as inputs for further analysis, supporting dynamic chaining of analytical steps.
[0087] The method (1000) enhances computational efficiency by operating on the pre-structured ADPs (570) stored in the catalogue (560), avoiding repeated ingestion from the data sources (540). This reduces latency and system load, while ensuring consistent analytical context across sessions. The transformation of conversational output (620) into new ADPs (570) enables cyclical enrichment of the catalogue (560), contributing to a continuously evolving repository of reusable data assets.
[0088] The data platform (500) incorporates a natural language processing (NLP) module (630) within the user interface (550), allowing interpretation of natural language queries and generation of multimodal outputs, including the structured data (580), the unstructured content (780), the visual display elements (790), the voice-based responses (800), and the executable code snippets (810). Said multimodal outputs are stored in the memory (520), published to the data marketplace (640), and monitored via the monitoring module (820) for usage metrics, supporting performance optimization.
[0089] Deployment as Software-as-a-Service (SaaS) (650) enables remote access, multi-tenant usage, and scalable integration across enterprise environments. Automation of ADP (570) and machine learning module (555) selection through AI modules, machine learning models, or service triggers further enhances adaptability and operational efficiency, making the platform suitable for dynamic, large-scale analytical ecosystems.
Claims:
We Claim:
1) A data platform (500) configured in a network-enabled computing device (510), the network-enabled computing device (510) having at least one memory (520) and one processor (530) and connected to one or more data sources (540), the data platform (500) comprising:
a user interface (550) configured in the memory (520) and executed by the processor (530);
a catalogue (560) configured in the memory (520) and executed by the processor (530) and configured to store one or more adaptive data packages (ADPs) (570), wherein each ADP (570) is derived from structured data (580) of the data sources (540) and is a reusable, self-contained data asset integrating real-time data with a query and one or more access control mechanisms (590), the stored ADPs (570) being visible through the user interface (550);
one or more machine learning models (555) configured in the memory (520) and executed by the processor (530);
wherein upon loading the one or more Adaptive Data Packages (ADPs) (570) into a structured in-memory data representation and storing the resulting data into a shared memory through the user interface (550) and upon receiving a conversation related to the loaded ADP, the MLM (555) generates a conversation output based on the conversation and predefined instructions;
wherein the conversational output (620) is converted to the ADP (570) and added to the catalogue (560) through the user interface (550).
2) The data platform (500a) as claimed in claim 1, wherein the user interface (550a) includes a natural language processing (NLP) module (630) for enabling receipt and processing of natural language queries; and the conversational output (620) is published in a data marketplace (640) of the data platform (500) and saved in the memory (520).
3) The data platform (500c) as claimed in claim 1, wherein the platform is deployed as Software-as-a-Service (SaaS) (650), enabling remote access and multi-tenant usage over a network.
4) The data platform (500) as claimed in claim 1, wherein the user interface (550) comprises one or more of: a graphical display (660), a voice-based interface (670), an AI-generated output interface (680), a service-based input/output interface (690), or any combination thereof.
5) The data platform (500) as claimed in claim 1, wherein the conversational output (620) comprises one or more of: the structured data (580), unstructured content (780), visual display elements (790), voice-based responses (800), or executable code snippets (810).
6) The data platform (500d) as claimed in claim 1, wherein the platform (500d) includes a monitoring module (820) configured to track usage metrics of ADPs (570) and machine learning modules (600) for performance optimization.
7) The data platform (500) as claimed in claim 1, wherein the contextuality of the conversational output (620) is enriched by integrating the conversational output (620) with one or more external knowledge bases (830) or ontology modules (840) operatively connected to the data platform (500), thereby enhancing semantic relevance prior to transformation thereof into an adaptive data package (570).
8) The data platform (500) as claimed in claim 1, wherein the data platform (500) comprises: a dynamic query execution module configured to process user-defined queries interactively and in real-time.
9) The data platform (500) as claimed in claim 1, wherein the data platform (500) comprises a scheduled query execution module configured to execute predefined queries automatically based on time-based or event-based triggers.
10) A method (1000) for managing adaptive data packages (ADPs) (570) in a network-enabled computing device (510) comprising at least one memory (520) and one processor (530), the method (1000) comprising steps of:
configuring a user interface (550), a catalogue (560), and one or more machine learning modules (555) in the memory (520), executed by the processor (530);
connecting the network-enabled computing device (510) with one or more external data sources (540) via the user interface (550);
generating one or more adaptive data packages (ADPs) (570) from structured data of the data sources (540), each ADP (570) being a reusable, self-contained data asset comprising real-time data, a query, and one or more access control mechanisms (590);
storing the generated ADPs (570) in the catalogue (560);
presenting the stored ADPs (570) via the user interface (550);
loading one or more stored ADPs (570) into a structured in-memory data representation through the user interface (550);
storing the resulting data into a shared memory store through the user interface (550);
receiving a conversation related to the loaded ADP (570) through the user interface (550);
generating a conversational output (620) based on the received conversation, the loaded ADP (570) and predefined instructions by a machine learning model (555);
converting the conversational output (620) into an ADP (570); and adding the converted ADP (570) to the catalogue (560) through the user interface (550).

Documents

Application Documents

# Name Date
1 202541080259-STATEMENT OF UNDERTAKING (FORM 3) [25-08-2025(online)].pdf 2025-08-25
2 202541080259-REQUEST FOR EXAMINATION (FORM-18) [25-08-2025(online)].pdf 2025-08-25
3 202541080259-REQUEST FOR EARLY PUBLICATION(FORM-9) [25-08-2025(online)].pdf 2025-08-25
4 202541080259-POWER OF AUTHORITY [25-08-2025(online)].pdf 2025-08-25
5 202541080259-FORM-9 [25-08-2025(online)].pdf 2025-08-25
6 202541080259-FORM 18 [25-08-2025(online)].pdf 2025-08-25
7 202541080259-FORM 1 [25-08-2025(online)].pdf 2025-08-25
8 202541080259-DRAWINGS [25-08-2025(online)].pdf 2025-08-25
9 202541080259-DECLARATION OF INVENTORSHIP (FORM 5) [25-08-2025(online)].pdf 2025-08-25
10 202541080259-COMPLETE SPECIFICATION [25-08-2025(online)].pdf 2025-08-25
11 202541080259-Proof of Right [15-11-2025(online)].pdf 2025-11-15
12 202541080259-FORM-26 [15-11-2025(online)].pdf 2025-11-15