Abstract: The present disclosure provides an API-based forecasting system and method. The system may integrate with a user interface and receive data for forecasting from the users via the user interface. Upon receiving the data from the users, the system may pre-process the data and normalize the data into a suitable format. The system may use the pre-processed and normalized data to train a forecasting model. The forecasting model may generate predictions for the future and send the generated predictions back to the users through a response. The users may receive the generated predictions and may perform necessary post-processing or analysis on the results within their own systems or applications. This may involve visualizing the generated predictions, comparing the generated predictions with actual data, calculating forecast accuracy metrics, or incorporating the generated predictions into downstream decision-making processes. Fig. 3
FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
FORECASTING
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure generally relates to a wireless
telecommunications network. More particularly, the present disclosure relates to an Application Programming Interface (API)-based forecasting system and method.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] A forecasting system is a tool or framework used to predict future
outcomes based on historical data, patterns, and relevant factors.
[0005] Machine learning (ML) algorithms are a set of computational
techniques that enable systems to learn from data and make predictions or decisions based on that data without being explicitly programmed. The ML algorithms enable the systems to identify patterns, extract insights, and make predictions or decisions without being explicitly programmed to perform specific
tasks.
[0006] An Autoregressive Integrated Moving Average (ARIMA) is a
statistical analysis model that uses time series data to either better understand the data set or to predict future trends.
[0007] Exponential smoothing is a broadly accurate forecasting method
for short-term forecasts. The technique assigns larger weights to more recent observations while assigning exponentially decreasing weights as the observations get increasingly distant.
[0008] Prophet is an open-source forecasting tool designed for time
series forecasting. Prophet is an additive regression model with a piecewise linear or logistic growth curve trend.
[0009] A Long Short-Term Memory (LSTM) is an artificial recurrent neural
network architecture used in deep learning that can process entire sequences of data. Due to the model's ability to learn long-term sequences of observations, the LSTM has become a trending approach to time series forecasting.
[0010] A forecasting engine is a software component or system designed to
automate the process of generating forecasts from historical data. It typically encompasses algorithms, models, and computational techniques to analyze past trends and patterns in data and extrapolate them into the future.
[0011] A forecasting model is a statistical tool designed to predict future
trends and outcomes based on historical data. It involves analyzing past patterns and trends to make informed predictions about future outcomes.
[0012] An Application Programming Interface (API)-based forecasting
engine utilizes Application Programming Interfaces (APIs) to provide forecasting capabilities to external applications or systems. The API-based forecasting engine
typically exposes a set of endpoints or functions through which users can submit data and receive forecasts or predictions in return.
[0013] An Application Programming Interface (API) integration refers to
the process of connecting different software systems or applications through their Application Programming Interfaces (APIs) to enable them to communicate and share data with each other.
[0014] A queuing and processing protocol helps in managing the flow of
data packets efficiently and ensuring low latency, high throughput, and reliable communication.
[0015] A hypertext transfer protocol (HTTP) is the foundation of data
communication on the world wide web. It is an application layer protocol used for transmitting hypermedia documents over the internet.
[0016] Maximum likelihood estimation (MLE) is a statistical method
used to estimate the parameters of a statistical model. The MLE is often employed to determine the parameters of a probabilistic model that best fit the observed data in the forecasting models.
[0017] Gradient descent is a fundamental optimization algorithm used in
machine learning and optimization tasks, including forecasting algorithms. Gradient descent can be utilized for training the parameters of a predictive model to minimize the difference between predicted and actual values in the forecasting models.
[0018] Backpropagation is applicable in forecasting models for time
series forecasting. In time series forecasting, future values of a variable are predicted based on its past values. Backpropagation allows the network to learn from the discrepancies between its predictions and the actual data, adjusting its weights to minimize the error over time. This iterative learning process enables the network
to capture the underlying patterns in the time series data and make accurate forecasts.
[0019] An API URL is used to access and interact with an API. URL stands for
Uniform Resource Locator. A URL is a unique address pointing to a resource. The URL is used to make HTTP requests for interacting with the API and for getting or sending data.
BACKGROUND OF THE INVENTION
[0020] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0021] Conventional systems of developing forecasting models or relying
on external tools face several challenges, which result in a significant barrier to entry for users who want to incorporate forecasting capabilities into their systems. Conventionally, the forecasting models are developed from scratch, which may be a time-consuming process, as it involves tasks such as data pre-processing, feature engineering, algorithm selection, model training, and evaluation. Creating accurate and effective forecasting models may demand a deep understanding of machine learning (ML) algorithms, statistical techniques, and domain knowledge. Also, developing forecasting models in conventional systems may require substantial computational resources, including powerful hardware and software tools. Users may need to invest in expensive infrastructure or licenses for specialized forecasting software, making it costly for smaller organizations or individuals.
[0022] Further, in conventional systems, there are numerous algorithms
available, each with its own strengths and limitations, and choosing the right
algorithm for a forecasting task may be challenging. After developing the forecasting model, the forecasting model requires ongoing maintenance and updates. As new data becomes available or the forecasting requirements change, the users need to adapt their models accordingly. This may be a time-consuming task, requiring continuous monitoring, retraining, and refinement of the models. Also, integration of the forecasting models into existing systems or applications may be complex. Users often face challenges in seamlessly integrating their models with other components, data sources, or workflows. This may lead to compatibility issues and delays in deploying the forecasting solution.
[0023] There is, therefore, a need in the art to provide an improved system
and method that can mitigate the problems associated with the prior art.
SUMMARY
[0024] In an exemplary embodiment, an application programming interface
(API)-based forecasting system comprising a forecasting engine is described. The forecasting engine comprises a receiving unit configured to receive, from a user via an API, data for forecasting and at least one of a plurality of algorithms selected by the user to apply to the received data. A processing unit is configured to pre-process the received data and train at least one of a plurality of forecasting models based on the at least one selected algorithm from the plurality of algorithms and the pre-processed received data. The at least one trained forecasting model of the plurality of forecasting models is configured to generate a plurality of predictions. A sending unit is configured to send the generated plurality of predictions to the user through a response.
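The receive, pre-process, train, predict, and respond flow of the above embodiment can be sketched as follows. This is a minimal non-limiting illustration: the payload format, function names (e.g., `handle_forecast_request`), and the simple exponential-smoothing stand-in are all assumptions for the example, not part of the claimed system.

```python
def preprocess(series):
    """Fill missing values by carrying the last observation forward
    (a leading gap falls back to 0.0)."""
    cleaned, last = [], 0.0
    for v in series:
        last = last if v is None else v
        cleaned.append(last)
    return cleaned

def train_and_forecast(series, horizon, alpha=0.5):
    """Simple exponential smoothing as a stand-in for the selected algorithm."""
    level = series[0]
    for v in series[1:]:
        level = alpha * v + (1 - alpha) * level
    return [level] * horizon  # flat forecast from the final smoothed level

def handle_forecast_request(payload):
    """Hypothetical request handler: pre-process, train, and respond."""
    series = preprocess(payload["data"])
    predictions = train_and_forecast(series, payload["horizon"])
    return {"algorithm": payload["algorithm"], "predictions": predictions}

response = handle_forecast_request(
    {"data": [10.0, None, 12.0, 11.0],
     "algorithm": "exponential_smoothing",
     "horizon": 2}
)
# response["predictions"] -> [11.0, 11.0]
```

In practice the receiving unit would expose this handler behind an API endpoint; the sketch keeps only the data flow the embodiment describes.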
[0025] In some embodiments, the predictions are post-processed. The post-
processing includes visualizing the predictions, comparing the predictions with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
[0026] In some embodiments, the pre-processing of the received data
includes handling missing values, normalizing the data, scaling the data, and transforming the data into a format suitable for the selected algorithm.
[0027] In some embodiments, training of the at least one of the plurality of forecasting
models further includes a plurality of optimization methods. The plurality of optimization methods includes a statistical estimation, a gradient descent, or a backpropagation.
[0028] In some embodiments, the response comprises a link.
[0029] In another exemplary embodiment, a method for performing an
application programming interface (API)-based forecasting by a forecasting engine is described. The method comprises receiving, from a user via the API, data for forecasting and at least one of a plurality of algorithms selected by the user to apply to the received data. The method comprises pre-processing the received data and training at least one of a plurality of forecasting models based on the at least one selected algorithm from the plurality of algorithms and the pre-processed received data. The at least one trained forecasting model of the plurality of forecasting models is configured to generate a plurality of predictions. The method further comprises sending the generated plurality of predictions to the user through a response.
[0030] In some embodiments, the predictions are post-processed. The post-
processing includes visualizing the predictions, comparing the predictions with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
[0031] In some embodiments, the pre-processing of the received data
includes handling missing values, normalizing the data, scaling the data, and transforming the data into a format suitable for the selected algorithm.
[0032] In some embodiments, training of the at least one of the plurality of forecasting
models further includes a plurality of optimization methods. The plurality of optimization methods includes a statistical estimation, a gradient descent, or a backpropagation.
[0033] In some embodiments, the response comprises a link.
[0034] In some embodiments, a user equipment (UE) is communicatively
coupled with a system. The coupling comprises steps of receiving, by the system, a connection request and sending, by the system, an acknowledgment of the connection request to the UE. The coupling further comprises transmitting, by the UE, a plurality of signals in response to the connection request. The system is configured for performing an application programming interface (API)-based forecasting.
OBJECTS OF THE INVENTION
[0035] It is an object of the present disclosure to provide an Application
Programming Interface (API)-based forecasting system and method.
[0036] It is an object of the present disclosure to provide a system and a
method that uses Artificial Intelligence (AI) and Machine Learning (ML) for automating various stages of the forecasting process, including data preprocessing, model training, and prediction generation.
[0037] It is an object of the present disclosure to integrate forecasting
capabilities into existing systems or applications through the API, by providing a standardized interface for users to send data and receive predictions.
[0038] It is an object of the present disclosure to provide a list of algorithms
from which the users may select or choose their desired algorithm(s) without needing to understand the intricate details of each algorithm.
[0039] It is an object of the present disclosure to provide a system and a
method that efficiently processes incoming data, applies the selected algorithm, and returns the predictions promptly.
[0040] It is an object of the present disclosure to provide a system and a
method that simplifies the deployment and maintenance of forecasting models.
BRIEF DESCRIPTION OF DRAWINGS
[0041] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0042] FIG. 1 illustrates an example network architecture for implementing
a system, in accordance with an embodiment of the present disclosure.
[0043] FIG. 2A illustrates an example block diagram of the system, in
accordance with an embodiment of the present disclosure.
[0044] FIG. 2B illustrates an example block diagram of a forecasting
engine, in accordance with an embodiment of the present disclosure.
[0045] FIG. 3 illustrates an example architecture of the system, in
accordance with an embodiment of the present disclosure.
[0046] FIG. 4A illustrates an example flow diagram implementing an
Application Programming Interface (API)-based forecasting method, in accordance with an embodiment of the present disclosure.
[0047] FIG. 4B illustrates an example flow diagram implementing a method
for Application Programming Interface (API)-based forecasting, in accordance with an embodiment of the present disclosure.
[0048] FIG. 5 illustrates a computer system in which or with which the
embodiments of the present disclosure may be implemented.
[0049] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
DETAILED DESCRIPTION
[0050] In the following description, for explanation, various specific details
are outlined in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0051] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0052] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0053] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0054] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0055] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0056] The terminology used herein is for describing particular embodiments
only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0057] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGs. 1-5.
[0058] FIG. 1 illustrates an example network architecture (100) for
implementing a system (108), in accordance with an embodiment of the present disclosure.
[0059] As illustrated in FIG. 1, one or more computing devices (104-1, 104-2…104-N) may be connected to the system (108) through a network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) may be collectively referred to as computing devices (104) and individually referred to as a computing device (104). One or more users (102-1, 102-2…102-N) may provide one or more requests to the system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred to as users (102) and individually referred to as a user (102). Further, the computing devices (104) may also be referred to as user equipment (UE) (104) or as UEs (104) throughout the disclosure.
[0060] In an embodiment, the computing device (104) may include, but not
be limited to, a mobile, a laptop, etc. Further, the computing device (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, or a keyboard. Furthermore, the computing device (104) may include a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user (102), such as a touchpad, a touch-enabled screen, an electronic pen, and the like, may be used.
[0061] In an embodiment, the network (106) may include, by way of
example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch,
process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc
network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0062] In an embodiment, the system (108) may integrate with one or more
computing devices (104) associated with the users (102). The users (102) may collect relevant data for forecasting by receiving data from a base station, for example, an eNodeB/gNodeB. The users (102) may send the collected data for forecasting to the system (108). The users (102) may also select an algorithm to be applied to the received data, since the system (108) supports a wide range of algorithms for forecasting tasks.
[0063] Upon receiving the collected data for forecasting from the users
(102), along with the selected algorithm, the system (108) may perform pre-processing of the received data. The pre-processing of the received data may include, but is not limited to, handling missing values, normalizing or scaling the collected data, and/or transforming the collected data into a suitable format for the desired algorithm.
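By way of a non-limiting sketch, the pre-processing steps mentioned above (handling missing values, then normalizing or scaling) might look as follows; the mean-imputation and min-max scaling choices, and the function names, are illustrative assumptions:

```python
def fill_missing(series, fill_value=None):
    """Replace None entries with the mean of the observed values
    (or with an explicit fill_value if one is supplied)."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed) if fill_value is None else fill_value
    return [mean if v is None else v for v in series]

def min_max_scale(series):
    """Scale values into the [0, 1] range; a constant series maps to zeros."""
    lo, hi = min(series), max(series)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in series]

raw = [20.0, None, 30.0, 25.0]
scaled = min_max_scale(fill_missing(raw))
# scaled -> [0.0, 0.5, 1.0, 0.5]
```

Other normalizations (z-score, log transform) could be substituted depending on the selected algorithm's requirements.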
[0064] In an embodiment, the system (108) may be referred to as an application
programming interface (API)-based forecasting system. The system (108) may train a forecasting model based on the selected algorithm and the pre-processed data. Once the forecasting model is trained, the system (108) may generate predictions for the future based on the received data. In an aspect, the system (108) may use the trained model and the forecasting algorithm to make predictions using relevant formulas or calculations corresponding to the desired algorithm.
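As an illustrative sketch of training and prediction, the toy first-order autoregressive (AR(1)) model below stands in for the richer algorithms the system (108) may support; the closed-form least-squares fit and function names are assumptions for illustration, not the disclosed training procedure:

```python
def fit_ar1(series):
    """Least-squares estimate of phi in the model x_t ~ phi * x_{t-1}."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(b * b for b in series[:-1])
    return num / den

def forecast_ar1(series, phi, horizon):
    """Roll the fitted model forward from the last observation."""
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = phi * last
        preds.append(last)
    return preds

history = [1.0, 0.5, 0.25, 0.125]   # geometric decay, so phi fits to 0.5
phi = fit_ar1(history)
future = forecast_ar1(history, phi, 2)
# future -> [0.0625, 0.03125]
```

A production engine would instead fit ARIMA, exponential smoothing, Prophet, or LSTM models, but the train-then-extrapolate shape of the computation is the same.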
[0065] In an embodiment, the system (108) may send the generated
predictions back to the users (102) through a response (e.g., a link such as a Uniform Resource Locator (URL)). The users (102) may receive the generated predictions and may perform necessary post-processing or analysis on the results within their own systems or applications. The post-processing or analysis may involve visualizing the generated predictions, comparing the generated predictions with actual data, calculating forecast accuracy metrics, or incorporating the generated predictions into downstream decision-making processes.
[0066] In an aspect, visualizing the predictions may include a time series plot
that shows both historical data and forecasted values over time. This plot provides a visual comparison between the actual data points and the forecasted values, allowing users to evaluate the model's performance.
[0067] In an aspect, comparing the generated predictions with actual data
may include finding out the differences between the predictions and the actual data.
[0068] In an aspect, forecast accuracy metrics are measurements that show
the reliability of the forecast, which is a prediction of future trends based on historical data. These types of metrics measure the forecast error, which is the difference between an actual value and its expected forecast. Forecast accuracy is the measure of how accurately a given forecast matches actual data. Forecast bias describes how much the forecast is consistently over or under the actual data. Common metrics used to evaluate forecast accuracy include Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). The MAPE measures the average percentage difference between predicted and actual values. The MAD is another metric used to measure the average absolute deviation of predicted values from the actual values, that is, the average absolute difference between each predicted value and the corresponding actual value.
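The MAPE and MAD metrics described above can be computed as follows; this sketch uses the common paired actual/predicted definitions and assumes actual values are non-zero (MAPE is undefined otherwise):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, expressed in percent."""
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(terms) / len(terms)

def mad(actual, predicted):
    """Mean Absolute Deviation of predictions from actuals."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 200.0, 400.0]
predicted = [110.0, 180.0, 400.0]
# mape: (10/100 + 20/200 + 0/400) / 3 * 100 ≈ 6.67 %
# mad:  (10 + 20 + 0) / 3 = 10.0
```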
[0069] In an aspect, incorporating the generated predictions into
downstream decision-making processes may include translating the generated predictions into actionable decisions or interventions. Translating the generated predictions into actionable decisions or interventions may comprise, but is not limited to, establishing thresholds based on predictions to trigger alerts or actions (e.g., generating an alert on detecting that the predicted value is below the threshold), assessing the accuracy of predictions, etc.
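The threshold-based alerting mentioned above may, for example, be sketched as follows; the threshold value and the alert record format are illustrative assumptions:

```python
def threshold_alerts(predictions, threshold):
    """Flag each predicted value that falls below the given threshold."""
    return [
        {"step": i, "value": v, "alert": v < threshold}
        for i, v in enumerate(predictions)
    ]

alerts = threshold_alerts([95.0, 80.0, 60.0], threshold=75.0)
# only the third prediction (60.0) falls below the 75.0 threshold
```

A downstream system could filter on the `alert` flag to trigger notifications or automated interventions.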
[0070] In an embodiment, the user equipment (UE) (104) is communicatively
coupled with the system (108). The system (108) may receive a connection request from the UE (104). The system (108) may send an acknowledgment of the connection request to the UE (104). The UE (104) may transmit a plurality of signals in response to the connection request. The system (108) may be configured for performing an application programming interface (API)-based forecasting.
[0071] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0072] FIG. 2A illustrates an example block diagram (200A) of the system
(108), in accordance with an embodiment of the present disclosure.
[0073] Referring to FIG. 2A, in an embodiment, the API-Based forecasting
system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the API-Based forecasting system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0074] In an embodiment, the API-Based forecasting system (108) may
include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the API-Based forecasting system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the API-Based forecasting system (108). Examples of such components include, but are not limited to, processing engine(s) (208), a database (210), a forecasting engine (212), and other engine(s) (214). In an embodiment, the other engine(s) (214) may include, but are not limited to, an input/output engine, a machine learning (ML) engine, and a notification engine.
[0075] In an embodiment, the processing engine(s) (208) may be
implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0076] In an embodiment, the processor (202) may receive data from the
computing device (104) associated with the users (102). The processor (202) may store the data in the database (210). The processor (202) may send the data to the forecasting engine (212). The forecasting engine (212) may generate a trained model based on the received data. In an aspect, during training of the forecasting model, the forecasting model finds the relationships between the input variables (e.g., input data) and the output variable (the predictions/forecasted value) based on historical patterns in the data.
[0077] In an embodiment, the forecasting engine (212) may utilize a range
of algorithms, such as, but not limited to, Autoregressive Integrated Moving Average (ARIMA), Exponential Smoothing, Prophet, or Long Short-Term Memory (LSTM) networks for forecasting tasks.
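By way of a non-limiting illustration, the exponential smoothing algorithm mentioned above may be sketched in a few lines of Python; the series values and the smoothing factor alpha are illustrative only.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: the level is a weighted blend of the
    newest observation and the previous level; the final level serves as
    the one-step-ahead forecast."""
    level = series[0]
    for observation in series[1:]:
        level = alpha * observation + (1 - alpha) * level
    return level

# Illustrative historical series; alpha=0.5 weights new data and history equally.
forecast = exponential_smoothing([10.0, 12.0, 11.0, 13.0], alpha=0.5)  # -> 12.0
```

A larger alpha makes the forecast track recent observations more closely, while a smaller alpha smooths out short-term fluctuations.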
[0078] In an embodiment, the forecasting engine (212) may generate
predictions for the future based on the input data using the trained model.
[0079] In an embodiment, the forecasting engine (212) may send the
generated predictions back to the users (102) through a response. The response may
comprise a link (e.g., a uniform resource locator (URL)). In an aspect, the link refers to a clickable element that connects one web page to another, or to a specific section within the same page. Links are fundamental to navigating the web and are typically indicated by underlined text or icons that users can click on to access related content or resources. In an aspect, the uniform resource locator (URL) is a reference or address used to locate resources on the internet. It is essentially a web address that specifies the location of a resource (such as a web page, file, or document).
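By way of a non-limiting illustration, a response carrying such a link may resemble the following JSON body; the field names and the URL are hypothetical, not mandated by the disclosure.

```python
import json

# Hypothetical response body; "results_url" is the link (URL) through which
# the user retrieves the generated predictions.
response = {
    "status": "completed",
    "results_url": "https://forecast.example.com/results/job-123",
}
body = json.dumps(response)     # serialized for transmission over HTTP
parsed = json.loads(body)       # what the user's application would decode
```

The user's application would typically follow `results_url` with a subsequent HTTP request to fetch the predicted values.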
[0080] In an embodiment, configuring a plurality of forecasting models to generate multiple predictions involves several steps, including selecting appropriate models, pre-processing the data, training the models, combining their outputs, and evaluating the predictions. A forecasting model is selected from multiple forecasting models. The forecasting models may comprise, but are not limited to, statistical models, machine learning models, and deep learning models. The statistical models may comprise autoregressive integrated moving average (ARIMA), exponential smoothing (ETS), etc. The machine learning models may comprise random forest, gradient boosting, support vector machines, etc. The deep learning models may comprise long short-term memory (LSTM) networks and recurrent neural networks (RNN). The pre-processing of data comprises data cleaning, normalization of data, and scaling of data. The data cleaning comprises handling missing values, outliers, and noise. Normalization/scaling of data comprises normalizing or scaling the data to ensure it fits the requirements of the models being used. The pre-processing of data may further comprise feature engineering. Feature engineering is the process of selecting, manipulating, and transforming raw data into features that can be used in supervised learning. A feature is any measurable input that can be used in the predictive model. The features may include time-based features, lagged variables, and external variables. The training of models comprises splitting the data, model training, hyperparameter tuning, etc. The splitting of data may divide the data into training and testing sets to evaluate model performance. The model training may train each model on the training data. The hyperparameter tuning may optimize the hyperparameters of each model using techniques like grid search or random search. Hyperparameter tuning is the process of selecting the optimal values for a model's hyperparameters. The hyperparameters are configuration variables used to tune the performance of the model. The generation of predictions comprises using each trained model to generate forecasts (e.g., predictions) on the test data or new input data. The generated predictions are combined using techniques such as averaging, weighted averaging, stacking, voting, etc. In averaging, the average of the predictions from the different models is computed. In weighted averaging, a weight is assigned to each model based on performance metrics (e.g., root mean squared error (RMSE), mean absolute error (MAE)) and a weighted average is computed. In an aspect, the RMSE measures the average difference between the values predicted by a model and the actual values. In stacking, the predictions of multiple well-performing models are combined. Stacking trains several different models on the same data and then uses their predictions as input to a final model. The final model then uses these predictions as input to make its own prediction. In voting, for classification tasks, majority voting is used to determine the final prediction.
[0081] The evaluation of performance comprises validation and metrics.
The combined predictions are evaluated against a validation set to ensure the ensemble model performs better than the individual models. Performance metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), or Mean Absolute Percentage Error (MAPE) are used to assess model accuracy.
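By way of a non-limiting illustration, the weighted-averaging combination and the RMSE metric described above may be sketched as follows; the prediction values and weights are illustrative only.

```python
import math

def weighted_average(model_preds, weights):
    """Combine per-model prediction lists elementwise using the given weights."""
    total = sum(weights)
    horizon = len(model_preds[0])
    return [sum(w * preds[i] for preds, w in zip(model_preds, weights)) / total
            for i in range(horizon)]

def rmse(actual, predicted):
    """Root mean squared error between actual and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Two models forecast a two-step horizon; the second model is weighted 3x,
# e.g., because it scored better on a validation set.
combined = weighted_average([[10.0, 12.0], [14.0, 16.0]], weights=[1.0, 3.0])
error = rmse([13.0, 16.0], combined)
```

Plain averaging is the special case in which all weights are equal.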
[0082] In an example, for forecasting of control plane signaling data in a
cellular network, the user may input control plane signaling data via the API to the
API-based forecasting engine. The control plane signaling data may comprise historical signaling data and network performance metrics. The historical signaling data comprises historical data on signaling messages such as attach requests, detach requests, location updates, handover requests, etc. This data further includes timestamps and the type of signaling event. The network performance metrics include data such as cell traffic load, subscriber density, geographical location, and time of day. An algorithm (e.g., a machine learning (ML) algorithm) is selected. The input data is pre-processed (i.e., handling missing values and removing outliers) and normalized into a suitable format. The forecasting model (e.g., an ML model) is trained based on the pre-processed data and the selected algorithm. In training, the historical data is split into training and validation sets. The forecasting model is trained on the training data, and its performance is validated using the validation sets. The trained forecasting model generates forecasts for the number of signaling messages (e.g., attach requests) over the forecast horizon (e.g., the next 10 days). The generated forecasts are provided to the user in the form of a link (e.g., a URL). The generated forecasts (e.g., predicted signaling demands) may be used in capacity adjustment, improving network configuration, and radio resource allocation.
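By way of a non-limiting illustration, the chronological train/validation split used in this example may be sketched as follows; the attach-request counts, the split ratio, and the mean baseline are illustrative only and stand in for the disclosure's trained ML model.

```python
def split_series(series, train_ratio=0.8):
    """Split a time series chronologically into training and validation parts."""
    cut = int(len(series) * train_ratio)
    return series[:cut], series[cut:]

# Illustrative daily attach-request counts.
history = [120, 130, 125, 140, 150, 145, 160, 155, 170, 165]
train, valid = split_series(history)

# A trivial mean baseline stands in for the trained forecasting model.
baseline = sum(train) / len(train)
forecast = [baseline] * len(valid)
```

Time series are split chronologically rather than randomly so that the validation set only contains observations that occur after the training window, mirroring real forecasting conditions.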
[0083] Although FIG. 2A shows exemplary components of the API-Based
forecasting system (108), in other embodiments, the API-Based forecasting system
(108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2A. Additionally, or alternatively, one or more components of the API-Based forecasting system (108) may perform functions described as being performed by one or more other components of the API-Based forecasting system (108).
[0084] FIG. 2B illustrates an example block diagram (200B) of the
forecasting engine (212), in accordance with an embodiment of the present disclosure.
[0085] The forecasting engine (212) comprises a receiving unit (222), a
processing unit (224), a sending unit (226), and a memory unit (228).
[0086] The receiving unit (222) is configured to receive data for forecasting
from a user (102) via an application programming interface (API). One of a plurality of algorithms to apply to the received data is selected by the user. In an example, the received data comprises, but is not limited to, data corresponding to the cellular network, such as control plane signalling data, user plane data, quality of service (QoS) parameters, and cell measurements. The QoS parameters comprise bandwidth, packet delay, packet loss, and priority levels. The cell measurements comprise downlink and uplink signal strength, interference levels, load balancing, and channel conditions. Further, the received data comprises data corresponding to, but not limited to, network traffic, bandwidth demand, network performance demand for network services, network security, capacity planning, network outages and downtime, etc. In an aspect, the forecasting engine may be used in various fields such as, but not limited to, cellular networks, business, finance, economics, meteorology, etc.
[0087] The processing unit (224) is configured to pre-process the received data. The pre-processing of the received data comprises, but is not limited to, handling missing values, normalizing the data, scaling the data, and transforming the data into a format for the selected algorithm. In an aspect, handling missing values may include checking for missing values in the dataset and performing actions such as removing rows with missing values, imputing missing values, etc. Normalizing the data may include transforming the data so that all features are on a comparable scale. Scaling the data may include ensuring that all features have a similar scale. Transforming the data may include modifying the data in a way that improves the performance of the model or makes the data more suitable for analysis.
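By way of a non-limiting illustration, two of the pre-processing operations described above, mean imputation of missing values and min-max scaling, may be sketched as follows; the sample values are illustrative only.

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    """Rescale values to the [0, 1] range so features are on a comparable scale."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

cleaned = impute_mean([1.0, None, 3.0])  # -> [1.0, 2.0, 3.0]
scaled = min_max_scale(cleaned)          # -> [0.0, 0.5, 1.0]
```

In practice the choice between imputing and dropping missing rows, and between min-max and standard scaling, depends on the selected algorithm.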
[0088] The processing unit (224) is configured to train one of a plurality of forecasting models based on the one selected algorithm from the plurality of algorithms and the pre-processed received data. The trained one of the plurality of forecasting models is configured to generate a plurality of predictions.
[0089] The sending unit (226) is configured to send the generated plurality
of predictions to the user through a response. The response may comprise a link (e.g., a uniform resource locator (URL)). The predictions are post-processed. The post-processing comprises visualizing the predictions, comparing the predictions with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
[0090] The memory unit (228) is configured to store data corresponding to
other units (e.g., the receiving unit (222), the processing unit (224), or the sending unit (226)), which may be fetched and analysed.
[0091] FIG. 3 illustrates an example system architecture (300), in
accordance with an embodiment of the present disclosure.
[0092] The system architecture (300) comprises a user (302), a user
application (304), an API integration (306), and an API-based forecasting engine (308). The system architecture (300) further comprises a queuing/processing protocol (310) and a hypertext transfer protocol (HTTP) (312). In an aspect, the system architecture (300) may be referred to as an API-based forecasting system.
[0093] In an aspect, Application Programming Interfaces (APIs) are
essential tools that allow different applications or services to communicate and interact with each other. APIs define the methods and data formats to request and exchange information between software components.
[0094] In an aspect, the API-based forecasting engine (308) utilizes
Application Programming Interfaces (APIs) to provide forecasting capabilities to external applications or systems. The API-based forecasting engine (308) typically exposes a set of endpoints or functions through which users can submit data and receive forecasts or predictions in return.
[0095] The user (302) may integrate the user's system or the user application (304) with the API-based forecasting engine (308) via the API integration (306). The API integration (306) connects the user's system or the user application (304) to the API-based forecasting engine (308) through the HTTP protocol (312).
[0096] In an aspect, the user (302) may send data for forecasting via the
user application (304). The data for forecasting is sent to the forecasting engine (308) via API requests. The data may contain information such as historical time series data and relevant features. In an example, the data comprises cellular network data such as control plane signalling data, user plane data, quality of service (QoS) parameters, and cell measurements received from a network infrastructure eNodeB/gNodeB. Because the users can send the data to the engine via the API requests, the need to manually input data or rely on external tools is eliminated. This saves time and effort in the forecasting process.
[0097] The forecasting engine (308) performs steps such as data pre-
processing, forecasting, predicting output data processing, and post-processing and analysis of the output data. The forecasting engine (308) uses the queuing/processing protocol (310) to pass data between the steps.
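By way of a non-limiting illustration, passing work between pipeline stages through a queue may be sketched with Python's standard library; the payloads are illustrative, and the disclosure does not specify a particular queuing/processing protocol.

```python
from queue import Queue

# Stage 1 (e.g., pre-processing) enqueues jobs; stage 2 (e.g., forecasting)
# dequeues them, decoupling the two steps of the pipeline.
jobs = Queue()
for payload in ({"user": "u1", "series": [1, 2]},
                {"user": "u2", "series": [3, 4]}):
    jobs.put(payload)

processed = []
while not jobs.empty():
    processed.append(jobs.get())
```

A queue of this kind lets each stage run at its own pace and, with worker threads or processes, lets the engine absorb bursts of concurrent user requests.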
[0098] The forecasting engine (308) generates the forecasting output (e.g., predictions). The user selects a desired algorithm from a plurality of algorithms. The plurality of algorithms comprises statistical models, machine learning models, and deep learning models. The forecasting engine supports a range of algorithms specifically designed for forecasting tasks. Further, the users do not need to possess in-depth knowledge of machine learning algorithms or forecasting techniques; they can rely on the pre-built algorithms (which are specifically designed for forecasting tasks) offered by the engine. This reduces the barrier to entry for utilizing forecasting capabilities.
[0099] The forecasting engine (308) performs the following steps:
[00100] At step 308-1, the forecasting engine (308) performs preprocessing
of the input data. The preprocessing comprises handling missing values, normalizing or scaling the data, or transforming the data into a format for the selected algorithm.
[00101] At step 308-2, the forecasting engine (308) performs forecasting
model training. The forecasting engine (308) trains a forecasting model based on the selected algorithm and the pre-processed data.
[00102] At step 308-3, the forecasting engine (308) performs predicting
output data processing. After training the model, the forecasting engine (308) generates predictions for the future based on the input data. The forecasting engine (308) uses the trained model and the forecasting algorithm to make predictions using relevant formulas or calculations corresponding to the chosen algorithm. The forecasting engine (308) sends the generated predictions back to the user through the response. The results comprise, but are not limited to, the predicted values, associated timestamps, and any additional relevant information.
[00103] At step 308-4, the forecasting engine (308) performs post-processing and analysis of the output data.
[00104] In an aspect, the user may perform any necessary post-processing or analysis on the predictions within the user's system or the user application (304). The post-processing or analysis involves visualizing the predictions, comparing them with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
[00105] The API-based forecasting can handle a large volume of requests from multiple users simultaneously. The users can make API calls as needed, and the engine can process and provide predictions efficiently. In this way, the API-based forecasting allows scalability in handling a large volume of user requests.
[00106] As illustrated in FIG. 3, in an embodiment, users (302) may collect
relevant data for forecasting upon receiving data from a base station, for example, an eNodeB/gNodeB in a cellular network. The types of input data that may be received from the base station may include, but are not limited to, control plane signalling data, user plane data, Quality of Service (QoS) information, and cell measurements. The data received may vary depending on the network technology, deployment, and capabilities of the base station and the UE.
[00107] The control plane signalling data may include, but is not limited to,
system information messages such as information related to cell configuration, neighbouring cells, network parameters, and the like, and paging messages such as messages used to notify the user equipment (UE) of incoming calls or messages.
[00108] The user plane data may include, but is not limited to, user data packets such as data packets transmitted between the base station and the UE. The UE encompasses various devices, including mobile phones, smartphones, tablets, laptops, Internet of Things (IoT) devices, and other wireless communication devices used by the users to connect to the cellular network. The user data packets include voice calls, internet data, or multimedia content.
[00109] QoS information may include, but not limited to, QoS parameters
such as information related to the QoS requirements and preferences of the UE,
including bandwidth, packet delay, packet loss, or priority levels.
[00110] Cell measurements may include, but not limited to, base station
measurements such as downlink and uplink signal strength, interference levels, load balancing, or channel conditions.
[00111] The API-Based forecasting system (108) may integrate with the user
interface and receive the data for forecasting via the user interface from the users (102). In an aspect, the users (102) may provide data via the user application (304).
[00112] Upon receiving the collected data for forecasting from the users
(102), the API-Based forecasting system (108) may pre-process the data and
normalize the data into a suitable format. The API-Based forecasting system (108) may use the pre-processed and normalized data to train a forecasting model. The forecasting model may be used for generating predictions for the future based on the received data. The API-Based forecasting system (108) may also send the generated predictions back to the users (102) through a response. The users (102) may receive the generated predictions and may perform any necessary post-processing or analysis on the results within their own systems or applications. This may involve visualizing the generated predictions, comparing the generated predictions with actual data, calculating forecast accuracy metrics, or incorporating the generated predictions into downstream decision-making processes.
[00113] FIG. 4A illustrates an example flow diagram (400A) implementing
an API-based forecasting method, in accordance with an embodiment of the present disclosure.
[00114] As illustrated in FIG. 4A, the forecasting method (400A) may
include the following steps:
[00115] User integration: The users (102) may integrate their own systems
or applications with the API-based forecasting system (108). This integration
involves connecting the user system to the system's API endpoints, typically through Hypertext Transfer Protocol (HTTP) requests. The API-based forecasting system provides users with easy access to forecasting capabilities without requiring the users to develop their own models or algorithms. The users can leverage the engine's forecasting functionalities by simply integrating their systems or applications with the API.
[00116] At step 402, data input: The users (102) may collect relevant data for
forecasting by receiving the control plane signalling data, the user plane data, the QoS information, and the cell measurements from the base station. The users (102) may send the collected data to the API-Based forecasting system (108) via API requests. The data may typically be formatted in a file format such as, but not limited to, the JavaScript Object Notation (JSON) format, containing the necessary information for forecasting, such as historical time series data or relevant features.
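By way of a non-limiting illustration, a JSON-formatted request body of the kind described above may resemble the following; the field names and values are hypothetical, not mandated by the disclosure.

```python
import json

# Hypothetical request body containing historical time series data.
request_body = {
    "algorithm": "ARIMA",
    "horizon_days": 10,
    "series": [
        {"timestamp": "2024-01-01T00:00:00Z", "attach_requests": 1520},
        {"timestamp": "2024-01-02T00:00:00Z", "attach_requests": 1610},
    ],
}
encoded = json.dumps(request_body).encode("utf-8")  # sent as the HTTP request body
```

The encoded bytes would form the body of the HTTP POST request that the user's application sends to the system's API endpoint.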
[00117] At step 404, algorithm selection: The users (102) may select a
desired algorithm to be applied to the input data. The API-Based forecasting system (108) may support a range of algorithms such as, but not limited to, ARIMA, exponential smoothing, Prophet, or LSTM networks for forecasting tasks.
[00118] At step 406, data pre-processing: The API-Based forecasting system
(108) may perform any required data pre-processing steps on the input data. The pre-processing steps include, but are not limited to, handling missing values, normalizing or scaling the data, or transforming the input data into a suitable format for the selected algorithm.
[00119] At step 408, model training: Based on the selected algorithm and the
input data, the API-Based forecasting system (108) may train a forecasting model. The training process may depend on the selected algorithm and may involve optimization techniques/methods such as, but not limited to, statistical estimation (e.g., maximum likelihood estimation (MLE)), gradient descent, or backpropagation. In an aspect, the optimization techniques/methods may be used in forecasting to enhance the accuracy, efficiency, and reliability of predictions. With the help of optimization techniques/methods, the forecasting models can be fine-tuned to deliver more accurate predictions, better utilize resources, and adapt to evolving data patterns effectively.
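By way of a non-limiting illustration, gradient descent, one of the optimization methods named above, may be sketched for fitting a simple linear trend; the data points and learning rate are illustrative only.

```python
def fit_trend(xs, ys, lr=0.01, steps=2000):
    """Fit y ~ a*x + b by gradient descent on the mean squared error."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the mean squared error w.r.t. a and b.
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# The underlying trend is y = 2x + 1, so a and b should approach 2 and 1.
a, b = fit_trend([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The same descend-along-the-gradient idea underlies backpropagation, which computes these gradients layer by layer in neural network models such as LSTM.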
[00120] At step 410, prediction generation: Once the model is trained, the
API-Based forecasting system (108) may generate predictions for the future based on the input data, to prevent network congestion and improve the performance of the network. The API-Based forecasting system (108) may use the trained model and the forecasting algorithm to make predictions using relevant formulas or calculations corresponding to the selected algorithm.
[00121] At step 412, output results: The system (108) may send the generated predictions back to the user (102) through a response. The response may comprise a link (e.g., a URL). The results may typically be returned as, but not limited to, JSON or another suitable format, containing the predicted values, associated timestamps, and any additional relevant information.
[00122] At step 414, post-processing and analysis: The users (102) may receive the predictions and perform any necessary post-processing or analysis on the results within their own systems or applications. The post-processing or analysis steps may involve, but are not limited to, visualizing the predictions, comparing the predictions with the actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
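By way of a non-limiting illustration, one such forecast accuracy metric, the mean absolute percentage error (MAPE), may be computed as follows; the values are illustrative only.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, expressed in percent."""
    return (100.0 / len(actual)) * sum(abs((a - p) / a)
                                       for a, p in zip(actual, predicted))

# Errors of 10% and 5% average to a MAPE of 7.5%.
score = mape([100.0, 200.0], [110.0, 190.0])
```

MAPE is scale-independent, which makes it convenient for comparing forecast accuracy across series with different magnitudes, though it is undefined when an actual value is zero.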
[00123] The API-based forecasting enables seamless integration with
existing systems or applications. The users can easily incorporate the forecasting engine into their workflows by making API requests. The predictions are received through a response. This ensures smooth integration with minimal disruption to the existing processes.
[00124] FIG. 4B illustrates an example flow diagram implementing a method
(400B) for Application Programming Interface (API)-based forecasting, in accordance with an embodiment of the present disclosure.
[00125] At step 422, the method (400B) includes receiving data for
forecasting from a user (302) via the API (306). The user selects one of a plurality of algorithms to apply to the received data. In an example, the received data comprises, but is not limited to, control plane signalling data, user plane data, quality of service (QoS) parameters, and cell measurements. In an example, the plurality of algorithms comprises statistical models, machine learning models, and deep learning models. The plurality of algorithms may comprise, but is not limited to, an autoregressive integrated moving average (ARIMA), an exponential smoothing, a Prophet, a random forest, a gradient boosting, support vector machines, recurrent neural networks (RNN), or a long short-term memory (LSTM).
[00126] At step 424, the method (400B) includes pre-processing the received
data. The pre-processing of the received data comprises, but is not limited to, handling missing values, normalizing the data, scaling the data, and transforming the data into a format for the selected algorithm.
[00127] At step 426, the method (400B) includes training one of a plurality
of forecasting models based on the one selected algorithm from the plurality of algorithms and the pre-processed received data. In an aspect, the training of the at least one of the plurality of forecasting models further includes a plurality of optimization methods. The plurality of optimization methods includes a statistical estimation, a gradient descent, or a backpropagation.
[00128] At step 428, the method (400B) includes generating a plurality of
predictions by the trained one of the plurality of forecasting models.
[00129] At step 430, the method (400B) includes sending the generated
plurality of predictions to the user through a response. The response may comprise
a link (e.g., a URL). Further, the predictions are post-processed. The post-processing comprises, but is not limited to, visualizing the predictions, comparing the predictions with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
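By way of a non-limiting illustration, steps 422 through 430 may be tied together in a single sketch; the trivial mean "model" and the URL are placeholders for the disclosure's trained forecasting model and response link.

```python
def forecast_pipeline(raw_series, horizon=3):
    """Sketch of steps 422-430: receive, pre-process, train, predict, respond."""
    cleaned = [v for v in raw_series if v is not None]   # step 424: pre-process
    level = sum(cleaned) / len(cleaned)                  # step 426: trivial "fit"
    predictions = [level] * horizon                      # step 428: predict
    return {                                             # step 430: respond
        "predictions": predictions,
        "link": "https://forecast.example.com/results/1",  # hypothetical URL
    }

response = forecast_pipeline([10.0, None, 14.0])
```

In the full method, the trivial mean would be replaced by whichever of the plurality of algorithms the user selected at step 422.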
[00130] FIG. 5 illustrates an example computer system (500) in which or
with which the embodiments of the present disclosure may be implemented.
[00131] As shown in FIG. 5, the computer system (500) may include an
external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port(s) (560), and a
processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[00132] In an embodiment, the main memory (530) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[00133] In an embodiment, the bus (520) may communicatively couple the
processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[00134] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[00135] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
[00136] The present disclosure provides a technical advancement related to forecasting. This advancement addresses the limitations of existing solutions, including problems of forecasting technology. The disclosure involves API-based forecasting, which offers significant improvements in forecasting, such as integration of forecasting capabilities through the API, providing a range of pre-built algorithms specifically designed for forecasting tasks, and automation of the forecasting process. By implementing the API-based forecasting, the disclosed invention offers advantages such as easy access to forecasting capabilities, making API requests, receiving predictions through a response, and eliminating the need for manual data input.
ADVANTAGES OF THE INVENTION
[00137] The present disclosure provides a system and a method that uses
Artificial Intelligence (AI) and Machine Learning (ML) for automating various stages of forecasting process, including data pre-processing, model training, and prediction generation, thereby reducing the burden on users and saving valuable time and effort of the users. This enables the users to focus on the data they want to forecast rather than the technical aspects of the modelling process.
[00138] The present disclosure integrates forecasting capabilities into
existing systems or applications through an Application Programming Interface (API), which provides a standardized interface for users to input data and receive predictions.
[00139] The present disclosure provides a list of algorithms from which the
users can select their desired algorithm without needing to understand the details of each algorithm.
[00140] The present disclosure provides a system that efficiently processes incoming data, applies the selected algorithm, and promptly returns predictions. This scalability and efficiency enable users to effectively leverage forecasting capabilities, even with high data volumes or demanding requirements.
[00141] The present disclosure provides a system that simplifies the deployment and maintenance of forecasting models. The API-based forecasting approach ensures that the users access the latest version of the forecasting system and eliminates the need for manual maintenance.
We claim:
1. An application programming interface (API)-based forecasting system
(108, 300) comprising a forecasting engine (212, 308), the forecasting
engine (212, 308) comprising:
a receiving unit (222) configured to receive data for forecasting from a user (102, 302) via an API (306) and at least one of plurality of algorithms selected by the user (102, 302) to apply to the received data;
a processing unit (224) configured to:
pre-process the received data; and
train at least one of plurality of forecasting models based on
the at least one selected algorithm from plurality of algorithms and
the pre-processed received data, wherein the at least one trained
forecasting model of plurality of forecasting models is configured to
generate a plurality of predictions; and
a sending unit (226) configured to send the generated plurality of predictions to the user (102, 302) through a response.
2. The system (108, 300) as claimed in claim 1, wherein
the predictions are post-processed, wherein the post-processing includes visualizing the predictions, comparing the predictions with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
3. The system (108, 300) as claimed in claim 1, wherein
the pre-processing of the received data includes handling missing values, normalizing the data, scaling the data, transforming the data into a format for the selected algorithm.
4. The system (108, 300) as claimed in claim 1, wherein training of the at least one of plurality of forecasting models further includes a plurality of optimization methods, wherein the plurality of optimization methods includes a statistical estimation, a gradient descent, or a backpropagation.
5. The system (108, 300) as claimed in claim 1, wherein the response comprises a link.
6. A method (400B) for performing an application programming interface (API)-based forecasting by a forecasting engine (212, 308), the method comprising:
receiving (422) data for forecasting from a user (102, 302) via the API (306) and at least one of a plurality of algorithms selected by the user (102, 302) to apply to the received data;
pre-processing (424) the received data;
training (426) at least one of a plurality of forecasting models based on the at least one selected algorithm from the plurality of algorithms and the pre-processed received data, wherein the at least one trained forecasting model of the plurality of forecasting models is configured to generate (428) a plurality of predictions; and
sending (430) the generated plurality of predictions to the user through a response.
7. The method (400B) as claimed in claim 6, wherein
the predictions are post-processed, wherein the post-processing includes visualizing the predictions, comparing the predictions with actual data, calculating forecast accuracy metrics, or incorporating the predictions into downstream decision-making processes.
8. The method (400B) as claimed in claim 6, wherein
the pre-processing of the received data includes handling missing values, normalizing the data, scaling the data, or transforming the data into a format suitable for the selected algorithm.
9. The method (400B) as claimed in claim 6, wherein
training of the at least one of the plurality of forecasting models further includes a plurality of optimization methods, wherein the plurality of optimization methods includes statistical estimation, gradient descent, or backpropagation.
10. The method (400B) as claimed in claim 6, wherein the response comprises a link.
11. A user equipment (UE) (104) communicatively coupled with a system (108), wherein the coupling comprises steps of:
receiving, by the system (108), a connection request;
sending, by the system (108), an acknowledgment of the connection request to the UE (104); and
transmitting a plurality of signals in response to the connection request, wherein the system (108) is configured for performing an application programming interface (API)-based forecasting as claimed in claim 1.
| # | Name | Date |
|---|---|---|
| 1 | 202321049636-STATEMENT OF UNDERTAKING (FORM 3) [24-07-2023(online)].pdf | 2023-07-24 |
| 2 | 202321049636-PROVISIONAL SPECIFICATION [24-07-2023(online)].pdf | 2023-07-24 |
| 3 | 202321049636-FORM 1 [24-07-2023(online)].pdf | 2023-07-24 |
| 4 | 202321049636-DRAWINGS [24-07-2023(online)].pdf | 2023-07-24 |
| 5 | 202321049636-DECLARATION OF INVENTORSHIP (FORM 5) [24-07-2023(online)].pdf | 2023-07-24 |
| 6 | 202321049636-FORM-26 [19-10-2023(online)].pdf | 2023-10-19 |
| 7 | 202321049636-FORM-26 [26-04-2024(online)].pdf | 2024-04-26 |
| 8 | 202321049636-FORM 13 [26-04-2024(online)].pdf | 2024-04-26 |
| 9 | 202321049636-FORM-26 [30-04-2024(online)].pdf | 2024-04-30 |
| 10 | 202321049636-Request Letter-Correspondence [03-06-2024(online)].pdf | 2024-06-03 |
| 11 | 202321049636-Power of Attorney [03-06-2024(online)].pdf | 2024-06-03 |
| 12 | 202321049636-Covering Letter [03-06-2024(online)].pdf | 2024-06-03 |
| 13 | 202321049636-ENDORSEMENT BY INVENTORS [01-07-2024(online)].pdf | 2024-07-01 |
| 14 | 202321049636-DRAWING [01-07-2024(online)].pdf | 2024-07-01 |
| 15 | 202321049636-CORRESPONDENCE-OTHERS [01-07-2024(online)].pdf | 2024-07-01 |
| 16 | 202321049636-COMPLETE SPECIFICATION [01-07-2024(online)].pdf | 2024-07-01 |
| 17 | 202321049636-CORRESPONDENCE(IPO)-(WIPO DAS)-10-07-2024.pdf | 2024-07-10 |
| 18 | 202321049636-ORIGINAL UR 6(1A) FORM 26-100724.pdf | 2024-07-15 |
| 19 | Abstract1.jpg | 2024-08-02 |
| 20 | 202321049636-FORM 18 [01-10-2024(online)].pdf | 2024-10-01 |
| 21 | 202321049636-FORM 3 [12-11-2024(online)].pdf | 2024-11-12 |