Abstract: The present disclosure provides a system (108) and a method (400) for bandwidth optimization in a wireless network. The system (108) may collect call data records (CDRs) and bandwidth usage data for a particular time period using a data collection engine (212). The system (108) normalizes and pre-processes the collected data for feature extraction and further analysis using a computation engine (214). The system (108) may extract and compute relevant features from the normalized and pre-processed data to capture call patterns and user characteristics using the computation engine (214). The system (108) trains a machine learning (ML) model using the pre-processed data and the computed features to learn the call characteristics and the bandwidth usage patterns of the network using an analyzing engine (216). The trained model predicts future bandwidth utilization, generates proactive recommendations, and notifies the network operations team for optimized bandwidth allocation and network resource planning. Fig. 3
FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure generally relates to a wireless
telecommunications network. More particularly, the present disclosure relates to a system and a method for bandwidth optimization in a wireless network.
DEFINITIONS
[0003] As used in the present disclosure, the following terms are generally intended to have the meanings set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression ‘Bandwidth optimization’ used hereinafter in the
specification refers to strategies and techniques employed to facilitate efficient network bandwidth utilization. Bandwidth signifies the capability of a network to transmit data, and the bandwidth is optimized to improve network performance and user experience.
[0005] The expression ‘Call Data Records (CDRs)’ used hereinafter in the
specification refers to detailed records of telephone calls that are generated by telephone exchanges or other telecommunications equipment. They typically contain various attributes of a call, such as the originating and destination phone
numbers, the duration of the call, the time the call was made, and other relevant information.
[0006] The expression ‘Application Programming Interfaces (APIs)’ used
hereinafter in the specification refers to sets of rules, protocols, and tools that define how software components should interact with each other. They specify the kinds of requests that can be made, how they should be made, the data formats that should be used, and the conventions to follow. APIs allow different software applications to communicate and exchange data with each other, enabling developers to build new applications that integrate with existing systems.
[0007] These definitions are in addition to those expressed in the art.
BACKGROUND OF THE INVENTION
[0008] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0009] The concept of bandwidth optimization encompasses a set of strategies and techniques aimed at maximizing the efficient utilization of network bandwidth. Bandwidth represents the capacity of a network to transmit data. The primary goal of bandwidth optimization is to enhance network performance and improve user experience by ensuring that available bandwidth is used effectively and responsibly. This may involve implementing protocols, traffic prioritization, compression techniques, and other methods to streamline data transmission and minimize congestion. By employing these practices, organizations and individuals can make the most of their network resources while maintaining high performance and reliability. Currently, bandwidth requirements are catered to only after users complain about network congestion. The additional bandwidth is then allocated based on resource availability, which is time-consuming and degrades user satisfaction.
[0010] In recent years, the explosive growth of mobile devices and data-
intensive applications has led to a significant surge in bandwidth consumption in wireless networks. Telecom operators are constantly faced with the challenge of managing and optimizing network resources to meet the ever-increasing demand for high-speed and reliable connectivity.
[0011] Traditionally, bandwidth allocation and network resource planning
have been reactive in nature, often relying on manual intervention and ad-hoc solutions. Network operators typically respond to bandwidth issues after they occur, leading to network congestion, poor user experience, and customer dissatisfaction. The lack of proactive and data-driven approaches to bandwidth optimization has resulted in suboptimal network performance and inefficient utilization of network resources.
[0012] One of the major drawbacks of existing solutions is the inability to
accurately predict future bandwidth requirements. Without reliable forecasting, network operators struggle to make informed decisions about capacity planning and infrastructure investments. This often results in over-provisioning or under-provisioning of network resources, leading to increased costs and reduced service quality.
[0013] Another significant issue is the lack of granular insights into network
usage patterns and user behaviour. Existing solutions often fail to capture and analyze the vast amount of data generated by mobile networks, such as call data records (CDRs) and bandwidth usage logs. Without a comprehensive understanding of how users consume bandwidth and how network resources are utilized, operators cannot optimize their networks effectively.
[0014] Furthermore, current approaches to bandwidth optimization often
rely on manual and rule-based processes, which are time-consuming, error-prone,
and fail to adapt to the dynamic nature of mobile networks. The lack of automation and intelligent decision-making capabilities hinders the ability of network operators to respond quickly to changing network conditions and user demands.
[0015] There is, therefore, a need in the art to provide an improved system
and method that can mitigate the problems associated with the prior arts.
SUMMARY
[0016] The present disclosure discloses a system for bandwidth
optimization in a wireless network. The system includes a memory and one or more processor(s). The one or more processor(s) are configured to fetch and execute instructions stored in the memory to collect, by a data collection engine, one or more call data records (CDRs) and bandwidth usage data for a predefined time period. The one or more processor(s) are configured to normalize and pre-process, by a computation engine, the collected data to generate normalized and pre-processed data. The one or more processor(s) are configured to extract and compute, by the computation engine, one or more relevant features from the normalized and pre-processed data. The one or more processor(s) are configured to facilitate an analyzing engine configured to receive the one or more relevant extracted features and the collected data and further configured to capture one or more call patterns and user characteristics based on the one or more relevant features and collected data to predict a future bandwidth utilization. The one or more processor(s) are configured to generate one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted future bandwidth utilization.
[0017] In an embodiment, the analyzing engine is configured to train a ML
model based on the pre-processed data and the computed features, wherein the ML model is trained to learn call characteristics and bandwidth usage patterns of the network.
[0018] In an embodiment, the data collection engine is configured to collect
and aggregate CDRs and bandwidth usage data from one or more network sources including base stations, routers, switches, and gateways.
[0019] In an embodiment, the computation engine is configured to perform
data cleaning, normalization, and feature engineering to prepare the normalized and pre-processed data for the analyzing engine.
[0020] In an embodiment, the system further includes a notification engine
configured to generate visualizations and reports of the predicted future bandwidth utilization and one or more generated recommendations.
[0021] In an embodiment, the notification engine is configured to notify a
network operations team regarding the predicted future bandwidth utilization and the proactive recommendations.
[0022] In an embodiment, the ML model is trained using historical CDRs,
bandwidth usage patterns, network performance metrics, and other relevant data to identify patterns and anomalies that indicate potential bandwidth issues and to learn the relationships between various network parameters and bandwidth consumption.
[0023] In an embodiment, the one or more relevant features include at least one of peak usage hours, high-traffic locations, popular applications and services, and user demographics.
[0024] In an embodiment, the data collection engine, the computation
engine, and the analyzing engine are deployed on a cloud-based platform and accessible via a web-based interface, an Application Programming Interface (API), and a mobile application, enabling seamless integration with existing network management systems and tools used by the network operations team.
[0025] The present disclosure further discloses a method for bandwidth
optimization in a wireless network. The method includes collecting, by a data collection engine, one or more call data records (CDRs) and bandwidth usage data for a predefined time period. The method includes normalizing and pre-processing,
by a computation engine, the collected data to generate normalized and pre-processed data. The method includes extracting and computing, by the computation engine, one or more relevant features from the normalized and pre-processed data. The method includes facilitating an analyzing engine for receiving the one or more relevant extracted features and the collected data and capturing one or more call patterns and user characteristics based on the one or more relevant features and collected data to predict a future bandwidth utilization. The method includes generating one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted future bandwidth utilization.
[0026] In an embodiment, the method includes training, by the analyzing
engine, a ML model based on the pre-processed data and the computed features to learn call characteristics and bandwidth usage patterns of the network.
[0027] In an embodiment, the method includes notifying, by the analyzing
engine, a network operations team regarding the predicted future bandwidth utilization and the proactive recommendations.
[0028] In an embodiment, the step of collecting the CDRs and bandwidth
usage data involves gathering data from one or more network sources including base stations, routers, switches, and gateways.
[0029] In an embodiment, the step of extracting and computing the one or
more relevant features involves applying one or more statistical analysis and data mining techniques to identify significant patterns and correlations in the pre-processed data.
[0030] In an embodiment, the step of training the ML model involves using
supervised and unsupervised learning algorithms to build accurate predictive models.
[0031] In an embodiment, the step of predicting the future bandwidth
utilization involves using the trained ML model to forecast bandwidth consumption for different network segments, locations, and time periods.
[0032] In an embodiment, the step of generating the one or more proactive
recommendations involves using optimization algorithms to suggest optimal bandwidth allocation strategies and network configurations based on the predicted utilization.
[0033] In an embodiment, the step of notifying the network operations team
involves generating reports, dashboards, and alerts that highlight the predicted bandwidth utilization and one or more generated recommendations.
[0034] In an embodiment, the one or more proactive recommendations include adding new network capacity, optimizing network configurations, migrating users to different network technologies, and implementing traffic shaping and prioritization policies, wherein the recommendations enable the network operator to optimize network performance and user experience.
[0035] The present disclosure further discloses user equipment that is
communicatively coupled to a system to support bandwidth optimization in a wireless network. The user equipment is configured to collect call data records (CDRs) and bandwidth usage data for the predefined time period. The user equipment is configured to transmit the CDRs and bandwidth usage data to the system, wherein the system is configured for bandwidth optimization in the wireless network. The system includes the memory and the one or more processor(s). The one or more processor(s) are configured to fetch and execute instructions stored in the memory to collect, by the data collection engine, one or more call data records (CDRs) and bandwidth usage data for a predefined time period. The one or more processor(s) are configured to normalize and pre-process, by the computation engine, the collected data to generate normalized and pre-processed data. The one or more processor(s) are configured to extract and compute, by the computation engine, one or more relevant features from the normalized and pre-processed data.
The one or more processor(s) are configured to facilitate an analyzing engine configured to receive the one or more relevant extracted features and the collected data and further configured to capture one or more call patterns and user characteristics based on the one or more relevant features and collected data to predict a future bandwidth utilization. The one or more processor(s) are configured to generate one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted future bandwidth utilization.
OBJECTS OF THE INVENTION
[0036] It is an object of the present disclosure to provide a system and a
method for bandwidth optimization in a wireless network by leveraging machine learning techniques to accurately predict future bandwidth utilization, extract relevant insights from network data, and generate proactive recommendations for optimized resource allocation and network planning.
[0037] It is an object of the present disclosure to collect and analyze past
network data, including call data records (CDRs), bandwidth usage data, and other relevant features, to capture call patterns, user characteristics, and network usage patterns for detecting bandwidth requirements.
[0038] It is an object of the present disclosure to provide a system and a method that utilizes an analyzing engine to train ML models based on pre-processed network data and computed features, enabling the models to learn call characteristics and bandwidth usage patterns of the network, predict future bandwidth utilization, and generate proactive recommendations for optimized bandwidth allocation and network resource planning.
[0039] It is an object of the present disclosure to provide a system and a method that facilitates proactive bandwidth provisioning and infrastructure upgrades by predicting network congestion and promptly identifying areas that may require additional bandwidth, thus ensuring seamless user experience and optimal network performance.
[0040] It is an object of the present disclosure to provide a system and a
method that predicts and proactively fulfils additional bandwidth and capacity
requirements based on anticipated future demand, enabling network operators to
make informed decisions about capacity planning and infrastructure investments.
[0041] It is an object of the present disclosure to provide a system and a
method that continuously monitors and optimizes the network by periodically
collecting and analyzing network data, adapting to changing network conditions
and user demands in real-time, and providing a closed-loop feedback mechanism
for effective network management and optimization.
BRIEF DESCRIPTION OF DRAWINGS
[0042] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0043] FIG. 1 illustrates an exemplary network architecture for
implementing a system for bandwidth optimization in a wireless network, in
accordance with an embodiment of the present disclosure.
[0044] FIG. 2 illustrates an exemplary block diagram of the system, in
accordance with an embodiment of the present disclosure.
[0045] FIG. 3 illustrates an exemplary architecture of the system, in
accordance with an embodiment of the present disclosure.
[0046] FIG. 4 illustrates an exemplary flow diagram of a method for
bandwidth optimization in the wireless network, in accordance with an embodiment
of the present disclosure.
[0047] FIG. 5 illustrates a computer system in which or with which the
embodiments of the present disclosure may be implemented.
[0048] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User Equipment(s)
106 – Network
108 – System
202 – One or more processor(s)
204 – Memory
206 – A Plurality of Interface(s)
208 – Processing Engine
210 – Database
212 – Data Collection Engine
214 – Computation Engine
216 – Analyzing Engine
218 – Notification Engine
220 – Other Module(s)
300 – System Architecture
302 – Ingestion Layer
308 – Compute Layer
314 – AI/ML Layer
400 – Method
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
BRIEF DESCRIPTION OF THE INVENTION
[0049] In the following description, for explanation, various specific details
are outlined in order to provide a thorough understanding of embodiments of the
present disclosure. It will be apparent, however, that embodiments of the present
disclosure may be practiced without these specific details. Several features
described hereafter can each be used independently of one another or with any
combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0050] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0051] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail to avoid obscuring the embodiments.
[0052] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling
function or the main function.
[0053] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term
“comprising” as an open transition word without precluding any additional or other
elements.
[0054] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included
in at least one embodiment of the present disclosure. Thus, the appearances of the
phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment.
Furthermore, the particular features, structures, or characteristics may be combined
in any suitable manner in one or more embodiments.
[0055] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0056] Bandwidth optimization encompasses a range of strategies and
techniques designed to maximize the efficient utilization of network bandwidth.
Bandwidth, in this context, refers to the capacity of a network to transmit data. By
optimizing bandwidth, network performance can be enhanced, leading to an
improved user experience.
[0057] Presently, the allocation of additional bandwidth only occurs in
response to user complaints about network congestion. This reactive approach results in a time-consuming process, as the allocation is based on resource availability, which ultimately diminishes user satisfaction.
[0058] To address this issue, the present disclosure leverages cutting-edge
AI and machine learning techniques to proactively detect bandwidth requirements. By doing so, the corresponding team is alerted in advance, enabling them to take timely action. This not only ensures an enhanced user experience but also facilitates proactive management of network resources.
[0059] Furthermore, the present disclosure analyzes historical network data,
including call records, bandwidth utilization, and network congestion patterns. This comprehensive analysis provides valuable insights contributing to efficient bandwidth management and improved network performance.
[0060] The various embodiments throughout the disclosure will now be
explained in more detail with reference to FIGS. 1-5.
[0061] FIG. 1 illustrates an exemplary network architecture (100) for
implementing a system (108) for bandwidth optimization in a wireless network (106), in accordance with an embodiment of the present disclosure.
[0062] As illustrated in FIG. 1, one or more computing devices (104-1, 104-
2…104-N) may be connected to the system (108) through the wireless network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) may be collectively referred as computing devices (104) and individually referred as a computing device (104).
One or more users (102-1, 102-2…102-N) may provide one or more requests to the
system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred as users (102) and individually referred as a user (102). Further, the computing devices (104) may also be referred as a user equipment (UE) (104) or as UEs (104) throughout the
disclosure.
[0063] In an embodiment, the computing device (104) may include, but not
be limited to, a mobile, a laptop, etc. Further, the computing device (104) may
include one or more in-built or externally coupled accessories including, but not
limited to, a visual aid device such as a camera, audio aid, microphone, or keyboard.
Furthermore, the computing device (104) may include a mobile phone, smartphone,
virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user
(102) such as a touchpad, touch-enabled screen, electronic pen, and the like may be used.
[0064] In an embodiment, the network (106) may include, by way of
example but not limitation, at least a portion of one or more networks having one
or more nodes that transmit, receive, forward, generate, buffer, store, route, switch,
process, or a combination thereof, etc. one or more messages, packets, signals,
waves, voltage or current levels, some combination thereof, or so forth. The
network (106) may also include, by way of example but not limitation, one or more
of a wireless network, a wired network, an internet, an intranet, a public network, a
private network, a packet-switched network, a circuit-switched network, an ad hoc
network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0065] In an embodiment, the system (108) may receive and analyze historical data of the network (106). The historical data may include, but is not limited
to, call records, bandwidth utilization, and network congestion patterns. The system
(108) may normalize and pre-process the historical data. The system (108) may
extract one or more relevant features such as, but not limited to, network load, time
of day, and user behaviour from the pre-processed historical data to capture call
patterns and user characteristics. The system (108) may feed the historical data and
the extracted features to an artificial intelligence (AI)/machine learning (ML)
model, and the AI/ML model may be trained based on the historical data and the
extracted features. The AI/ML model may be trained to predict or understand call
characteristics and bandwidth usage patterns of the network (106). The trained
AI/ML model may predict the future bandwidth utilization, generate proactive
recommendations, and notify a network operations team or the users (102) regarding the future bandwidth utilization and the proactive recommendations for optimized bandwidth allocation and network resource planning.
[0066] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may
include fewer components, different components, differently arranged components,
or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may
perform functions described as being performed by one or more other components of the network architecture (100).
[0067] FIG. 2 illustrates an exemplary block diagram (200) of the system
(108), in accordance with an embodiment of the present disclosure.
[0068] Referring to FIG. 2, in an embodiment, the system (108) may
include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other
capabilities, the one or more processor(s) (202) may be configured to fetch and
execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a
network service. The memory (204) may comprise any non-transitory storage
device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0069] In an embodiment, the system (108) may include an interface(s)
(206). The interface(s) (206) may comprise a variety of interfaces, for example,
interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more
components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210).
[0070] In an embodiment, the processing engine(s) (208) may be
implemented as a combination of hardware and programming (for example,
programmable instructions) to implement one or more functionalities of the
processing engine(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage
medium, and the hardware for the processing engine(s) (208) may comprise a
processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the
machine-readable storage medium storing the instructions and the processing
resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0071] Further, the processing engine(s) (208) may include a data collection engine (212), a computation engine (214), an analyzing engine (216) (also referred to as a machine learning (ML) engine), a notification engine (218), and other engine(s) (220). In an embodiment, the other engine(s) (220) may include an input/output engine. The data collection engine (212) may collect historical data of the network (106). The computation engine (214) is configured to normalize and pre-process the collected data to generate normalized and pre-processed data. The computation engine (214) is configured to extract and compute relevant features from the normalized and pre-processed data to capture call patterns and user characteristics. The process of extracting and computing relevant features by the computation engine (214) involves several steps. Initially, the computation engine
(214) may identify potential features based on domain knowledge of telecommunications networks and user behavior, selecting those with the potential to provide insights into call patterns and user characteristics. The engine then performs statistical analyses on the normalized and pre-processed data to identify features showing significant variation or correlation with network performance and user behavior. This may include techniques such as correlation analysis, principal component analysis, and factor analysis. Advanced machine learning techniques, such as recursive feature elimination or random forest feature importance, may be employed to automatically identify the most relevant features from the large set of potential features. Finally, the selected features may be reviewed and validated by domain experts to ensure they align with industry knowledge and business objectives.
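By way of a purely illustrative, non-limiting example, the feature-relevance step described above may be sketched in Python as follows, assuming the pandas and scikit-learn libraries are available; all column names and values are hypothetical placeholders rather than part of the disclosed system:

# Illustrative sketch only: ranks hypothetical CDR-derived features by
# importance and selects a subset via recursive feature elimination.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

features = pd.DataFrame({
    "peak_hour_calls":   [120, 95, 300, 210, 80],
    "avg_call_duration": [180, 240, 90, 300, 150],   # seconds
    "data_volume_mb":    [512, 870, 1400, 660, 300],
})
target = pd.Series([15.2, 18.9, 42.0, 25.5, 9.8])    # bandwidth demand, Mbps

# Random-forest feature importance: a higher score suggests a feature more
# relevant to predicting bandwidth demand.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, target)
for name, score in zip(features.columns, forest.feature_importances_):
    print(f"{name}: {score:.3f}")

# Recursive feature elimination retains the two strongest features.
rfe = RFE(RandomForestRegressor(n_estimators=50, random_state=0),
          n_features_to_select=2).fit(features, target)
print("selected:", list(features.columns[rfe.support_]))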
[0072] The relevancy of features is determined through a combination of statistical significance, predictive power, and domain expertise. Features are considered relevant if they demonstrate a strong correlation with the network performance metrics, user behavior patterns, or bandwidth utilization. The relevancy is quantified using metrics such as correlation coefficients, mutual information scores, or feature importance scores from machine learning models.
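As a further illustration of how such relevancy may be quantified, the following Python sketch computes correlation coefficients and mutual information scores on synthetic data; it assumes NumPy and scikit-learn, and the feature names are hypothetical:

# Illustrative sketch only: quantifies feature relevancy with correlation
# coefficients and mutual information scores, as described above.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
network_load = rng.uniform(0, 1, 500)       # hypothetical, informative feature
noise_feature = rng.uniform(0, 1, 500)      # irrelevant by construction
bandwidth = 10 + 30 * network_load + rng.normal(0, 1, 500)

X = np.column_stack([network_load, noise_feature])
mi = mutual_info_regression(X, bandwidth, random_state=0)
corr = [np.corrcoef(X[:, i], bandwidth)[0, 1] for i in range(X.shape[1])]

for name, m, c in zip(["network_load", "noise_feature"], mi, corr):
    print(f"{name}: mutual_info={m:.2f}, correlation={c:.2f}")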
[0073] The relevant features extracted and computed by the computation engine (214) may include temporal patterns such as peak usage hours and seasonal variations, spatial patterns including high-traffic locations and user movement patterns, user segmentation based on demographics and usage patterns, application profiling that identifies popular applications and data consumption by application type, network performance indicators like throughput, latency, and packet loss rates, device characteristics including types and operating systems, call patterns such as frequency and average duration, and data usage patterns including average consumption and frequency of high-bandwidth activities. These features are used to build comprehensive profiles of network usage and demand by creating multi-dimensional representations of user behavior and network performance. For example, user mobility patterns are combined with application usage data to predict
future bandwidth needs in specific locations. Similarly, network performance metrics are correlated with user demographics to identify segments that may require tailored service offerings or infrastructure upgrades.
[0074] These features may be extracted and computed using various methods. For example, time series analysis is employed for temporal patterns, while geospatial analysis is used for location-based features. Furthermore, clustering algorithms facilitate user segmentation, and statistical aggregations such as mean, median, and standard deviation are used for usage patterns. Signal processing techniques are applied for network performance indicators, and natural language processing may be utilized to analyze application names and types. By employing these comprehensive methods, the computation engine (214) ensures that the most relevant and informative features are extracted and computed from the raw data, providing a solid foundation for the subsequent machine learning modelling and bandwidth optimization processes.
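For instance, the temporal-pattern and statistical-aggregation methods mentioned above may be illustrated with the following Python sketch using pandas; the records and values are hypothetical:

# Illustrative sketch only: derives temporal features (peak usage hour) and
# statistical aggregations (mean/median/std) from hypothetical usage records.
import pandas as pd

usage = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-07-04 10:15", "2024-07-04 10:45", "2024-07-04 18:05",
        "2024-07-04 18:30", "2024-07-04 18:55", "2024-07-05 09:10",
    ]),
    "data_volume_mb": [117, 80, 560, 430, 390, 95],
})

usage["hour"] = usage["timestamp"].dt.hour
hourly = usage.groupby("hour")["data_volume_mb"].sum()

print("peak usage hour:", hourly.idxmax())   # hour with highest total volume
print("mean/median/std:",
      usage["data_volume_mb"].mean(),
      usage["data_volume_mb"].median(),
      usage["data_volume_mb"].std())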
[0075] The analyzing engine (216) may train an artificial intelligence (AI)/machine learning (ML) model based on the collected data, predict the future bandwidth utilization, generate proactive recommendations, and notify the network operations team regarding the future bandwidth utilization and proactive recommendations for optimized bandwidth allocation and network resource planning.
[0076] The one or more processor(s) (202) are configured to retrieve and execute instructions stored in the memory (204). Under the instructions, the data collection engine (212) systematically collects one or more call data records (CDRs) (also referred to as data) from one or more network sources to generate collected data. For example, the CDR encompasses information such as call start times, durations, participants' identifiers, and other pertinent metadata related to telecommunications transactions. In one embodiment, the CDRs collected by the data collection engine (212) encompass a wide range of call-related information. The CDRs also often contain network element identifiers like cell tower IDs and geographic location data. For instance, a typical CDR entry may be as follows: "CallID: 12345, OriginatingNumber: +1234567890, DestinationNumber: +0987654321, StartTime: 2024-07-04 10:15:30, EndTime: 2024-07-04 10:20:45, Duration: 315 seconds, CellTowerID: CTW001". In addition to CDRs, the data collection engine (212) gathers comprehensive bandwidth usage data. This data provides insights into network performance and user behavior, including metrics such as data transfer rates (measured in Mbps or Gbps), total data volume transferred (in MB or GB), and network utilization percentages. The collected data also captures peak and off-peak usage periods, application-specific data consumption, and information about device types and operating systems. A representative bandwidth usage data entry may be as follows: "DeviceID: D789, Timestamp: 2024-07-04 10:15:00, DataRate: 15.6 Mbps, DataVolume: 117 MB, Application: VideoStreaming, NetworkType: 5G".
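Purely by way of illustration, the exemplary CDR string above may be parsed into structured fields as in the following Python sketch, which uses only the standard library:

# Illustrative sketch only: parses the exemplary CDR string shown above
# into structured fields for downstream processing.
from datetime import datetime

raw_cdr = ("CallID: 12345, OriginatingNumber: +1234567890, "
           "DestinationNumber: +0987654321, StartTime: 2024-07-04 10:15:30, "
           "EndTime: 2024-07-04 10:20:45, Duration: 315 seconds, "
           "CellTowerID: CTW001")

record = dict(field.split(": ", 1) for field in raw_cdr.split(", "))

start = datetime.strptime(record["StartTime"], "%Y-%m-%d %H:%M:%S")
end = datetime.strptime(record["EndTime"], "%Y-%m-%d %H:%M:%S")
assert (end - start).seconds == 315   # matches the recorded Duration field

print(record["CallID"], record["CellTowerID"], (end - start).seconds, "s")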
[0077] Additionally, the data collection engine (212) captures bandwidth usage data, including metrics on data volume transmitted and network traffic patterns, essential for assessing network performance and optimizing resource allocation. By employing a systematic data collection process, the data collection engine (212) is configured to enhance network management capabilities, facilitate informed decision-making, and ensure efficient operation and service delivery within the network infrastructure. In an embodiment, the data collection engine (212) is configured to collect and aggregate CDRs and bandwidth usage data from the one or more network sources including base stations, routers, switches, and gateways. In an aspect, the data collection engine (212) is configured to receive the data directly from the user equipment, a plurality of network modules, or any other source (third-party source). In an example, the records may also be stored in cloud-based services, either provided by the network operator or third-party service providers. These could include storage services, databases, and content delivery networks. In another example, the user records may be received from subscriber data management (SDM) systems. The SDM systems manage subscriber data
across different generations of networks and may integrate with 5G core network functions to ensure seamless service continuity.
[0078] The computation engine (214) is configured to generate normalized and pre-processed data from the collected data. In an example, the collected data may include the one or more call data records (CDRs) collected from the one or more network sources including base stations, routers, switches, and gateways. In an embodiment, the computation engine (214) is configured to perform data cleaning, normalization, and feature engineering to prepare the normalized and pre-processed data for the ML engine (216). In an example, the computation engine (214) is configured to refine the raw data (CDR data) obtained from the data collection engine (212) into a standardized and optimized format to generate the normalized and pre-processed data. Depending on the data type and context, the computation engine (214) is configured to employ techniques such as scaling, standardization, or specific domain-related normalization methods. Normalization typically includes standardizing data formats, units of measurement, and structures to ensure consistency and compatibility across different datasets and systems.
[0079] For generating the normalized data, the computation engine (214) is configured to clean the raw data to remove inconsistencies, errors, duplicates, or missing values. This ensures that the data is accurate and reliable for further processing. Additionally, pre-processing tasks may encompass cleaning the data to eliminate errors, duplicates, or irrelevant entries and transforming the data to enhance computational efficiency and analytical accuracy. By generating normalized and pre-processed data, the computation engine (214) prepares the information for subsequent stages such as statistical analysis, machine learning models, or database storage. The computation engine (214) is configured to extract and compute one or more relevant features from the normalized and pre-processed data.
[0080] The one or more processor(s) (202) are configured to facilitate the ML engine (216) configured to receive the one or more relevant extracted features and the collected data. In an embodiment, the one or more relevant features include at least one of peak usage hours, high-traffic locations, popular applications and services, and user demographics.
[0081] The ML engine (216) is configured to capture one or more call patterns and user characteristics based on the one or more relevant features and collected data to predict a future bandwidth utilization. In one embodiment, the trained ML model is used to forecast bandwidth consumption across various dimensions of the network. To forecast bandwidth consumption, the ML engine projects usage for different network segments (such as access, backhaul, and core networks), geographical locations (urban centers, suburban areas, and rural regions), and time periods (hourly, daily, weekly, and seasonal patterns). The ML model analyzes historical data patterns, current usage trends, and external factors like scheduled events or anticipated technological advancements to generate these multi-faceted predictions.
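One possible, non-limiting realization of such forecasting is sketched below in Python using scikit-learn; the synthetic hourly usage series and lagged-feature construction are illustrative stand-ins for the trained ML model described above:

# Illustrative sketch only: forecasts next-hour bandwidth from hour-of-day
# and the three previous hourly readings (lag features).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                  # 60 days of hourly samples
usage = (50 + 30 * np.sin(2 * np.pi * (hours % 24) / 24)
         + rng.normal(0, 3, hours.size))    # synthetic daily usage cycle, Mbps

# Features: hour-of-day plus the previous three hourly readings.
X = np.column_stack([hours[3:] % 24, usage[2:-1], usage[1:-2], usage[:-3]])
y = usage[3:]

model = GradientBoostingRegressor(random_state=0).fit(X[:-24], y[:-24])
pred = model.predict(X[-24:])               # held-out final day
print("forecast next 24h, Mbps:", pred.round(1)[:6], "...")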
[0082] The one or more processor(s) (202) are configured to generate one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted future bandwidth utilization. The generation of proactive recommendations relies on sophisticated optimization algorithms to suggest optimal bandwidth allocation strategies and network configurations. These algorithms include linear programming for resource allocation, genetic algorithms for network topology optimization, and reinforcement learning for dynamic bandwidth management. The optimization process considers multiple objectives, such as maximizing network throughput, minimizing latency, and balancing load across network elements.
[0083] Based on the predicted utilization, the optimization algorithms
generate recommendations such as dynamic bandwidth allocation plans, suggesting how to redistribute available bandwidth across different services or user groups in
real-time. They may also propose network reconfiguration strategies, including optimal routing paths, frequency channel assignments, or antenna tilt adjustments. Additionally, the optimization algorithms may recommend infrastructure upgrades, pinpointing where and when new capacity should be added to the network. These data-driven, optimized recommendations enable network operators to proactively manage their resources, ensuring optimal performance and user experience even as demand fluctuates. The optimization algorithms may include linear programming for resource allocation across network segments, genetic algorithms for optimizing network topology and routing, and reinforcement learning for dynamic bandwidth management. For instance, a genetic algorithm might be used to evolve optimal configurations for cell tower parameters, while reinforcement learning could be applied to dynamically adjust bandwidth allocation in real-time based on changing network conditions.
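As a simplified, non-limiting illustration of the linear-programming approach named above, the following Python sketch allocates a fixed capacity budget across three hypothetical segments using scipy.optimize.linprog; the demand figures are placeholders:

# Illustrative sketch only: linear-programming allocation of a fixed
# bandwidth budget across three segments, weighted by predicted demand.
from scipy.optimize import linprog

predicted_demand = [40.0, 25.0, 60.0]   # Mbps, e.g. from the forecasting step
total_capacity = 100.0                  # Mbps available to allocate

# Maximize demand-weighted allocation (linprog minimizes, so negate the
# weights), subject to: total allocation <= capacity, each allocation
# bounded by its segment's predicted demand.
result = linprog(
    c=[-d for d in predicted_demand],
    A_ub=[[1.0, 1.0, 1.0]],
    b_ub=[total_capacity],
    bounds=[(0.0, d) for d in predicted_demand],
)
print("allocation, Mbps:", [round(x, 1) for x in result.x])

Bounding each allocation by its predicted demand prevents any segment from receiving more bandwidth than it is forecast to use.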
[0084] In an embodiment, the ML engine (216) is configured to train an ML model based on the pre-processed data and the computed features. The ML model is trained to learn call characteristics and bandwidth usage patterns of the network. In an embodiment, the ML model is trained using historical CDRs, bandwidth usage patterns, network performance metrics, and other relevant data to identify patterns and anomalies that indicate potential bandwidth issues and to learn the relationships between various network parameters and bandwidth consumption.
[0085] In an embodiment, the system further includes a notification engine (218) configured to generate visualizations and reports of the predicted future bandwidth utilization and one or more generated recommendations. The notification engine (218) is configured to notify a network operations team regarding the predicted future bandwidth utilization and the proactive recommendations. In an embodiment, the ML engine (216) and notification engine (218) collaborate to generate detailed reports, update real-time dashboards, and send timely alerts through various channels such as email, SMS, and dedicated apps. Reports and dashboards include visualizations of predicted bandwidth utilization, comparisons with historical data, and key performance indicators.
[0086] The ML engine (216) may provide diverse, data-driven recommendations tailored to the network's specific needs. These may include capacity expansion suggestions, traffic management strategies, network reconfiguration advice, predictive maintenance alerts, and user migration plans. For instance, the system might recommend "Increase backhaul capacity by 25% in the downtown area within 30 days" or "Implement traffic shaping for non-essential applications during peak hours (2 PM - 6 PM)." These recommendations may be generated using a combination of rule-based systems and machine learning models, considering historical data, current conditions, and predicted future states. The ML engine (216) may continuously refine its recommendations based on the outcomes of implemented suggestions, creating an improvement feedback loop. This comprehensive approach empowers the network operations team to make informed decisions about capacity planning, resource allocation, and infrastructure upgrades, ultimately optimizing network performance and user experience.
[0087] The notification process to the network operations team is comprehensive and leverages multiple communication channels. The ML engine (216) may generate detailed reports that highlight predicted bandwidth utilization through data visualizations, including graphs and charts that compare forecasted usage with historical data. These reports are automatically compiled and distributed via email or secure file transfer protocols. Simultaneously, the ML engine (216) may update real-time dashboards accessible through web interfaces or dedicated applications. These dashboards present key performance indicators (KPIs) related to current and predicted bandwidth utilization, allowing team members to monitor network health at a glance and drill down into specific areas of interest.
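A minimal sketch of such threshold-based alerting is shown below in Python; the site names, capacities, and delivery channel are hypothetical placeholders for the channels described above:

# Illustrative sketch only: raises operations-team alerts when predicted
# utilization crosses a threshold; delivery channels are placeholders.
THRESHOLD = 0.85   # alert above 85% of capacity

predictions = {    # hypothetical per-site (predicted, capacity) in Mbps
    "downtown-cell-17": (920.0, 1000.0),
    "suburb-cell-04":   (310.0, 1000.0),
}

for site, (predicted, capacity) in predictions.items():
    utilization = predicted / capacity
    if utilization > THRESHOLD:
        alert = (f"ALERT {site}: predicted utilization "
                 f"{utilization:.0%} exceeds {THRESHOLD:.0%} threshold")
        print(alert)   # stand-in for email/SMS/dashboard delivery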
[0088] In addition to reports and dashboards, the system provides a range
of actionable recommendations to optimize network performance. These recommendations are derived from the analysis of collected data and predictive models. They may include capacity expansion suggestions, such as identifying areas that require infrastructure upgrades to meet projected demand.
[0089] Traffic management strategies are also proposed, which might
involve implementing Quality of Service (QoS) policies during peak hours or
optimizing data routing. The system may recommend network reconfigurations,
such as load balancing adjustments or spectrum reallocation, to improve overall
efficiency. Predictive maintenance alerts are generated to address potential
hardware issues before they impact service quality. Additionally, user migration
plans are suggested to optimize the distribution of users across different network
technologies or frequency bands. These diverse recommendations provide the
network operations team with actionable insights for effective capacity planning,
resource allocation, and infrastructure upgrades.
[0090] In an embodiment, the data collection engine (212), the computation
engine (214), and the ML engine (216) are deployed on a cloud-based platform and
accessible via a web-based interface, an Application Programming Interface (API),
and a mobile application, enabling seamless integration with existing network
management systems and tools used by the network operations team.
[0091] Although FIG. 2 shows exemplary components of the system (108),
in other embodiments, the system (108) may include fewer components, different
components, differently arranged components, or additional functional components
than depicted in FIG. 2. Additionally, or alternatively, one or more components of
the system (108) may perform functions described as being performed by one or
more other components of the system (108).
[0092] FIG. 3 illustrates an exemplary architecture (300) of the system
(108), in accordance with an embodiment of the present disclosure.
[0093] As illustrated in FIG. 3, in an embodiment, the system (108) may
include an ingestion layer (302), a compute layer (308), and an AI/ML layer (314).
[0094] In an embodiment, the data collection engine (212) may be part of the ingestion layer (302) that receives the CDRs and bandwidth usage data from various network sources (as shown by 304). The ingestion layer (302) is responsible for data collection and initial processing, handling the high-volume, real-time data streams from various network sources. The computation engine (214) may be part of the compute layer (308) that performs data cleaning, normalization, and feature engineering to prepare the data for ML modelling. The compute layer (308) performs complex data transformations, including data cleaning, normalization, and advanced feature engineering, preparing the data for machine learning tasks. The ML engine (216) may be part of an AI/ML layer (314) that builds and trains predictive models/ML models using the prepared data to forecast future bandwidth needs. The AI/ML layer (314) is where the core machine learning operations occur, including model training, prediction, and continuous learning from new data. This layered architecture ensures efficient data flow, scalability, and separation of concerns in the system (108).
[0095] The ingestion layer encompasses the systematic retrieval of data from diverse data sources, ranging from databases and APIs to streaming platforms and IoT devices. Following collection, the data undergoes transformation, where it is cleansed, standardized, and enriched to ensure consistency and compatibility with downstream processes. Subsequently, the transformed data is loaded into designated data systems, such as data warehouses or lakes, facilitating accessibility and utilization for analytical insights and decision-making. The ingestion layer is responsible for collecting, ingesting, and initially processing raw data from the various sources before it is further transformed, stored, or analyzed. In an aspect, the ingestion layer acts as an entry point for the data from the data sources into the system (108). In an aspect, the ingestion layer facilitates a seamless and efficient flow of data from the data sources to downstream processing pipelines. The ingestion layer performs various operations such as data collection, data ingestion, data validation, and data routing.
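By way of illustration only, the data-validation and routing operations of the ingestion layer may be sketched in Python as follows; the required fields and routing decisions are hypothetical:

# Illustrative sketch only: schema validation and routing at the ingestion
# layer; valid records continue downstream, invalid ones are quarantined.
REQUIRED_FIELDS = {"CallID", "StartTime", "EndTime", "CellTowerID"}

def ingest(record: dict) -> str:
    """Validate an incoming CDR-like record and decide its route."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return f"quarantine (missing: {sorted(missing)})"
    return "forward to compute layer"

print(ingest({"CallID": "12345", "StartTime": "2024-07-04 10:15:30",
              "EndTime": "2024-07-04 10:20:45", "CellTowerID": "CTW001"}))
print(ingest({"CallID": "12346"}))   # missing fields -> quarantined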
[0096] The data collection engine (212) is further configured to aggregate the received data (as shown by 306). In an embodiment, the data collection engine (212) may collect CDRs and bandwidth usage data for a predefined time period. The computation engine (214) may normalize and pre-process the collected data for feature extraction and further analysis. The computation engine (214) may extract relevant features from the normalized and pre-processed data to capture call patterns and user characteristics. The ML engine (216) may be trained using the pre-processed data and the extracted features to understand the call characteristics and the bandwidth usage patterns of the network (106). The trained ML model may predict the future bandwidth utilization, generate proactive recommendations, and notify the network operations team using a reporting/notification engine (218) for optimized bandwidth allocation and network resource planning.
[0097] In one embodiment, the data collection engine (212) is configured to collect the CDRs and bandwidth usage data for a predefined time period. CDRs are detailed records of telephone calls that are generated by telephone exchanges or other telecommunications equipment. CDRs typically contain various attributes of a call, such as the originating and destination phone numbers, the duration of the call, the time the call was made, and other relevant information. Bandwidth usage data refers to information about the amount of data transmitted over a network connection during a specific time interval. In an example, the specific time interval may be one hour or one day. This data may be collected from various network elements, such as wireless communication networks, 3G/4G/5G/6G networks, routers, switches, and gateways, and may include metrics like data transfer rates, packet counts, and network utilization percentages.
[0098] The computation engine (214) may be part of the compute layer
(308) that performs data cleaning, normalization, and feature engineering to prepare the data for ML modelling.
[0099] In an embodiment, the collected CDRs and bandwidth usage data may be normalized and pre-processed by a computation engine (214) for feature extraction and further analysis (shown by 310). Data normalization is the process of organizing data in a database to reduce data redundancy and improve data integrity. Pre-processing refers to the transformations applied to data before feeding it to a machine learning algorithm. Pre-processing steps may include data cleaning, normalization, transformation, feature extraction and selection, etc. The goal of data pre-processing is to clean and transform raw data into a suitable input format for machine learning models.
[00100] In one embodiment, the computation engine (214) may extract and
5 compute relevant features from the normalized and pre-processed data to capture
call patterns and user characteristics (shown by 312). Feature extraction is the process of transforming raw data into numerical features that can be processed while preserving the information in the original data set. The features extracted may include peak usage hours, high-traffic locations, popular applications and services,
user demographics and behaviour patterns, network performance indicators, and
other relevant metrics that can help in predicting bandwidth utilization. The computation engine (214) may use the extracted features as input variables for the ML model to predict bandwidth utilization. By considering a wide range of factors that influence bandwidth consumption, the ML model may provide comprehensive
and reliable forecasts.
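As a hedged sketch of such feature extraction, the fragment below derives a peak-usage-hour and a high-traffic flag per cell from pre-processed bandwidth samples; the column names and the 90th-percentile threshold are assumptions for illustration.

```python
# Hypothetical feature computation from pre-processed bandwidth samples.
import pandas as pd

def extract_features(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["hour"] = df["interval_start"].dt.hour        # assumes datetime dtype
    hourly = (df.groupby(["cell_id", "hour"])["bytes_transferred"]
                .sum().reset_index())
    # hour with the largest traffic per cell -> peak usage hour
    peak = hourly.loc[hourly.groupby("cell_id")["bytes_transferred"].idxmax()]
    peak = peak.rename(columns={"hour": "peak_usage_hour"})[
        ["cell_id", "peak_usage_hour"]]
    totals = (df.groupby("cell_id")["bytes_transferred"]
                .sum().rename("total_bytes").reset_index())
    features = peak.merge(totals, on="cell_id")
    # flag the top decile of cells as high-traffic locations
    features["high_traffic"] = (features["total_bytes"]
                                > features["total_bytes"].quantile(0.9))
    return features
```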
[00101] The analyzing engine (216) may be part of the AI/ML layer (314)
that builds and trains predictive models/ML models using the prepared data to
[00102] In an aspect, the analyzing engine (216) is configured to perform model selection (shown by 316). The analyzing engine (216) evaluates and chooses the most
effective ML algorithm. In an aspect, the model selection involves exploring a
variety of models, ranging from simple to complex, and assessing their performance
using appropriate evaluation metrics. By leveraging techniques such as cross-
validation and hyperparameter tuning, the analyzing engine aims to identify the
model that not only achieves high accuracy and reliability in predicting outcomes
like future bandwidth utilization based on historical data and extracted features, but also demonstrates robustness and generalizability across different datasets and scenarios. This process ensures that the selected model is optimized to provide actionable insights, enhance decision-making processes, and support efficient
resource allocation within the operational framework. Furthermore, the analyzing
engine (216) is configured to train an ML model based on the pre-processed data
and computed features (as shown by 318). Machine learning is a branch of artificial
intelligence (AI) focused on building applications that learn from data and improve
their accuracy over time without being explicitly programmed to do so. The ML model may
be trained to learn call characteristics and bandwidth usage patterns of the network using various supervised and unsupervised learning algorithms. The training process may involve feeding the pre-processed data and extracted features into the ML model and iteratively adjusting the model parameters to minimize the
difference between the predicted and actual outputs. The ML model may be trained
using historical CDRs, bandwidth usage patterns, network performance metrics, and other relevant data to identify patterns and anomalies that indicate potential bandwidth issues. By learning the relationships between various network parameters and bandwidth consumption, the ML model may provide more accurate
predictions and recommendations for optimizing network performance. In an
aspect, the ML model identifies patterns and anomalies indicating potential bandwidth issues through various techniques. These include time series analysis to detect unusual spikes or drops in bandwidth usage, clustering algorithms to group similar usage patterns and identify outliers, and anomaly detection algorithms that
learn the 'normal' network behaviour and flag deviations. For example, the ML
model might identify a sudden increase in bandwidth consumption in a specific area as an anomaly, potentially indicating a local event or network issue.
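A minimal sketch of the model-selection step follows, assuming scikit-learn regressors, a feature matrix X, and a target y holding next-interval bandwidth; the candidate models, grids, and scoring metric are illustrative choices, not prescribed by the disclosure.

```python
# Cross-validated comparison of candidate models with hyperparameter tuning.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import GridSearchCV

def select_model(X, y):
    candidates = {
        "random_forest": (RandomForestRegressor(random_state=0),
                          {"n_estimators": [100, 300], "max_depth": [None, 10]}),
        "gbrt": (GradientBoostingRegressor(random_state=0),
                 {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}),
    }
    best_name, best_model, best_score = None, None, float("-inf")
    for name, (estimator, grid) in candidates.items():
        search = GridSearchCV(estimator, grid, cv=5,
                              scoring="neg_mean_absolute_error")
        search.fit(X, y)
        if search.best_score_ > best_score:
            best_name = name
            best_model = search.best_estimator_
            best_score = search.best_score_
    return best_name, best_model  # most effective algorithm and tuned model
```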
[00103] Once trained, the ML model may be used to predict future
bandwidth utilization (as shown by 320) and generate proactive recommendations
for optimized bandwidth allocation and network resource planning. Bandwidth
utilization refers to the amount of data transmitted over a network connection compared to the maximum amount that the connection can handle. By analyzing historical bandwidth usage patterns and identifying trends and anomalies, the ML model may forecast future bandwidth requirements and suggest actions to optimize
network performance and user experience.
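Using the definition above, utilization can be computed directly; the sketch below, with an assumed link capacity in bits per second, shows the arithmetic.

```python
# Bandwidth utilization: data transmitted versus the link's maximum capacity.
def utilization_pct(bytes_transferred: float, link_capacity_bps: float,
                    interval_seconds: float) -> float:
    max_bytes = link_capacity_bps * interval_seconds / 8  # bits -> bytes
    return 100.0 * bytes_transferred / max_bytes

# Example: 90 GB transferred in one hour on a 1 Gbps link
# utilization_pct(90e9, 1e9, 3600) -> 20.0 (percent)
```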
[00104] In an aspect, the system is configured to determine whether additional bandwidth is required (as shown by 320). Using the analyzing
engine, the system accurately assesses the incremental bandwidth needed to meet
anticipated demands or mitigate potential network congestion. To determine the
requirement of the additional bandwidth, the system is configured to apply data
analytics and machine learning techniques to analyze historical usage patterns, current network conditions, and predicted future requirements based on user behaviour and application demands. By systematically evaluating these factors, the system aims to proactively identify and quantify the additional bandwidth necessary
to maintain optimal performance levels and user satisfaction. If additional
bandwidth is required, the system reports to the network operations team (as shown by 322). In one embodiment, the analyzing engine (216) may notify the network operations team of the predicted future bandwidth utilization and proactive recommendations through a notification engine (218). The notification engine (218)
may generate visualizations, reports, alerts, and notifications related to predicted
bandwidth utilization and recommendations, and send them to the network operations team through various communication channels, including email, SMS, and dashboard interfaces. These notifications may enable the network operations team to take timely actions to prevent network congestion, allocate resources
efficiently, and ensure optimal network performance. The predicted future
bandwidth utilization may enable the network operations team to proactively allocate and provision bandwidth resources, while the proactive recommendations may provide actionable insights for capacity planning and infrastructure upgrades. By anticipating future bandwidth requirements and taking proactive measures, the
system (108) may help prevent network congestion, improve network reliability,
reduce latency and downtime, and enhance overall user experience.
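A hedged sketch of this determine-and-report decision follows; the 80% headroom threshold and the notify callable stand in for the system's actual policy and notification engine (218) and are assumptions for illustration.

```python
# Decide whether additional bandwidth is required and report if so.
HEADROOM_THRESHOLD_PCT = 80.0   # assumed policy threshold

def check_and_report(predicted_peak_pct: float, segment: str, notify) -> bool:
    if predicted_peak_pct <= HEADROOM_THRESHOLD_PCT:
        return False            # no additional bandwidth required
    shortfall = predicted_peak_pct - HEADROOM_THRESHOLD_PCT
    notify(f"Segment {segment}: predicted peak utilization "
           f"{predicted_peak_pct:.1f}% exceeds the {HEADROOM_THRESHOLD_PCT}% "
           f"threshold; provision roughly {shortfall:.1f}% extra capacity.")
    return True                 # reported to the network operations team
```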
[00105] In an embodiment, the data collection engine (212), the computation
engine (214), and the analyzing engine (216) may be deployed on a cloud-based
platform, making these modules accessible via web-based interfaces, APIs, and
mobile applications. This deployment approach may enable seamless integration
with existing network management systems and tools used by the network operations team, allowing for easy adoption and utilization of the bandwidth optimization capabilities provided by the system (108).
[00106] The proactive bandwidth optimization facilitated by the system
(108) may lead to several benefits for the wireless network and its users. By
anticipating and addressing potential bandwidth issues before they impact network
performance, the system (108) may improve network reliability, reduce instances
of network congestion, and minimize service disruptions. This proactive approach
may also help in reducing latency and downtime, as the network operations team
10 can take corrective actions before problems escalate.
[00107] Moreover, the system (108) may enhance overall user experience by
ensuring that adequate bandwidth is available to meet user demands. By optimizing
bandwidth allocation based on predicted usage patterns and network conditions, the
system (108) may provide users with consistent and high-quality service, even
during peak usage periods. This may lead to increased user satisfaction and loyalty,
as users may appreciate the seamless and uninterrupted connectivity provided by the wireless network.
[00108] The system (108) may also enable more efficient utilization of
network resources by allowing the network operations team to allocate bandwidth
dynamically based on real-time and forecasted demands. This may help in reducing
overprovisioning of resources, which can be costly and wasteful, while also avoiding underprovisioning, which can lead to network congestion and poor user experience. By striking a balance between capacity and demand, the system (108) may help in optimizing network performance and cost-effectiveness.
[00109] Another potential benefit of the system (108) is the ability to provide
actionable insights for capacity planning and infrastructure upgrades. The proactive recommendations generated by the analyzing engine (216) may help the network operations team make informed decisions about when and where to invest in network expansions and upgrades. By aligning network investments with
anticipated bandwidth requirements, the system (108) may help in optimizing capital expenditure and ensuring that the network infrastructure keeps pace with evolving user needs.
[00110] The system (108) may also facilitate a more data-driven approach to
network management by leveraging the power of machine learning and big data
analytics. By continuously collecting and analyzing large volumes of network data,
the system (108) may enable the network operations team to gain deeper insights
into network performance, user behaviour, and bandwidth consumption patterns.
These insights may help in identifying trends, anomalies, and opportunities for
optimization, leading to more effective and proactive network management
strategies.
[00111] In addition to the benefits for the network operations team, the
system (108) may also provide value to other stakeholders in the wireless network
ecosystem. For example, the insights generated by the system (108) may be useful
for network planning and design teams in making informed decisions about
network architecture, capacity provisioning, and technology investments. The system (108) may also provide valuable data for marketing and customer service teams to better understand user preferences and behaviour, enabling them to develop targeted plans and offerings.
[00112] The machine learning capabilities of the system (108) may also
enable it to continuously learn and adapt to changing network conditions and user demands. As new data is collected and analyzed, the ML model may refine its predictions and recommendations, becoming more accurate and effective over time. This self-learning aspect of the system (108) may make it more resilient to evolving
network dynamics and help in maintaining optimal performance in the face of
changing circumstances.
[00113] Overall, the system (108) for bandwidth optimization in a wireless
network may provide a comprehensive and proactive solution for managing and optimizing network resources. By leveraging the power of machine learning and
data analytics, the system (108) may enable the network operations team to
anticipate and address potential bandwidth issues before they impact network
performance and user experience. This proactive approach may lead to improved
network reliability, reduced latency and downtime, enhanced user satisfaction, and
more efficient utilization of network resources. As the demand for high-quality and
reliable wireless connectivity continues to grow, the system (108) may play a crucial role in ensuring that wireless networks can meet the evolving needs of users while maintaining optimal performance and cost-effectiveness.
[00114] FIG. 4 illustrates an exemplary flow diagram of a method (400) for
bandwidth optimization in the wireless network, in accordance with an embodiment
of the present disclosure.
[00115] In an embodiment, the method (400) may be implemented using a
distributed architecture with multiple data processing and storage nodes. This distributed architecture may enable scalable and efficient processing of large
volumes of network data in real-time. By distributing the data processing and
storage tasks across multiple nodes, the method (400) may be able to handle the growing volume and velocity of network data, ensuring timely analysis and decision-making. The distributed architecture may also provide fault tolerance and high availability, minimizing the impact of hardware or software failures on the
overall system performance.
[00116] At step 402, the data collection engine (212) is configured to collect
call data records (CDRs) and bandwidth usage data for a predefined time period.
The predefined time period refers to a specific duration of time that is
predetermined or predefined based on the requirements, context, or operational
needs of a system, process, or activity. In an example, the predefined time period
may be daily, weekly, or monthly. CDRs are detailed records of telephone calls that are generated by telephone exchanges or other telecommunications equipment. CDRs typically contain various attributes of a call, such as the originating and destination phone numbers, the duration of the call, the time the call was made, and
other relevant information. Bandwidth usage data refers to information about the
amount of data transmitted over a network connection during the specific time
interval. This data may be collected from various network elements, such as base
stations, routers, switches, and gateways, and may include metrics like data transfer
rates, packet counts, and network utilization percentages.
[00117] In an embodiment, the collection of CDRs and bandwidth usage data
involves a comprehensive data gathering process from various network elements.
This process may utilize specialized data collection agents deployed across the
network infrastructure. These agents interface with base stations to collect radio
access network data, routers and switches to gather core network statistics, and
gateways to capture inter-network traffic information. The data is collected using standardized protocols such as SNMP (Simple Network Management Protocol) for network device monitoring, IPDR (Internet Protocol Detail Record) for service usage data, and proprietary APIs provided by equipment vendors.
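As a non-limiting sketch of such an agent, the loop below polls network elements and hands samples to the ingestion layer; poll_element() stands in for a real SNMP/IPDR/vendor-API client and is assumed rather than defined here.

```python
# Minimal polling loop for a data collection agent.
import time

def collection_agent(elements, poll_element, sink, period_seconds=300):
    """Poll each element every `period_seconds` and forward samples to the
    ingestion layer (`sink`). `poll_element` abstracts SNMP/IPDR/vendor APIs."""
    while True:
        for element in elements:
            sample = poll_element(element)   # e.g., interface octet counters
            sink.write(sample)               # hand off for validation/routing
        time.sleep(period_seconds)
```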
[00118] At step 404, the system is configured to normalize and pre-process
the collected data using the computation engine. Step 404 involves transforming raw data into a standardized and structured format that enhances its consistency, compatibility, and utility for subsequent analysis or application within the system. Data normalization is the process of organizing data in a database to reduce data
redundancy and improve data integrity. Pre-processing refers to the transformations
applied to data before feeding it to a machine learning algorithm. Pre-processing steps may include data cleaning, normalization, transformation, feature extraction and selection, etc. The goal of data pre-processing is to clean and transform raw data into a suitable input format for machine learning models.
[00119] At step 406, the computation engine (214) is configured to extract and compute one or more relevant features from the normalized and pre-processed data to capture call patterns and user characteristics. The computation
engine identifies and selects features from the normalized and pre-processed data
that are pertinent to the objectives of the process. Feature extraction is the process
of transforming raw data into numerical features that can be processed while
preserving the information in the original data set. The features extracted may
include network performance metrics such as throughput, latency, jitter, and packet
loss, as well as user behaviour patterns such as data usage, application preferences,
and mobility patterns. These features may be used to build comprehensive profiles
of network usage and demand. In an aspect, the computation engine transforms or
enhances the extracted features through techniques like aggregation, dimensionality
reduction (e.g., principal component analysis), or feature engineering to derive new
attributes that enhance the predictive power or interpretability of the data.
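For the dimensionality-reduction technique named above, a brief sketch with scikit-learn's PCA is shown; retaining 95% of the variance is an illustrative setting.

```python
# Project a high-dimensional feature matrix onto its leading components.
from sklearn.decomposition import PCA

def reduce_features(X, variance_to_keep=0.95):
    pca = PCA(n_components=variance_to_keep)  # keep 95% of the variance
    X_reduced = pca.fit_transform(X)
    return X_reduced, pca                     # pca retains the projection
```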
[00120] In one embodiment, extracting and computing relevant features
involve applying a range of statistical analysis and data mining techniques. These techniques are implemented using big data processing frameworks such as Apache
Spark or Hadoop, enabling efficient processing of large-scale network data.
Statistical analysis methods include descriptive statistics to summarize data characteristics, inferential statistics to test hypotheses about network behaviour, and
DBSCAN) to group similar usage patterns, association rule mining to discover
relationships between different network events, and principal component analysis for dimensionality reduction of high-dimensional network data.
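By way of illustration of the clustering step, the sketch below groups per-cell usage profiles with K-means; treating each row of X as a daily profile of 24 hourly totals and using five clusters are assumptions for exposition.

```python
# Group similar usage patterns; outliers sit far from their cluster centre.
from sklearn.cluster import KMeans

def cluster_usage_profiles(X, n_clusters=5):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(X)          # cluster index per usage profile
    return labels, km.cluster_centers_  # centres summarize typical patterns
```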
[00121] The computation engine (214) identifies significant patterns and
correlations in the data through these analytical approaches. These may include
temporal patterns such as daily or weekly cycles in bandwidth consumption, spatial
patterns revealing high-usage hotspots or areas of network congestion, and user behavior patterns indicating preferred services or applications. The computation engine (214) also detects correlations between different network metrics, such as the relationship between user density and bandwidth demand or between specific
types of network events and subsequent changes in traffic patterns. These identified
patterns and correlations form the basis for feature engineering, where the most informative attributes are selected or created to serve as inputs for the machine learning models used in subsequent stages of the bandwidth optimization process.
[00122] At step 408, the system is configured to facilitate an analyzing
engine. The analyzing engine utilizes the one or more relevant extracted features
along with the collected data to capture call patterns and user characteristics. These captured insights are then used to predict future bandwidth utilization. The analyzing engine (216) is configured to train the ML model using the extracted features and pre-processed data. The ML model may learn the call characteristics
10 and bandwidth usage patterns of the network using various supervised and
unsupervised learning algorithms. Supervised learning algorithms learn from labelled data, where the input data is accompanied by the corresponding output labels. Unsupervised learning algorithms, on the other hand, learn from unlabelled data and attempt to discover hidden patterns or structures in the data. The choice of
learning algorithm may depend on the nature of the data and the specific
requirements of the bandwidth optimization task.
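A minimal supervised-training sketch for this step follows, assuming a gradient-boosted regressor and a chronological hold-out split; both choices are illustrative rather than mandated by the method (400).

```python
# Fit a regressor on historical features; validate on the most recent data.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def train_bandwidth_model(X, y):
    # shuffle=False keeps time order, so the test set is the most recent slice
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              shuffle=False)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    return model, mae               # error gauges generalization to new data
```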
[00123] In an embodiment, the analyzing engine (216) is further configured
for predicting future bandwidth utilization using the trained ML model. The trained
ML model may consider various factors such as historical bandwidth usage
patterns, network performance metrics, user behaviour patterns, and other relevant
features to forecast bandwidth consumption for different network segments, locations, and time periods. By analyzing these factors, the ML model may identify trends, patterns, and anomalies that can help in predicting future bandwidth requirements accurately.
[00124] At step 410, the analyzing engine (216) is further configured to
generate one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted bandwidth utilization. These recommendations may involve using optimization algorithms to suggest optimal bandwidth allocation strategies and network configurations that
can meet the predicted bandwidth demand while minimizing network congestion
and maximizing network performance. The optimization algorithms may consider
various constraints such as available network resources, quality of service
requirements, and cost considerations to generate feasible and effective
recommendations.
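One simple instance of such an optimization, framed as a linear program with scipy, is sketched below: allocate capacity across segments to cover predicted demand at minimum cost without exceeding total capacity. The cost model and constraints are assumptions for illustration.

```python
# Cost-minimal bandwidth allocation subject to demand and capacity constraints.
import numpy as np
from scipy.optimize import linprog

def allocate(demand, cost_per_unit, total_capacity):
    n = len(demand)
    # minimize cost @ x  s.t.  x_i >= demand_i  and  sum(x) <= total_capacity
    A_ub = np.vstack([-np.eye(n), np.ones((1, n))])
    b_ub = np.concatenate([-np.asarray(demand, dtype=float),
                           [total_capacity]])
    res = linprog(c=cost_per_unit, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n)
    return res.x if res.success else None  # None if demand exceeds capacity
```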
[00125] It must be noted that the generation of proactive recommendations by the
ML engine (216) is a sophisticated process that leverages the insights gained from
the predictive models and historical data analysis. The ML engine employs a
combination of rule-based systems and advanced machine learning algorithms to
formulate these recommendations. Initially, it processes the predicted bandwidth
utilization data through a set of predefined rules that encapsulate industry best practices and expert knowledge. These rules help in identifying potential areas of concern, such as network segments approaching capacity limits or regions with rapidly growing demand.
[00126] Building upon this initial assessment, the ML engine (216) then utilizes
more complex algorithms to generate specific, actionable recommendations. For instance, the ML engine (216) may employ reinforcement learning techniques to simulate various network configuration scenarios and determine optimal settings for different network parameters. The ML engine (216) might use genetic
algorithms to evolve potential solutions for network topology optimization,
considering factors such as load balancing, redundancy, and cost-effectiveness. Additionally, the ML engine (216) leverages natural language processing capabilities to translate complex data patterns into human-readable recommendations, ensuring that the output is easily interpretable by network
operations teams.
[00127] The recommendations generated by the ML engine (216) are diverse and tailored to address different aspects of network optimization. These may include suggestions for dynamic bandwidth allocation strategies, proposing how to redistribute available bandwidth across services or user groups in real-time to
maximize efficiency. The ML engine (216) might recommend specific
infrastructure upgrades, pinpointing where and when new capacity should be added
to the network based on projected demand. The ML engine (216) also suggests
changes to traffic routing policies, identifies opportunities for implementing edge
computing solutions to reduce latency, or proposes strategies for
migrating users between different network technologies (e.g., from 3G to 6G) to
alleviate congestion. Each recommendation is accompanied by quantitative metrics
indicating its potential impact on network performance and user experience,
allowing network operators to prioritize and implement the most effective
solutions.
[00128] In an embodiment, the analyzing engine (216) is further configured for notifying the network operations team of the predicted future bandwidth utilization and the proactive recommendations. The proactive recommendations may include specific actions such as adding new network capacity, optimizing network
configurations, migrating users to different network technologies, and
implementing traffic shaping and prioritization policies. Adding new network capacity may involve deploying additional hardware resources such as base stations, routers, and switches to handle the increased bandwidth demand. Optimizing network configurations may involve fine-tuning various network
parameters such as frequency bands, modulation schemes, and power levels to
improve network efficiency and performance. Furthermore, migrating users to different network technologies may involve transitioning users from older, slower technologies to newer, faster ones such as 5G, 6G, or Wi-Fi 6 to improve bandwidth capacity and user experience. Implementing traffic shaping and prioritization
policies may involve applying rules and algorithms to prioritize critical traffic flows
and limit non-essential traffic during peak usage periods to ensure fair allocation of bandwidth resources. Specific examples of traffic shaping and prioritization policies include implementing Quality of Service (QoS) tiers that prioritize critical services during peak hours, applying rate limiting to high-bandwidth, non-essential
applications, and dynamically adjusting data packet routing based on real-time
network conditions. For instance, during a video streaming surge, the ML engine (216) might recommend temporarily reducing the bandwidth allocated to background data syncing to ensure smooth video playback for users.
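As a brief illustration of rate limiting for non-essential traffic, a token-bucket shaper is sketched below; the rate and burst parameters are assumptions, and a real deployment would typically enforce this in the network data plane rather than in application code.

```python
# Token-bucket rate limiter: admit traffic only while tokens remain.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # refill tokens in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                    # forward the packet
        return False                       # shape: queue or drop the packet
```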
[00129] Furthermore, the notification may involve generating reports,
dashboards, and alerts that highlight the key findings and actionable insights. The
reports may provide a clear and concise summary of the predicted bandwidth
utilization trends and patterns, along with visualizations such as graphs and charts
to facilitate easy interpretation. The dashboards may present real-time data on
network performance and bandwidth consumption, allowing the network
operations team to monitor the network closely. The alerts may be triggered when
certain predefined thresholds or anomalies are detected, prompting the team to take immediate corrective actions.
[00130] The method (400) may also involve the periodic execution of steps 402 to 410 to enable continuous monitoring and optimization of the network. By
repeatedly collecting, analyzing, and acting upon network data, the method (400)
may adapt to changing network conditions and user demands in real-time. This iterative process may provide a closed-loop feedback mechanism for network management and optimization, allowing the system to learn from its previous actions and improve its performance over time. This closed-loop feedback
mechanism works by continuously comparing the outcomes of implemented
recommendations against predicted results. For example, if the ML engine (216) recommends increasing bandwidth in a specific area, it monitors the actual impact on network performance and user experience. Any discrepancies between predicted and actual outcomes are fed back into the ML model, adjusting its parameters and improving future predictions. This iterative process ensures that the
recommendations become increasingly accurate and effective over time, adapting to the dynamic nature of network usage and conditions.
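The closed-loop mechanism described above can be sketched as a single feedback step, assuming a scikit-learn-style model and array-based history; the full-refit strategy shown is one simple way to "adjust parameters", not the only one.

```python
# One feedback iteration: score the last prediction, then refit on all data.
import numpy as np
from sklearn.metrics import mean_absolute_error

def feedback_step(model, X_hist, y_hist, X_new, y_observed):
    # discrepancy between predicted and actual outcomes
    mae = mean_absolute_error(y_observed, model.predict(X_new))
    # fold the observed outcomes back into the training history
    X_hist = np.vstack([X_hist, X_new])
    y_hist = np.concatenate([y_hist, y_observed])
    model.fit(X_hist, y_hist)              # parameters adjusted with new data
    return model, mae, X_hist, y_hist
```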
[00131] The continuous monitoring and optimization enabled by the method
(400) may help in identifying and resolving network issues proactively, before they
impact the end-users. By detecting anomalies and deviations from normal network behaviour early, the method (400) may enable the network operations team to take corrective actions in a timely manner, minimizing service disruptions and ensuring a seamless user experience.
[00132] The method (400) may also facilitate capacity planning and resource
allocation decisions by providing accurate and reliable predictions of future bandwidth requirements. By forecasting bandwidth consumption trends and patterns, the method (400) may help network operators to plan their network expansions and upgrades more effectively, ensuring that adequate capacity is available to meet the growing demand. This may result in more efficient utilization of network resources, reduced capital and operational expenditures, and improved return on investment.
[00133] Moreover, the method (400) may enable network operators to offer
differentiated services and pricing plans based on user behaviour and network usage patterns. By analyzing user data and preferences, the method (400) may help in identifying user segments with distinct bandwidth requirements and tailoring services accordingly. This may lead to more personalized and value-added services, enhancing user satisfaction and loyalty. The method (400) may also contribute to improved network security by detecting and mitigating potential security threats in real-time.
[00134] In an embodiment, by monitoring network traffic patterns and
identifying anomalous behaviour, the method (400) may help in detecting and preventing malicious activities such as distributed denial-of-service (DDoS) attacks, which can cause significant network congestion and service disruptions. The proactive recommendations generated by the method (400) may include security measures such as traffic filtering, rate limiting, and access control to protect the network and its users from security breaches.
[00135] In addition to improving network performance and security, the
method (400) may also have environmental benefits. By optimizing bandwidth
allocation and reducing network congestion, the method (400) may help reduce the energy consumption of network devices and data centres. This may lead to lower carbon emissions and contribute to the overall sustainability goals of the organization.
[00136] The method (400) may also enable network operators to gain
valuable insights into user behaviour and preferences, which can be used for marketing and customer relationship management purposes. By analyzing user data and network usage patterns, network operators may identify opportunities for cross-selling and up-selling services, as well as personalizing marketing campaigns and promotions. This may lead to increased revenue and customer loyalty, as well as a better understanding of customer needs and expectations.
[00137] Furthermore, the method (400) may provide a competitive
advantage to network operators by enabling them to offer superior network performance and user experience compared to their competitors. By proactively optimizing bandwidth allocation and minimizing network congestion, network operators may be able to differentiate themselves in the market and attract more customers. This may lead to increased market share, revenue growth, and profitability.
[00138] In conclusion, the method (400) for bandwidth optimization in a
wireless network presents a comprehensive and proactive approach to network management and optimization. By leveraging advanced technologies such as machine learning and big data analytics, the method (400) may enable network operators to predict future bandwidth requirements accurately, allocate network resources efficiently, and optimize network performance in real time. The proactive recommendations generated by the method (400) may help minimize network congestion, improve user experience, reduce costs, and enhance network security. The continuous monitoring and optimization enabled by the method (400) may provide a closed-loop feedback mechanism for network management, allowing the system to adapt to changing network conditions and user demands. Overall, the
method (400) may provide significant benefits to network operators, end-users, and the environment, contributing to the success and sustainability of the wireless network ecosystem.
[00139] In an embodiment, the user equipment (104) may be connected to a
system (108) for bandwidth optimization in a wireless network. The user equipment (104) is configured to collect call data records (CDRs) and bandwidth usage data over a predefined time period. CDRs contain detailed information about telephone calls, such as the source and destination phone numbers, call duration, and timestamp. After collecting the CDRs and bandwidth usage data, the user equipment (104) sends this information to the system (108). The system (108) then uses the received data to optimize bandwidth allocation in the wireless network, as described with reference to FIG. 3. In summary, the user equipment gathers relevant data and transmits the gathered data to the system (108), enabling the system (108) to analyze the data and make informed decisions to improve network performance and efficiency.
[00140] FIG. 5 illustrates an example computer system (500) in which or
with which the embodiments of the present disclosure may be implemented.
[00141] As shown in FIG. 5, the computer system (500) may include an
external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[00142] In an embodiment, the main memory (530) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[00143] In an embodiment, the bus (520) may communicatively couple the
processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[00144] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
a display, keyboard, and cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[00145] The present disclosure introduces significant technical
advancements aimed at overcoming traditional limitations. Bandwidth requirements are typically addressed reactively, often in response to user
complaints about network congestion related to various networks (4G, 5G, 6G and so on). By leveraging AI/ML techniques, the disclosure introduces a proactive approach to detecting and addressing bandwidth needs pre-emptively. This innovation enables the system to analyze historical data, network usage patterns, and real-time metrics to predict future bandwidth requirements accurately. The system is able to notify network management teams in advance, allowing them to take proactive measures such as capacity planning, infrastructure upgrades, and bandwidth provisioning. By anticipating potential network congestion and identifying areas requiring additional bandwidth ahead of time, the disclosure ensures a seamless user experience by minimizing disruptions and latency issues.
[00146] While considerable emphasis has been placed herein on the
preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[00147] The present disclosure provides key advantages of optimizing
bandwidth allocation in wireless networks by leveraging advanced technologies such as artificial intelligence (AI) and machine learning (ML).
[00148] The AI/ML-based approach employed by the present disclosure
enables the system to detect bandwidth requirements intelligently and notify the relevant team or users in advance. This proactive approach allows network operators to take timely actions to address potential bandwidth issues before they impact the user experience. By ensuring adequate bandwidth availability, the system and method can significantly enhance user satisfaction and loyalty.
[00149] Another significant benefit of the present disclosure is its ability to
facilitate proactive bandwidth provisioning and infrastructure upgrades. By predicting network congestion and identifying areas that may require additional bandwidth, the system and method enable network operators to make informed decisions about capacity planning and resource allocation. This proactive approach helps prevent network bottlenecks and ensure a seamless user experience, even during peak usage periods.
[00150] The present disclosure also distinguishes itself by its capability to
predict and proactively fulfil additional bandwidth and capacity requirements based on anticipated future demand. By leveraging AI/ML techniques to analyze historical data and identify usage patterns, the system and method can forecast future bandwidth needs accurately. This predictive capability allows network operators to stay ahead of the curve and make necessary provisions to meet the growing demand, ensuring that the network remains resilient and responsive to user needs.
[00151] Moreover, the AI/ML-based approach for bandwidth optimization
employed by the present disclosure has the potential to drive revenue growth for network operators. By ensuring optimal network performance and user experience, the system and method can help in attracting and retaining customers, leading to increased market share and profitability. Additionally, the insights gained from analyzing network data can be leveraged to develop targeted marketing strategies and personalized service offerings, further enhancing revenue generation opportunities.
WE CLAIM:
1. A system (108) for bandwidth optimization in a wireless network, the
system (108) comprising:
a memory (204);
one or more processor(s) (202) configured to fetch and execute instructions stored in the memory (204) to:
collect, by a data collection engine (212), one or more call data records (CDRs) and bandwidth usage data for a predefined time period to generate a collected data;
normalize and pre-process, by a computation engine (214), the collected data to generate a normalized and pre-processed data;
extract and compute, by the computation engine (214), one or more relevant features from the normalized and pre-processed data;
facilitate an analyzing engine (216) configured to receive the one or more relevant extracted features and the collected data and is further configured to capture one or more call patterns and user characteristics based on the one or more relevant features and collected data to predict a future bandwidth utilization; and
generate one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted future bandwidth utilization.
2. The system (108) of claim 1, wherein the analyzing engine (216) is
configured to train a ML model based on the pre-processed data and the
computed features, wherein the ML model is trained to learn call characteristics and bandwidth usage patterns of the network.
3. The system (108) of claim 2, wherein the ML model is trained using historical CDRs, bandwidth usage patterns, network performance metrics, and other relevant data to identify patterns and anomalies that indicate potential bandwidth issues and to learn the relationships between various network parameters and bandwidth consumption.
4. The system (108) of claim 1, wherein the data collection engine (212) is configured to collect and aggregate CDRs and bandwidth usage data from one or more network sources including base stations, routers, switches, and gateways.
5. The system (108) of claim 1, wherein the computation engine (214) is configured to perform data cleaning, normalization, and feature engineering to prepare the normalized and pre-processed data for the analyzing engine (216).
6. The system (108) of claim 1, further comprising a notification engine (218) configured to generate visualizations and reports of the predicted future bandwidth utilization and one or more generated recommendations.
7. The system (108) of claim 6, wherein the notification engine (218) is configured to notify a network operations team regarding the predicted future bandwidth utilization and the proactive recommendations.
8. The system (108) of claim 1, wherein the one or more relevant features include at least one of peak usage hours, high-traffic locations, popular applications and services, and user demographics.
9. The system (108) of claim 1, wherein the data collection engine (212), the computation engine (214), and the analyzing engine (216) are deployed on a cloud-based platform and accessible via a web-based interface, an Application Programming Interface (API), and a mobile application,
enabling seamless integration with existing network management systems and tools used by the network operations team.
10. A method (400) for bandwidth optimization in a wireless network, the
method comprising:
collecting (402), by a data collection engine (212), one or more call data records (CDRs) and bandwidth usage data for a predefined time period to generate a collected data;
normalizing and pre-processing (404), by a computation engine (214), the collected data to generate a normalized and pre-processed data;
extracting and computing (406), by the computation engine (214), one or more relevant features from the normalized and pre-processed data;
facilitating (408) an analyzing engine (216) for receiving the one or more relevant extracted features and the collected data and capturing one or more call patterns and user characteristics based on the one or more relevant features and collected data to predict a future bandwidth utilization; and
generating (410) one or more proactive recommendations for optimized bandwidth allocation and network resource planning based on the predicted future bandwidth utilization.
11. The method (400) of claim 10, further comprising training, by the analyzing engine (216), a ML model based on the pre-processed data and the computed features to learn call characteristics and bandwidth usage patterns of the network.
12. The method (400) of claim 10, further comprising notifying, by the analyzing engine (216), a network operations team regarding the predicted future bandwidth utilization and the proactive recommendations.
13. The method (400) of claim 10, wherein the step of collecting the CDRs and bandwidth usage data involves gathering data from one or more network sources including base stations, routers, switches, and gateways.
14. The method (400) of claim 10, wherein the step of extracting and computing the one or more relevant features involves applying one or more statistical analysis and data mining techniques to identify significant patterns and correlations in the pre-processed data.
15. The method (400) of claim 10, wherein the step of predicting the future bandwidth utilization involves using the trained ML model to forecast bandwidth consumption for different network segments, locations, and time periods.
16. The method (400) of claim 10, wherein the step of generating the one or more proactive recommendations involves using optimization algorithms to suggest optimal bandwidth allocation strategies and network configurations based on the predicted utilization.
17. The method (400) of claim 12, wherein the step of notifying the network operations team involves generating reports, dashboards, and alerts that highlight the predicted bandwidth utilization and one or more generated recommendations.
18. The method (400) of claim 10, wherein the one or more relevant features extracted in step (406) include at least one of peak usage hours, high-traffic locations, popular applications and services, and user demographics.
19. The method (400) of claim 10, wherein the one or more proactive recommendations include adding new network capacity, optimizing network configurations, migrating users to different network technologies, and implementing traffic shaping and prioritization policies, wherein the recommendations enable the network operator to optimize network performance and user experience.
20. A user equipment (104) communicatively coupled to a system (108) for supporting bandwidth optimization in a wireless network, wherein the user equipment is configured to:
collect call data records (CDRs) and bandwidth usage data for a predefined time period; and
transmit the CDRs and bandwidth usage data to the system (108), wherein the system (108) is configured for bandwidth optimization in the wireless network as claimed in claim 1.