
Bandwidth Management Method in O-RAN

Abstract: The present disclosure provides a bandwidth management method in an O-RAN (200) by a network entity (400). The bandwidth management method includes computing a fronthaul bandwidth of a fronthaul link. Further, the bandwidth management method includes determining utilization of the fronthaul link based on the computed fronthaul bandwidth and the capacity of the network entity (400). Further, the bandwidth management method includes implementing synchronized bandwidth management in response to determining that the fronthaul link is one of underutilized and overutilized. The network entity (400) can be, for example, but is not limited to, an Open Distributed Unit (O-DU) (114) and an Open Radio Unit (O-RU) (116), where the O-DU (114) and the O-RU (116) are connected with each other through one of a direct connection and a bridged connection. FIG. 5


Patent Information

Application #
Filing Date
30 September 2022
Publication Number
14/2024
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application

Applicants

STERLITE TECHNOLOGIES LIMITED
STERLITE TECHNOLOGIES LIMITED, IFFCO Tower, 3rd Floor, Plot No.3, Sector 29, Gurgaon 122002, Haryana, India

Inventors

1. Savnish Singhi
IFFCO Tower, 3rd Floor, Plot No.3, Sector 29 Gurgaon, Haryana - 122002
2. Narendra Gadgil
IFFCO Tower, 3rd Floor, Plot No.3, Sector 29 Gurgaon, Haryana - 122002
3. Nitesh Kumar
Capital Cyberscape, 16th Floor, Sector 59, Gurugram, Haryana – 122102
4. Nishayvithaa Subramanian
IFFCO Tower, 3rd Floor, Plot No.3, Sector 29 Gurgaon,

Specification

Description: TECHNICAL FIELD
The present disclosure relates to wireless communication and networks and more specifically relates to methods and a network entity for fronthaul bandwidth management in an Open Radio Access Network (O-RAN).

BACKGROUND
A conventional radio access network (RAN) is built using an integrated unit, which carries out the functions of the entire RAN. Application-specific hardware is used in the RAN; however, such hardware makes the RAN difficult to upgrade. With the help of network slicing, the conventional RAN constantly evolves to support its massive densification, which is intended to support increased bandwidth requirements. There is a growing need to reduce capital and operating costs related to the deployment of the RAN and to obtain scalable and easy-to-upgrade solutions.
The aforesaid requirements have led to a preferential shift towards Open Radio Access Networks (O-RAN). The O-RAN is a term used for industry-wide standards for RAN interfaces that support interoperation between vendors’ equipment and offer network flexibility at a lower cost. The main purpose of the O-RAN is to have an interoperability standard for RAN elements including non-proprietary white box hardware and software from different vendors.
The O-RAN has identified three distinct functions/elements: an Open Central Unit (O-CU), an Open Distributed Unit (O-DU) and an Open Radio Unit (O-RU). In the O-RAN, the O-DU is a logical node hosting the Radio Link Control (RLC), Medium Access Control (MAC) and High-Physical (High-PHY) layers, and the O-RU is a logical node hosting the Low-Physical (Low-PHY) layer and radio frequency (RF) processing, based on a lower layer functional split. The functional split is O-RAN 7.2x (100) as shown in FIG. 1. The functional splitting between the O-DU and the O-RU divides the physical layer (Layer 1) into the High-PHY, which resides in the O-DU, and the Low-PHY, which resides in the O-RU. As shown in FIG. 1, the O-RAN proposes the splitting of the physical layer (PHY) into the High-PHY and the Low-PHY. For option 7.2, in the uplink (UL), Cyclic Prefix (CP) removal, fast Fourier transform (FFT), digital beamforming (if applicable), and prefiltering (for the PRACH (Physical Random-Access Channel) only) occur in the O-RU. The rest of the PHY is processed in the O-DU. For the downlink (DL), inverse FFT (iFFT), CP addition, and digital beamforming (if applicable) occur in the O-RU, and the rest of the PHY processing happens in the O-DU.
The connection between the O-RU and the O-DU can be direct or bridged (with Ethernet switches between the O-RU and O-DU) as shown in FIG. 3.
In the O-RAN, there is a possibility that an actual bandwidth supported by a fronthaul link may differ from a port capacity of the O-RU or the O-DU due to several reasons, for example, the presence of one or more switches between the O-RU and the O-DU, or mismatch between port capacities of the O-RU and the O-DU. The O-DU does not check the actual bandwidth which can be supported by a fronthaul link. Consequently, the O-DU might attempt to set a configuration based on its knowledge of the O-RU’s port bandwidth which might result in a fronthaul bandwidth exceeding the actual bandwidth which can be supported by the fronthaul link, thereby leading to packet loss. Such a situation would be highly undesirable as it impacts the performance of the fronthaul link.
The O-DU, not having information about the actual bandwidth which can be supported by the fronthaul link, might also attempt to set a configuration that results in a fronthaul bandwidth much less than the actual bandwidth which can be supported, thereby leading to underutilization of the fronthaul link. Such a situation would be undesirable as well.
Some prior art references for bandwidth management in the O-RAN are given below:
US20200366542A1 discloses a distributed radio frequency communication system that includes a remote radio unit (RRU) and a baseband unit (BBU). The RRU includes an adaptive compression circuitry to adaptively compress FD samples to reduce bandwidth utilization.
US11159982B2 discloses a method for optimization of the fronthaul interface bandwidth for Radio Access Networks and Cloud Radio Access Networks. The fronthaul interface bandwidth is optimized by performing dynamic compression of the IQ data.
US10886976B2 discloses a cloud radio access network (CRAN) system that includes a baseband unit (BBU) and a radio unit (RU) remote from the BBU. The BBU dynamically sends a message to the RU with a field that defines the compression method and IQ bit width for the user data to improve the fronthaul efficiency.
A non-patent literature entitled “Transport Layer and O-RAN Fronthaul Protocol Implementation” discloses bandwidth saving by using real-time variable bitwidth compression.
Another non-patent literature entitled “ORAN-WG4.CUS.0-v01.00 Technical Specification” discloses the configuration of O-DU and O-RU with appropriate bitwidth using the M-plane messages.
While the prior art references disclose various techniques for bandwidth management in the O-RAN, none of them suggests monitoring the actual bandwidth which can be supported by the fronthaul link and modifying the compression bitwidth, based on that actual bandwidth, to avoid overutilization and underutilization of the fronthaul link.

OBJECT OF THE DISCLOSURE
A principal object of the present disclosure is to solve the aforesaid drawbacks and provide methods and a network entity for bandwidth management in an Open Radio Access Network (O-RAN).
Another objective of the present disclosure is to provide methods pertaining to the bandwidth of a fronthaul link between an O-DU and an O-RU of the O-RAN. The present disclosure proposes techniques capable of checking the actual fronthaul link bandwidth in scenarios where a central controller (also called an “RU controller”) may be unaware of the actual bandwidth at which the fronthaul link can operate and configures the O-RU based on a port bandwidth or capacity of the O-RU without monitoring the actual bandwidth which can be supported by the fronthaul link. The proposed disclosure checks and raises an alarm, or correspondingly modifies the compression bitwidth, for any configuration setting that results in a bandwidth either greater or less than the one supported by the fronthaul link.
Another object of the present disclosure is to prevent fronthaul link overutilization and underutilization in the O-RAN.
Another object of the present disclosure is to allow operators to efficiently use the fronthaul link, keep a check on the undesirable cases of packet loss, and avoid underutilization of the fronthaul link in another possible scenario, without affecting the hardware of the O-RAN.

SUMMARY
Accordingly, the present disclosure provides a bandwidth management method in an Open Radio Access Network (O-RAN) by a network entity. The network entity can be, for example, but is not limited to, an Open Distributed Unit (O-DU) and an Open Radio Unit (O-RU), where the O-DU and the O-RU are connected with each other through one of a direct connection and a bridged connection. The bandwidth management method includes computing a fronthaul bandwidth of a fronthaul link and determining utilization of the fronthaul link based on the computed fronthaul bandwidth and the capacity of the network entity. Further, the bandwidth management method includes implementing synchronized bandwidth management in response to determining that the fronthaul link is one of underutilized and overutilized. The synchronized bandwidth management is implemented by selecting a compression bitwidth based on an actual bandwidth, wherein the actual bandwidth is a maximum bandwidth supported by the fronthaul link, and configuring the O-RU through a central controller by monitoring the capacity of the O-RU while monitoring the actual bandwidth supported by the fronthaul link, wherein the central controller comprises one of the O-DU (when the network entity is the O-RU), a Network Management System (NMS), an Equipment Management System (EMS), and a Service Management Orchestration (SMO). Alternatively, the synchronized bandwidth management is implemented by determining that the fronthaul bandwidth is greater or less than the actual bandwidth supported by the fronthaul link, and changing a configuration by selecting the compression bitwidth, which results in a change in the fronthaul bandwidth in response to the determination. Alternatively, the synchronized bandwidth management is implemented by performing one of: decreasing the compression bitwidth when the fronthaul bandwidth is greater than the actual bandwidth supported by the fronthaul link, to avoid overutilization, and increasing the compression bitwidth when the fronthaul bandwidth is less than the actual bandwidth supported by the fronthaul link, to avoid underutilization of the fronthaul link.
Further, the bandwidth management method includes performing one of: preventing the O-RU from entering a forced configuration setting by generating an alarm, so as to prevent overutilization or underutilization of the fronthaul link, and modifying the compression bitwidth for the forced configuration setting that results in a greater bandwidth or a lesser bandwidth than the actual bandwidth supported by the fronthaul link.
These and other aspects herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the invention herein without departing from the spirit thereof.

BRIEF DESCRIPTION OF FIGURES
The invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the drawings. The invention herein will be better understood from the following description with reference to the drawings, in which:
FIG. 1 illustrates functional splits of an O-RAN.
FIG. 2 illustrates an overview of the general architecture of the O-RAN.
FIG. 3 illustrates an example scenario in which a bridged connection between an O-DU and an O-RU in the O-RAN is depicted.
FIG. 4 illustrates various hardware elements of a network entity (i.e., O-DU or O-RU) in communication with a central controller.
FIG. 5 is a flow chart illustrating a bandwidth management method in the O-RAN.
FIG. 6 is a flow chart illustrating the bandwidth management method at the O-RU of the O-RAN, when network topology changes in the O-RAN.
FIG. 7 is a flow chart illustrating the bandwidth management method at the O-RU of the O-RAN, when a user initiates changes in the O-RAN.
FIG. 8 is a flow chart illustrating the bandwidth management method at the O-DU of the O-RAN, when the network topology changes in the O-RAN.
FIG. 9 is a flow chart illustrating the bandwidth management method at the O-DU of the O-RAN, when the user initiates changes in the O-RAN.
FIG. 10 is a flow chart illustrating the bandwidth management method at the O-RU of the O-RAN based on a timer.
FIG. 11 is a flow chart illustrating the bandwidth management method at the O-DU of the O-RAN based on the timer.

DETAILED DESCRIPTION
In the following detailed description of the invention, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be obvious to a person skilled in the art that the invention may be practiced with or without these specific details. In other instances, well known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the invention.
Furthermore, it will be clear that the invention is not limited to these alternatives only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art without departing from the scope of the invention.
The accompanying drawings are used to help easily understand various technical features and it should be understood that the alternatives presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
The deficiencies in previous techniques (as discussed in the background section) can be solved by the proposed disclosure that provides bandwidth management in an Open Radio Access Network (O-RAN).
Now referring to the figures, where FIG. 2 illustrates a general overview architecture of an O-RAN (Open-Radio Access Network) (200). The O-RAN (200) is a part of a telecommunications system which connects individual devices to other parts of a network through radio connections. The O-RAN (200) provides a connection of user equipment (UE) such as mobile phones or computers with a core network of the telecommunication systems. The O-RAN (200) is an essential part of the access layer in telecommunication systems which utilizes base stations (such as eNodeB, and gNodeB) for establishing radio connections. The O-RAN (200) is an evolved version of prior radio access networks, making the prior radio access networks more open and smarter than previous generations. The O-RAN (200) provides real-time analytics that drives embedded machine learning systems and artificial intelligence back-end modules to empower network intelligence. Further, the O-RAN (200) includes virtualized network elements with open and standardized interfaces. The open interfaces are essential to enable smaller vendors and operators to quickly introduce their own services or enable operators to customize the network to suit their own unique needs. Open interfaces also enable multivendor deployments, enabling a more competitive and vibrant supplier ecosystem. Similarly, open-source software and hardware reference designs enable faster, more democratic and permission-less innovation. Further, the O-RAN (200) introduces a self-driving network by utilizing new learning-based technologies to automate operational network functions. These learning-based technologies make the O-RAN intelligent. Embedded intelligence, applied at both component and network levels, enables dynamic local radio resource allocation, and optimizes network-wide efficiency. In combination with O-RAN’s open interfaces, AI-optimized closed-loop automation is a new era for network operations.
The O-RAN (200) may comprise a Service Management and Orchestrator (SMO) (can also be termed as “Service Management and Orchestration Framework”) (102), a Non-Real Time RAN Intelligent Controller (Non-RT-RIC) (104) (act as a master controller) residing in the SMO (102), a Near-Real Time RAN Intelligent Controller (Near-RT-RIC) (106), an Open Evolved NodeB (O-eNB) (108), an Open Central Unit Control Plane (O-CU-CP) (110), an Open Central Unit User Plane (O-CU-UP) (112), an Open Distributed Unit (O-DU) (114), an Open Radio Unit (O-RU) (116), and an Open Cloud (O-Cloud) (118).
The SMO (102) is configured to provide SMO functions/services such as data collection and provisioning services of the O-RAN (200). The data collection of the SMO (102) may include, for example, data related to a bandwidth of a wireless communication network and at least one of a plurality of user equipments (UEs) (not shown in figures). That is, the SMO (102) oversees all the orchestration aspects, management and automation of O-RAN elements and resources and supports O1, A1 and O2 interfaces. Further, when the SMO (102) receives an order to integrate a new cell into the O-RAN (200), the SMO (102) orders the O-Cloud (118) to allocate resources present in cloud infrastructure to an access node (like the O-DU (114)) for configuring and initializing the new cell.
As the O-RAN (200) continues to evolve towards more open architecture, opportunities for innovation in the O-RAN space have proliferated. One of the central components within the O-RAN (200) is a RAN Intelligent Controller (RIC), which provides an open hosting platform and is responsible for controlling and optimizing the RAN functions. The RIC incorporates AI/ML into its decision-making functionalities and comes in two forms: the Non-RT-RIC (104) and the Near-RT-RIC (106), which can be adapted to specific latency or control loop requirements.
The Non-RT-RIC (104) is a logical function that enables non-real-time control and optimization of the O-RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in the Near-RT-RIC (106). The Non-RT-RIC (104) is a part of the SMO Framework (102) and communicates to the Near-RT-RIC (106) using the A1 interface. The Near-RT-RIC (106) is a logical function that enables near-real-time control and optimization of the O-RAN elements and resources via fine-grained data collection and actions over an E2 interface.
Non-Real Time (Non-RT) control functionality (>1s) and Near-Real Time (Near-RT) control functions (<1s) are decoupled in the RIC. The Non-RT functions include service and policy management, RAN analytics and model-training for some of the near-RT-RIC functionality, and non-RT-RIC optimization.
The O-eNB (108) is a hardware aspect of a fourth generation RAN that communicates with at least one of the plurality of UEs via wireless communication networks such as a mobile phone network. The O-eNB (108) is a base station and may also be referred to as e.g., evolved Node B (“eNB”), “eNodeB”, “NodeB”, “B node”, gNB, or BTS (Base Transceiver Station), depending on the technology and terminology used. The O-eNB (108) is a logical node that handles the transmission and reception of signals associated with a plurality of cells (not shown in figures). The O-eNB (108) supports O1 and E2-en interfaces to communicate with the SMO (102) and the Near-RT-RIC (106) respectively.
Further, the O-CU (Open Central Unit) is a logical node hosting RRC (Radio Resource Control), SDAP (Service Data Adaptation Protocol) and PDCP (Packet Data Convergence Protocol). The O-CU is a disaggregated O-CU and includes two sub-components: O-CU-CP (110) and O-CU-UP (112). The O-CU-CP (110) is a logical node hosting the RRC and the control plane part of the PDCP. The O-CU-CP (110) supports O1, E2-cp, F1-c, E1, X2-c, Xn-c and NG-c interfaces for interaction with other components/entities.
Similarly, the O-CU-UP (112) is a logical node hosting the user plane part of the PDCP and the SDAP and uses O1, E1, E2-up, F1-u, X2-u, NG-u and Xn-u interfaces.
The O-DU (114) is a logical node hosting RLC/MAC (Medium access control)/High-PHY layers based on a lower layer functional split and supports O1, E2-du, F1-c, F1-u, OFH CUS–Plane and OFH M-Plane interfaces.
The O-RU (116) is a logical node hosting the Low-PHY layer and RF (Radio Frequency) processing based on a lower layer functional split. This is similar to 3GPP’s “TRP (Transmission and Reception Point)” or “RRH (Remote Radio Head)” but more specific in including the Low-PHY layer (FFT/iFFT, PRACH (Physical Random Access Channel) extraction). The O-RU (116) utilizes OFH CUS–Plane and OFH M-Plane interfaces.
The O-Cloud (118) is a collection of physical RAN nodes (that host various RICs, CUs, and DUs), software components (such as operating systems and runtime environments) and the SMO (102), where the SMO (102) manages and orchestrates the O-Cloud (118) from within via the O2 interface. The O-Cloud (118) refers to a collection of O-Cloud resource pools at one or more locations and the software to manage nodes and deployments hosted on them. The O-Cloud (118) will include functionality to support both deployment-plane and management services.
The various interfaces used in the O-RAN (200), as mentioned above, are now described.
The O1 interface is the element operations and management interface between management entities in the SMO (102) and O-RAN managed elements, for operation and management, by which FCAPS (fault, configuration, accounting, performance, security) management, software management and file management are achieved. The O-RAN managed elements include the Near-RT-RIC (106), the O-CU (the O-CU-CP (110) and the O-CU-UP (112)), the O-DU (114), the O-RU (116) and the O-eNB (108). The management and orchestration functions are received by the aforesaid O-RAN managed elements via the O1 interface. The SMO (102) in turn receives data from the O-RAN managed elements via the O1 interface for AI model training.
The O2 interface is a cloud management interface, where the SMO (102) communicates with the O-Cloud (118) it resides in. Typically, operators that are connected to the O-Cloud (118) can then operate and maintain the O-RAN (200) with the O1 or O2 interfaces.
The A1 interface enables the communication between the Non-RT-RIC (104) and the Near-RT-RIC (106) and supports policy management, machine learning, and enrichment information transfer to assist and train AI and machine learning in the Near-RT-RIC (106).
The E1 interface connects the two disaggregated O-CUs i.e., the O-CU-CP (110) and the O-CU-UP (112) and transfers configuration data (to ensure interoperability) and capacity information between the O-CU-CP (110) and the O-CU-UP (112). The capacity information is sent from the O-CU-UP (112) to the O-CU-CP (110) and includes the status of the O-CU-UP (112).
The Near-RT-RIC (106) connects to the O-CU-CP (110), the O-CU-UP (112), the O-DU (114) and the O-eNB (108) (collectively called an E2 node) with the E2 interface (i.e., E2-cp, E2-up, E2-du, and E2-en respectively) for data collection. An E2 node can connect to only one Near-RT-RIC, but one Near-RT-RIC can connect to multiple E2 nodes. Typically, protocols that go over the E2 interface are control plane protocols that control and optimize the elements of the E2 node and the resources they use.
The F1-c and F1-u interfaces (together forming the F1 interface) connect the O-CU-CP (110) and the O-CU-UP (112) to the O-DU (114) to exchange data about frequency resource sharing and network statuses. One O-CU can communicate with multiple O-DUs via F1 interfaces.
Open fronthaul interfaces i.e., the OFH CUS-Plane (Open Fronthaul Control, User, Synchronization Plane) and the OFH M-Plane (Open Fronthaul Management Plane) connect the O-DU (114) and the O-RU (116). The OFH CUS-Plane is multi-functional, where the control and user feature transfer control signals and user data respectively and the synchronization feature synchronizes activities between multiple RAN devices. The OFH M-Plane optionally connects the O-RU (116) to the SMO (102). The O-DU (114) uses the OFH M-Plane to manage the O-RU (116), while the SMO (102) can provide FCAPS (fault, configuration, accounting, performance, security) services to the O-RU (116).
An X2 interface is broken into the X2-c interface and the X2-u interface. The former carries control plane information and the latter carries user plane information between compatible deployments, such as a 4G network’s eNBs, or between an eNB and a 5G network’s en-gNB.
Similarly, an Xn interface is also broken into the Xn-c interface and the Xn-u interface to transfer control and user plane information respectively between next generation NodeBs (gNBs) or between ng-eNBs or between the two different deployments.
The NG-c (control plane interface) and the NG-u (user plane interface) connect the O-CU-CP (110) and the O-CU-UP (112) respectively to a 5G core. The control plane information is transmitted to a 5G access and mobility management function (AMF) that receives connection and session information from the user equipment and the user plane information is relayed to a 5G user plane function (UPF), which handles tunnelling, routing and forwarding, for example.
FIG. 3 illustrates an example scenario (300) in which a bridged connection between the O-DU (114) and the O-RU (116) in the O-RAN (200) is depicted. The bridged connection is formed between the O-DU (114) and the O-RU (116) in the O-RAN (200) using one or more Ethernet switches (302). An Ethernet switch is a type of network hardware that is foundational to networking and the Internet. The one or more Ethernet switches (302) connect cabled devices, like computers, Wireless Fidelity (Wi-Fi) access points, Power over Ethernet (PoE) lighting and Internet of Things (IoT) devices, and servers, in an Ethernet LAN so that they can communicate with each other and with the Internet.
FIG. 4 illustrates various hardware elements of a network entity (400). The network entity (400) can be, for example, but is not limited to, the O-DU (114) and the O-RU (116), where the O-DU (114) and the O-RU (116) are connected with each other through one of a direct connection and the bridged connection (as shown in FIG. 3). The network entity (400) communicates with a central controller (410). The central controller (410) can be, for example, but is not limited to, the O-DU (114) (when the network entity (400) is the O-RU (116)), a Network Management System (NMS), an Equipment Management System (EMS), and the SMO (102). The central controller (410) configures a radio unit based on the capacity of the radio unit.
The network entity (400) may include a processor (402), a memory (404), a bandwidth management controller (406) and a communicator (408). The processor (402) is configured to execute instructions stored in the memory (404) and to perform various processes related to the present disclosure. The communicator (408) is configured for communicating internally between internal hardware components and with external devices via one or more networks. The memory (404) is configured to store instructions to be executed by the processor (402).
The bandwidth management controller (406) computes a fronthaul bandwidth of a fronthaul link. The amount of traffic supported on the fronthaul link is defined as the fronthaul bandwidth. When the bandwidth utilized between the O-DU (114) and the O-RU (116) is greater or less than the actual bandwidth supported by the fronthaul link, a condition of overutilization or underutilization, respectively, occurs on the fronthaul link. The fronthaul link enables the communication between the O-DU (114) and the O-RU (116) through a fronthaul interface.
The fronthaul bandwidth corresponds to a configuration and depends on a compression bitwidth, wherein a bandwidth of an open fronthaul interface for the configuration is computed (calculated) as:
ORAN 7.2 Fronthaul bandwidth = (No. of Carrier Components × No. of Layers × PRB × (12 Subcarriers × (2 × CompWidth) + Exponent) × 14 symbols per TTI × No. of Slots per frame × 100) / (1000 × 1000) + DL Split 7.2 Control Overhead + Synch Plane Overheads ... equation (1)
wherein PRB is the number of Physical Resource Blocks for a given bandwidth, CompWidth is the compression bitwidth selected for the block compression methods, and Exponent is the power of two in the representation of floating-point numbers.
Using the above fronthaul bandwidth formula (equation 1), the fronthaul bandwidth corresponding to a certain configuration can be determined. As per the requirement, the configuration can be changed by selecting the compression bitwidth, which results in a change in the fronthaul bandwidth. There is a requirement to check the actual bandwidth, which can be supported by the fronthaul link before applying or setting the compression bitwidth. Typically, the compression bitwidth is used to reduce the required fronthaul capacity. If the compression bitwidth is not properly selected, then it can result in a scenario where the required bandwidth exceeds the actual fronthaul bandwidth, which eventually will cause packet drops.
In other words, the configuration is changed by selecting the compression bitwidth, which results in the change in the fronthaul bandwidth based on a requirement, wherein the requirement determines the actual bandwidth which is supported by the fronthaul link before applying the compression bitwidth or setting the compression bitwidth.
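Purely for illustration, equation (1) can be rendered as the short Python sketch below; the helper name and the example parameter values (one carrier component, four layers, 273 PRBs, a 9-bit compression bitwidth, a 4-bit exponent and 20 slots per frame) are assumptions introduced here and do not form part of the disclosure.

def fronthaul_bandwidth_mbps(num_cc, num_layers, num_prb, comp_width, exponent,
                             slots_per_frame, dl_control_overhead_mbps=0.0,
                             sync_plane_overhead_mbps=0.0):
    # Equation (1): O-RAN 7.2 fronthaul bandwidth (Mbps) for one configuration
    bits_per_slot = (num_cc * num_layers * num_prb
                     * (12 * (2 * comp_width) + exponent)  # 12 subcarriers, I and Q samples
                     * 14)                                  # 14 symbols per TTI
    user_plane_mbps = bits_per_slot * slots_per_frame * 100 / (1000 * 1000)
    return user_plane_mbps + dl_control_overhead_mbps + sync_plane_overhead_mbps

# Example with assumed values: roughly 6.7 Gbps for a 9-bit compression bitwidth
print(fronthaul_bandwidth_mbps(num_cc=1, num_layers=4, num_prb=273, comp_width=9,
                               exponent=4, slots_per_frame=20))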
The open fronthaul interface comprises a management plane (M-Plane) and a control user synchronization plane (CUS-Plane), wherein the M-Plane provides a variety of O-RU management functions to set parameters on the O-RU (116), and the CUS-Plane refers to real-time control between the O-DU (114) and the O-RU (116) and traffic synchronization between the O-DU (114) and the O-RU (116), and provides control over the transferred In-phase and Quadrature (IQ) sample data. In an example, the IQ sample data can be sent in uncompressed format (16-bit IQ samples) or in compressed format using block compression methods with an IQ sample bitwidth ranging from 9 to 14. The block compression methods are performed on a Physical Resource Block (PRB) basis (i.e., 12 Resource Elements/subcarriers per PRB). The desired IQ bitwidth for compression can be configured through the M-Plane, or through the C-Plane if it is not static. In other words, the block compression methods (i.e., BFP (Block Floating Point), block scaling, and µ-law compression) are lossy techniques that are performed on a physical resource block (PRB) basis and applied to IQ samples of control-plane and user-plane messages. In general, the PRBs are the resource blocks which are used for actual transmission/reception.
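The block floating point (BFP) compression referred to above can be pictured with the following simplified sketch, in which all 24 I and Q values of one PRB share a single exponent; this is an illustrative approximation and not the exact bit layout defined by the O-RAN CUS-Plane specification.

import numpy as np

def bfp_compress_prb(iq_samples, comp_width):
    # Toy BFP compression of one PRB (12 complex IQ samples): one shared exponent,
    # mantissas constrained to comp_width signed bits
    values = np.concatenate([iq_samples.real, iq_samples.imag])
    max_abs = np.max(np.abs(values))
    max_mantissa = 2 ** (comp_width - 1) - 1
    exponent = int(np.ceil(np.log2(max_abs / max_mantissa))) if max_abs > max_mantissa else 0
    mantissas = np.round(values / 2 ** exponent).astype(int)
    return exponent, mantissas

def bfp_decompress_prb(exponent, mantissas):
    values = mantissas.astype(float) * 2 ** exponent
    return values[:12] + 1j * values[12:]

# Example: 12 random 16-bit IQ samples compressed to 9-bit mantissas
rng = np.random.default_rng(0)
prb = rng.integers(-32768, 32768, 12) + 1j * rng.integers(-32768, 32768, 12)
exponent, mantissas = bfp_compress_prb(prb, comp_width=9)
print(np.max(np.abs(prb - bfp_decompress_prb(exponent, mantissas))))  # quantization error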
Further, the bandwidth management controller (406) determines utilization of the fronthaul link based on the computed fronthaul bandwidth and the capacity of the network entity (400). The computed fronthaul bandwidth of the fronthaul link is determined by validating a configuration of the fronthaul bandwidth to be set by the O-DU (114) based on the actual bandwidth of the fronthaul link, determining whether the fronthaul bandwidth exceeds the actual bandwidth supported by the fronthaul link or is less than the actual bandwidth supported by the fronthaul link, validating the configuration to be set in response to the determination, and tuning the compression bitwidth to deliver the exact margin required as per an operator’s requirement upon validation.
The bandwidth management controller (406) implements synchronized bandwidth management in response to determining that the fronthaul link is one of underutilized and overutilized. The fronthaul bandwidth can be continuously monitored to identify whether the fronthaul link is either underutilized or overutilized. The O-RU (116) is configured to send data based on the real-time monitoring of the fronthaul bandwidth. The synchronized bandwidth management is implemented by comparing the fronthaul bandwidth with the actual bandwidth supported by the fronthaul link, determining that the fronthaul bandwidth is greater or less than the actual bandwidth supported by the fronthaul link, and changing the configuration by selecting the compression bitwidth, which results in a change in the fronthaul bandwidth.
The synchronized bandwidth management is implemented by decreasing the compression bitwidth when the fronthaul bandwidth is greater than the actual bandwidth supported by the fronthaul link to avoid overutilization. Alternatively, the synchronized bandwidth management is implemented by increasing the compression bitwidth, when the fronthaul bandwidth is less than the actual bandwidth supported by the fronthaul link to avoid underutilization of the fronthaul link. Alternatively, the synchronized bandwidth management is implemented by selecting the compression bitwidth based on the actual bandwidth supported by the fronthaul link and configuring the O-RU (116) through the central controller (410) by monitoring the capacity of the O-RU (116) while monitoring the actual bandwidth supported by the fronthaul link.
The bandwidth management controller (406) receives the computed fronthaul bandwidth through a system call. In an example, the ethtool command available in the Linux operating system is used to query and control network device driver and hardware settings, particularly for wired Ethernet devices. A new alarm, called the FH bandwidth alarm, covers all the scenarios of this implementation when it is carried out on the O-RU.
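A minimal sketch of such a system call on a Linux-based network entity is given below; it assumes that the fronthaul port speed is exposed through sysfs (the same figure that ethtool reports), and the interface name fh0 is purely illustrative.

from pathlib import Path

def fronthaul_port_speed_mbps(interface="fh0"):
    # Read /sys/class/net/<interface>/speed, which the Linux kernel keeps in sync
    # with auto-negotiation; the kernel reports -1 when the link is down
    try:
        speed = int(Path(f"/sys/class/net/{interface}/speed").read_text().strip())
    except (OSError, ValueError):
        return None
    return speed if speed > 0 else None

The value returned by this sketch would correspond to the actual bandwidth (A) referred to in the flow charts of FIGS. 6 to 11.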
The bandwidth management controller (406) prevents the O-RU (116) from entering a forced configuration setting by generating an alarm, wherein the forced configuration setting corresponds to antenna models, antenna heights, azimuth, and tilt angles, so as to prevent overutilization or underutilization of the fronthaul link. The bandwidth management controller (406) prevents the O-RU (116) from entering the forced configuration setting by generating the alarm when the bandwidth exceeds the actual bandwidth supported by the fronthaul link or the bandwidth is less than the actual bandwidth supported by the fronthaul link. The bandwidth management controller (406) modifies (by decreasing or increasing) the compression bitwidth for the configuration setting resulting in a greater bandwidth or a lesser bandwidth than the actual bandwidth supported by the fronthaul link.
FIG. 5 is a flow chart (500) illustrating a bandwidth management method in the O-RAN (200). The operations (502-506) are performed by the bandwidth management controller (406).
At step 502, the method includes computing the fronthaul bandwidth of the fronthaul link. At step 504, the method includes determining the utilization of the fronthaul link based on the computed fronthaul bandwidth and the capacity of one of the O-DU (114) and the O-RU (116). At step 506, the method includes implementing the synchronized bandwidth management in response to determining when the fronthaul link is one of underutilized and overutilized.
The proposed method can be used to determine the actual bandwidth supported by the fronthaul link so that it can be consulted before configuring the O-RU (116) or applying the compression bitwidth in the O-RU (116). The method is open for implementation either at the O-RU (116) or at the O-DU (114).
Implementation at the O-RU (116) involves logic that raises an alarm on entering any configuration which would result in a fronthaul bandwidth exceeding the actual bandwidth supported by the fronthaul link or a fronthaul bandwidth less than the actual bandwidth supported by the fronthaul link. The O-RU (116) can obtain the actual bandwidth supported by the fronthaul link through the system call.
Alternatively, implementation at the O-DU (114) creates the capability of the O-DU (114) to check the configuration and conclude whether it is a feasible configuration based on the actual fronthaul bandwidth at that instance (i.e., decrease the compression bitwidth in order to avoid overutilization and increase the compression bitwidth in order to avoid underutilization of the fronthaul link). The O-DU (114) will need to check the actual bandwidth of the fronthaul link for validating the configuration to be set. This value can be checked by using a leaf in a data model of o-ran-transceiver.yang.
Advantageously, the proposed method allows operators to efficiently use the fronthaul link, to keep a check on the undesirable cases of packet loss, and to avoid underutilization of the fronthaul link in another possible scenario, without affecting the hardware of the O-RAN (200) and with only a slight modification in techniques, thereby reaping the benefits of the O-RAN (200).
The proposed method is applicable for LLS-C2 (Lower Layer Split-C2) and LLS-C3 (Lower Layer Split-C3) (i.e., one or more Ethernet switches (302) are allowed between the O-DU (114) and the O-RU (116)) as shown in FIG. 3.
FIG. 6 is a flow chart (600) illustrating the bandwidth management method at the O-RU (116) of the O-RAN (200), when a network topology changes in the O-RAN (200).
At step 602, the method includes obtaining the initial configuration of the O-RU (116). At step 604, the method includes determining whether an auto-neg (automatic negotiation) has taken place or not. If the auto-neg has taken place, then at step 606, the method includes updating a port speed due to the network topology change. If no auto-neg has taken place, then the method keeps monitoring until the auto-neg has taken place. The auto-neg means a link rate is automatically negotiated in the fronthaul because of a hardware or network change.
At step 608, the method includes applying the fronthaul bandwidth formula (equation 1). At step 610, the method includes determining whether B >= X% of A or not, where A is an actual bandwidth supported by the fronthaul link, B is a fronthaul bandwidth computed for the compression bitwidth as per configuration selected, X% of A will be considered as an upper threshold (boundary) for the computed fronthaul bandwidth (B) and X is a numerical value decided by an operator based on the requirement. If B >= X% of A, then at step 612, the method includes generating an alarm to notify overutilization of the fronthaul link. If B <= X% of A, then at step 614, the method includes determining whether B < Y% of A and the compression bitwidth can be increased, where Y% of A will be considered as a lower threshold (boundary) for the computed fronthaul bandwidth (B) and Y is a numerical value decided by the operator based on the requirement. If B < Y% of A and the compression bitwidth is increased, then at step 616, the method includes generating the alarm to notify underutilization of the fronthaul link. If the computed fronthaul bandwidth (B) is between the upper threshold and the lower threshold and the compression bitwidth is increased or decreased, then at step 618, the method waits until a change in the network topology is detected, upon which the flow returns to step 604.
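The threshold comparison of steps 610 to 618 can be condensed into the following sketch; the function name, the string return values, and the default values X = 90 and Y = 70 are illustrative assumptions, and A, B, X and Y have the meanings given above.

def check_fronthaul_utilization(computed_bw_mbps, actual_bw_mbps, x_pct=90, y_pct=70):
    # Classify the fronthaul link as in steps 610-618; X and Y are operator-defined
    upper = actual_bw_mbps * x_pct / 100   # upper threshold: X% of A
    lower = actual_bw_mbps * y_pct / 100   # lower threshold: Y% of A
    if computed_bw_mbps >= upper:
        return "OVERUTILIZED"              # step 612: alarm for overutilization
    if computed_bw_mbps < lower:
        return "UNDERUTILIZED"             # step 616: alarm for underutilization
    return "OK"                            # step 618: wait for the next topology change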
FIG. 7 is a flow chart (700) illustrating the bandwidth management method at the O-RU (116) of the O-RAN (200), when a user initiates changes in the O-RAN (200).
At step 702, the method includes obtaining the initial configuration of the O-RU (116). At step 704, the method includes determining whether the compression bitwidth is updated or not. If the compression bitwidth is updated, then at step 706, the method includes applying the fronthaul bandwidth formula (equation 1). If the compression bitwidth has not been updated, then the method keeps monitoring until the compression bitwidth has been updated. At step 708, the method includes determining whether B >= X% of A or not, where A is the actual bandwidth supported by the fronthaul link, B is the fronthaul bandwidth computed for the compression bitwidth as per configuration selected, and X is a numerical value decided by the operator based on the requirement. If B >= X% of A, then at step 710, the method includes generating the alarm to notify overutilization of the fronthaul link. If B <= X% of A, then at step 712, the method includes determining if B < Y% of A and the compression bitwidth can be increased, where Y is a numerical value decided by the operator based on the requirement. If B < Y% of A and the compression bitwidth is increased, then at step 714, the method includes generating the alarm to notify underutilization of the fronthaul link. If B > Y% of A and the compression bitwidth is increased or decreased, then the flow chart (700) proceeds to step 704.
FIG. 8 is a flow chart (800) illustrating the bandwidth management method at the O-DU (114) of the O-RAN (200), when the network topology changes in the O-RAN (200).
At step 802, the method includes obtaining the initial configuration of the O-RU (116). At step 804, the method includes determining whether the auto-neg has taken place or not. If the auto-neg has taken place, then at step 806, the method includes updating the port speed due to the network topology change. If no auto-neg has taken place, then the method keeps monitoring until the auto-neg has taken place. At step 808, the O-DU (114) obtains the updated bandwidth. At step 810, the method includes applying the fronthaul bandwidth formula (equation 1). At step 812, the method includes determining whether B >= X% of A or not, where A is the actual bandwidth supported by the fronthaul link, B is the fronthaul bandwidth computed for the compression bitwidth as per configuration selected, and X is a numerical value decided by the operator based on the requirement. If B >= X% of A, then at step 814, the method includes decreasing the compression bitwidth. Accordingly, at step 816, the O-RU is configured with the decreased compression bitwidth.
If B <= X% of A, then at step 818, the method includes determining whether B < Y% of A and the compression bitwidth can be increased, where Y is a numerical value decided by the operator based on the requirement. If B < Y% of A and the compression bitwidth can be increased, then at step 820, the method includes increasing the compression bitwidth. At step 822, the O-RU (116) is configured with the increased compression bitwidth. If B > Y% of A and the compression bitwidth is increased or decreased, then at step 824, the method includes detecting a change in the network topology, and the flow returns to step 804.
FIG. 9 is a flow chart (900) illustrating the bandwidth management method at the O-DU (114) of the O-RAN (200), when a user initiates changes in the O-RAN (200).
At step 902, the method includes obtaining the initial configuration of the O-RU (116). At step 904, the method includes determining whether the compression bitwidth is updated or not. If the compression bitwidth is updated, then at step 906, the O-DU (114) obtains the updated bandwidth. At step 908, the method includes applying the fronthaul bandwidth formula (equation 1). At step 910, the method includes determining whether B >= X% of A, where A is the actual bandwidth supported by the fronthaul link, B is the fronthaul bandwidth computed for the compression bitwidth as per configuration selected, and X is a numerical value decided by the operator based on the requirement. If B >= X% of A, then at step 912, the method includes decreasing the compression bitwidth. Accordingly, at step 914, the O-RU (116) is configured with the decreased compression bitwidth. If B <= X% of A, then at step 916, the method includes determining if B < Y% of A and the compression bitwidth can be increased, where Y is a numerical value decided by the operator based on the requirement. If B < Y% of A and the compression bitwidth can be increased, then at step 918, the method includes increasing the compression bitwidth. Accordingly, at step 920, the O-RU (116) is configured with the increased compression bitwidth. If B > Y% of A and the compression bitwidth is increased or decreased, then the flow chart (900) proceeds to step 904.
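On the O-DU side, the corrective action of FIGS. 8 and 9 amounts to stepping the compression bitwidth until the computed fronthaul bandwidth falls between the two thresholds. The sketch below is illustrative only: it assumes the 9-to-14-bit block compression range mentioned earlier and reuses the fronthaul_bandwidth_mbps() and check_fronthaul_utilization() helpers sketched above.

def adjust_compression_bitwidth(comp_width, carrier, actual_bw_mbps, min_width=9, max_width=14):
    # Step the bitwidth down on overutilization and up on underutilization
    # (steps 812-822 and 910-920); the loop is bounded by the bitwidth range
    for _ in range(max_width - min_width + 1):
        computed = fronthaul_bandwidth_mbps(comp_width=comp_width, **carrier)
        state = check_fronthaul_utilization(computed, actual_bw_mbps)
        if state == "OVERUTILIZED" and comp_width > min_width:
            comp_width -= 1                # decrease bitwidth to shed fronthaul load
        elif state == "UNDERUTILIZED" and comp_width < max_width:
            comp_width += 1                # increase bitwidth to use spare capacity
        else:
            break                          # within thresholds, or no further step possible
    return comp_width

# Example with assumed carrier parameters and a 10 Gbps fronthaul link; under these
# assumptions the loop settles on a 12-bit compression bitwidth
carrier = dict(num_cc=1, num_layers=4, num_prb=273, exponent=4, slots_per_frame=20)
print(adjust_compression_bitwidth(14, carrier, actual_bw_mbps=10000))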
FIG. 10 is a flow chart (1000) illustrating the bandwidth management method at the O-RU (116) of the O-RAN (200) based on a timer.
At step 1002, the method includes starting a first timer and the first counter in a first loop. At step 1004, the method includes applying the fronthaul bandwidth formula (equation 1). At step 1006, the method includes determining whether B >= X% of A or not, where A is the actual bandwidth supported by the fronthaul link, B is the fronthaul bandwidth computed for the compression bitwidth as per configuration selected, and X is a numerical value decided by the operator based on the requirement. If B >= X% of A, then at step 1008, the method includes determining whether the first counter < Z or not, where Z is a numerical value decided by the operator based on the requirement. If the first counter < Z, then the flow chart (1000) proceeds to step 1004. If the first counter > Z, then at step 1010, the method includes generating the alarm to notify overutilization of the fronthaul link.
If B <= X% of A, then at step 1012, the method includes exiting the first loop and starting a second timer and a second counter in a second loop. At step 1014, the method includes applying the fronthaul bandwidth formula (equation 1). At step 1016, the method includes determining if B < Y% of A and the compression bitwidth can be increased, where Y is a numerical value decided by the operator based on the requirement. If B < Y% of A and the compression bitwidth is increased, then at step 1018, the method includes determining whether the second counter < Z or not, where Z is a numerical value decided by the operator based on the requirement. If the second counter < Z, then the flow chart (1000) proceeds to step 1014. If the second counter > Z, then at step 1020, the method includes generating the alarm to notify underutilization of the fronthaul link. If B > Y% of A and the compression bitwidth is increased or decreased, then at step 1022, the method includes exiting the second loop.
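The timer-and-counter loops of FIGS. 10 and 11 essentially require the over- or under-utilization condition to persist across more than Z consecutive samples before an alarm is raised or the bitwidth is changed. The single-loop sketch below is a condensed, illustrative rendering of the two loops; sample_bw_mbps is an assumed callable that returns the latest computed fronthaul bandwidth, and the Z and interval values are placeholders.

import time

def debounced_fronthaul_state(sample_bw_mbps, actual_bw_mbps, z=5, interval_s=1.0):
    # Return OVERUTILIZED/UNDERUTILIZED only if the condition persists for more
    # than Z consecutive samples; return OK as soon as the link is within limits
    counter, last = 0, None
    while True:
        state = check_fronthaul_utilization(sample_bw_mbps(), actual_bw_mbps)
        if state == "OK":
            return "OK"                    # exit the loop, as in steps 1012 and 1022
        if state == last:
            counter += 1
            if counter > z:
                return state               # persistent condition: alarm or bitwidth change
        else:
            counter, last = 1, state       # condition changed: restart the counter
        time.sleep(interval_s)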
FIG. 11 is a flow chart (1100) illustrating the bandwidth management method at the O-DU (114) of the O-RAN (200) based on the timer.
At step 1102, the method includes starting the first timer and the first counter in the first loop. At step 1104, the method includes applying the fronthaul bandwidth formula (equation 1). At step 1106, the method includes determining whether B >= X% of A or not, where A is the actual bandwidth supported by the fronthaul link, B is the fronthaul bandwidth computed for the compression bitwidth as per configuration selected, and X is a numerical value decided by the operator based on the requirement. If B >= X% of A, then at step 1108, the method includes determining whether the first counter < Z or not, where Z is a numerical value decided by the operator based on the requirement. If the first counter < Z, then the flow chart (1100) proceeds to step 1104. If the first counter > Z, then at step 1110, the method includes decreasing compression bitwidth.
If B <= X% of A, then at step 1112, the method includes exiting the first loop and starting the second timer and the second counter in the second loop. At step 1114, the method includes applying the formula (equation 1). At step 1116, the method includes determining if B < Y% of A and compression bitwidth can be increased. If B < Y% of A and compression bitwidth can be increased, then at step 1118, the method includes determining whether the second counter < Z or not, where Z is a numerical value decided by the operator based on the requirement. If the second counter < Z, then at step 1114, the method includes applying the fronthaul bandwidth formula (equation 1). If the second counter > Z, then at step 1120, the method includes increasing the compression bitwidth. If B > Y% of A and compression bitwidth is increased or decreased, then at step 1122, the method includes exiting the second loop.
Bandwidth exceeding actual Fronthaul bandwidth:
A possible scenario is one where, due to a hardware or network change (e.g., auto-neg), the running fronthaul bandwidth changes. The RU controller (O-DU/NMS/EMS/SMO), unaware of this change and not monitoring the actual bandwidth supported by the fronthaul link, will try to set a forced configuration which uses a compression bitwidth resulting in a bandwidth exceeding the actual running fronthaul bandwidth, which will cause packet drops. In this scenario, a flag would be required corresponding to the configuration failed state, which thereby would generate an alarm to intimate the RU controller (O-DU/NMS/EMS/SMO) about the same.
For example, if X=90, then 90% of the actual bandwidth (A) supported by the fronthaul link will be considered as the upper threshold and when the computed fronthaul bandwidth (B) crosses above the upper threshold, the alarm corresponding to overutilization of the fronthaul link is generated. The numerical value of the X in the above example is for illustration purposes. The exact margin required can be defined as per the operator’s requirement.
Bandwidth lesser than actual Fronthaul bandwidth: Another scenario is possible where the RU controller (O-DU/NMS/EMS/SMO), unaware of the actual bandwidth supported by the fronthaul link, will try to set a configuration that uses a smaller compression bitwidth resulting in a bandwidth much lesser than the actual possible bandwidth supported by the fronthaul link. For example, if Y=70, then 70% of the actual bandwidth (A) supported by the fronthaul link will be considered as the lower threshold and when the computed fronthaul bandwidth (B) goes below the lower threshold, the alarm is generated to intimate the RU controller (O-DU/NMS/EMS/SMO) that the fronthaul link is underutilized. The numerical value of the Y in the above example is for illustration purposes. The exact margin required can be defined as per the operator’s requirement.
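For instance, reusing the illustrative check_fronthaul_utilization() helper sketched earlier with the example values X = 90 and Y = 70 and an assumed 10 Gbps (10000 Mbps) fronthaul link:

# Upper threshold = 9000 Mbps (90% of A), lower threshold = 7000 Mbps (70% of A)
print(check_fronthaul_utilization(9500, 10000))  # "OVERUTILIZED"  -> overutilization alarm
print(check_fronthaul_utilization(6500, 10000))  # "UNDERUTILIZED" -> underutilization alarm
print(check_fronthaul_utilization(8000, 10000))  # "OK"            -> no alarm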
The proposed method can be used to check and raise the alarm or to correspondingly modify the compression bitwidth for any configuration setting with the compression bitwidth resulting in a bandwidth either greater or lesser than the one supported by the fronthaul link. By doing this, the overutilization and/or underutilization scenario of the fronthaul link can be avoided and the fronthaul link is efficiently utilized. The proposed method allows operators to efficiently use the fronthaul link and keeps a check on undesirable cases of packet loss.
The various actions, acts, blocks, steps, or the like in the flow charts (500-1100) may be performed in the order presented, in a different order or simultaneously. Further, in some implementations, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above-described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general-purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
Conditional language used herein, such as, among others, "can," "may," "might," “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain alternatives include, while other alternatives do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more alternatives or that one or more alternatives necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular alternative. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain alternatives require at least one of X, at least one of Y, or at least one of Z to each be present.
While the detailed description has shown, described, and pointed out novel features as applied to various alternatives, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain alternatives described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
Claims:CLAIMS

We Claim:
1. A bandwidth management method in an Open Radio Access Network (O-RAN) (200), wherein the bandwidth management method comprises:
computing, by a network entity (400), a fronthaul bandwidth of a fronthaul link;
determining, by the network entity (400), utilization of the fronthaul link based on the computed fronthaul bandwidth and a capacity of the network entity (400); and
implementing, by the network entity (400), synchronized bandwidth management in response to determining when the fronthaul link is one of underutilized and overutilized.

2. The bandwidth management method as claimed in claim 1, wherein implementing the synchronized bandwidth management comprises:
selecting a compression bitwidth based on an actual bandwidth, wherein the actual bandwidth is a maximum bandwidth supported by the fronthaul link; and
configuring an Open Radio Unit (O-RU) (116) through a central controller (410) by monitoring the capacity of the O-RU (116) while monitoring the actual bandwidth supported by the fronthaul link, wherein the central controller (410) comprises one of an Open Distributed Unit (O-DU) (114), a Network Management System (NMS), an Equipment Management System (EMS), and a Service Management and Orchestration (SMO) module (102).

3. The bandwidth management method as claimed in claim 2, wherein the bandwidth management method comprises receiving, at the O-RU (116), the computed fronthaul bandwidth through a system call.

4. The bandwidth management method as claimed in claim 1, wherein the fronthaul bandwidth corresponds to a configuration and depends on a compression bitwidth, wherein a bandwidth of an open fronthaul interface for the configuration is computed as:
ORAN 7.2 Fronthaul Bandwidth = (No. of Carrier Components × No. of Layers × PRB × (12 Subcarriers × (2 × CompWidth) + Exponent) × 14 symbols per TTI × No. of Slots per frame × 200) / (2000 × 2000) + DL Split 7.2 Control Overhead + Synch Plane Overheads,
wherein PRB is the number of Physical Resource Blocks for a given bandwidth, CompWidth is the compression bitwidth selected for block compression methods, and Exponent is the power of two in the representation of floating-point numbers.
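By way of a non-limiting illustration that does not form part of the claims, the expression of claim 4 can be evaluated programmatically. The Python sketch below mirrors the claimed formula term by term; the function name and all parameter names other than PRB, CompWidth and Exponent are assumptions introduced for readability.

```python
# Illustrative sketch of the fronthaul-bandwidth expression recited in claim 4.
# All parameter names other than prb, comp_width and exponent are assumptions
# introduced for readability; they are not defined by the claim itself.

def oran_72_fronthaul_bandwidth(num_carrier_components, num_layers, prb,
                                comp_width, exponent, slots_per_frame,
                                dl_control_overhead, sync_plane_overhead):
    """Evaluate (CC x layers x PRB x (12 x (2 x CompWidth) + Exponent) x 14
    x slots-per-frame x 200) / (2000 x 2000) + control and sync overheads."""
    bits_per_prb = 12 * (2 * comp_width) + exponent   # 12 subcarriers, I and Q samples
    numerator = (num_carrier_components
                 * num_layers
                 * prb
                 * bits_per_prb
                 * 14                                  # symbols per TTI
                 * slots_per_frame
                 * 200)
    return numerator / (2000 * 2000) + dl_control_overhead + sync_plane_overhead
```

The factor of 200 and the divisor 2000 × 2000 are carried over verbatim from claim 4; no additional unit conversion is assumed.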

5. The bandwidth management method as claimed in claim 4, wherein the configuration is changed by selecting the compression bitwidth, which results in a change in the fronthaul bandwidth based on a requirement, wherein the requirement determines the actual bandwidth which is supported by the fronthaul link before applying the compression bitwidth or setting the compression bitwidth.

6. The bandwidth management method as claimed in claim 4, wherein the open fronthaul interface comprises a management plane (M-Plane) and a control user synchronization plane (CUS-Plane), wherein the M-Plane provides a variety of O-RU management functions to set parameters on the O-RU (116), and wherein the CUS-Plane refers to real-time control between the O-DU (114) and the O-RU (116), provides traffic synchronization between the O-DU (114) and the O-RU (116) and provides control of transferred In-phase and Quadrature (IQ) sample data.

7. The bandwidth management method as claimed in claim 1, wherein implementing the synchronized bandwidth management comprises:
comparing, by the network entity (400), the fronthaul bandwidth with an actual bandwidth supported by the fronthaul link;
determining, by the network entity (400), that the fronthaul bandwidth is greater than or less than the actual bandwidth supported by the fronthaul link; and
changing, by the network entity (400), a configuration by selecting a compression bitwidth, which results in a change in the fronthaul bandwidth.

8. The bandwidth management method as claimed in claim 1, wherein the bandwidth management method comprises:
performing, by the network entity (400), one of:
preventing the O-RU (116) from entering a forced configuration setting by generating an alarm, wherein the forced configuration setting results in a greater fronthaul bandwidth or a lesser fronthaul bandwidth than the actual bandwidth supported by the fronthaul link, and
modifying a compression bitwidth for the forced configuration setting so as to prevent one of the overutilization or underutilization of the fronthaul link.

9. The bandwidth management method as claimed in claim 8, wherein preventing the O-RU (116) comprises generating the alarm when the fronthaul bandwidth exceeds the actual bandwidth supported by the fronthaul link or the fronthaul bandwidth is less than the actual bandwidth supported by the fronthaul link.

10. The bandwidth management method as claimed in claim 1, wherein implementing the synchronized bandwidth management comprises:
performing, by the network entity (400), one of:
decreasing a compression bitwidth when the fronthaul bandwidth is greater than an actual bandwidth supported by the fronthaul link to avoid overutilization; and
increasing the compression bitwidth when the fronthaul bandwidth is less than the actual bandwidth supported by the fronthaul link to avoid underutilization of the fronthaul link.
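
As a further non-limiting illustration, claims 7 and 10 describe lowering the compression bitwidth when the computed fronthaul bandwidth would overutilize the link and raising it when the link would be underutilized. The hedged sketch below implements that adjustment; the bitwidth bounds, the step size of one bit and the config dictionary layout are assumptions, and oran_72_fronthaul_bandwidth is the illustrative helper given after claim 4.

```python
# Hedged sketch of claims 7 and 10: choose a compression bitwidth whose resulting
# fronthaul bandwidth fits the actual bandwidth supported by the fronthaul link.
# The bounds min_width/max_width and the config keys are illustrative assumptions.

def select_compression_bitwidth(actual_bandwidth, config, min_width=4, max_width=16):
    def required(width):
        # Fronthaul bandwidth that a candidate bitwidth would generate (claim 4 expression).
        return oran_72_fronthaul_bandwidth(
            config["num_cc"], config["num_layers"], config["prb"],
            width, config["exponent"], config["slots_per_frame"],
            config["dl_ctrl_overhead"], config["sync_overhead"])

    width = config["comp_width"]
    # Overutilized link: decrease the bitwidth until the traffic fits (claim 10, first branch).
    while required(width) > actual_bandwidth and width > min_width:
        width -= 1
    # Underutilized link: increase the bitwidth while the result still fits the link,
    # recovering IQ precision instead of leaving capacity idle (claim 10, second branch).
    while width < max_width and required(width + 1) <= actual_bandwidth:
        width += 1
    return width
```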

11. The bandwidth management method as claimed in claim 1, wherein the O-RAN (200) comprises the network entity (400), wherein the network entity (400) comprises one of an Open Distributed Unit (O-DU) (114) and an Open Radio Unit (O-RU) (116), wherein the O-DU (114) and the O-RU (116) are connected with each other through one of a direct connection and a bridged connection, wherein the bandwidth management method is implemented in the network entity (400).

12. The bandwidth management method as claimed in claim 1, wherein determining, by the network entity (400), the computed fronthaul bandwidth of the fronthaul link comprises:
validating a configuration of the fronthaul bandwidth, by an O-DU (114), to be set based on an actual bandwidth of the fronthaul link;
determining whether the fronthaul bandwidth exceeds the actual bandwidth supported by the fronthaul link or the fronthaul bandwidth is less than the actual bandwidth supported by the fronthaul link;
validating the configuration to be set in response to the determination; and
tuning a compression bitwidth to meet the operator requirements.
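
A minimal sketch of the O-DU-side validation of claims 8, 9 and 12, given purely for illustration and not forming part of the claims: a forced configuration setting is accepted only when its computed fronthaul bandwidth matches the actual bandwidth of the fronthaul link; otherwise an alarm is generated and the compression bitwidth is retuned. The raise_alarm callback and the config layout are assumptions, and the helpers are the illustrative sketches given after claims 4 and 10.

```python
# Hedged sketch of claims 8, 9 and 12: validate a forced configuration setting against
# the actual bandwidth of the fronthaul link, generate an alarm on a mismatch, and
# retune the compression bitwidth. raise_alarm is an assumed callback, not an O-RAN API.

def validate_forced_configuration(actual_bandwidth, config, raise_alarm):
    required = oran_72_fronthaul_bandwidth(
        config["num_cc"], config["num_layers"], config["prb"],
        config["comp_width"], config["exponent"], config["slots_per_frame"],
        config["dl_ctrl_overhead"], config["sync_overhead"])

    if required != actual_bandwidth:
        # Claim 9: the alarm is generated when the computed bandwidth exceeds, or is
        # less than, the actual bandwidth supported by the fronthaul link.
        raise_alarm("fronthaul mismatch: configuration needs "
                    f"{required:.1f}, link supports {actual_bandwidth:.1f}")
        # Claim 12: tune the compression bitwidth to meet the operator requirement.
        config["comp_width"] = select_compression_bitwidth(actual_bandwidth, config)
    return config
```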

13. A network entity (400) for bandwidth management in an Open Radio Access Network (O-RAN) (200), wherein the network entity (400) comprises:
a processor (402);
a memory (404); and
a bandwidth management controller (406), coupled with the processor (402) and the memory (404), configured to:
compute a fronthaul bandwidth of a fronthaul link;
determine the utilization of the fronthaul link based on the computed fronthaul bandwidth and the capacity of the network entity (400); and
implement synchronized bandwidth management in response to determining when the fronthaul link is one of underutilized and overutilized.

14. The network entity (400) as claimed in claim 13, wherein the network entity (400) comprises one of an Open Distributed Unit (O-DU) (114) and an Open Radio Unit (O-RU) (116), wherein the O-DU (114) and the O-RU (116) are connected with each other through one of a direct connection and a bridged connection.
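
Finally, as a non-limiting sketch of how the network entity of claims 13 and 14 could tie the above steps together, the bandwidth management controller below computes the fronthaul bandwidth, determines whether the fronthaul link is under- or overutilized, and triggers the synchronized adjustment; the class layout and attribute names are assumptions, and it reuses the earlier illustrative helpers.

```python
# Hedged sketch of the network entity of claims 13 and 14. The class layout and the
# attribute names are assumptions; it reuses the illustrative helpers defined above.

class BandwidthManagementController:
    def __init__(self, config, actual_bandwidth, raise_alarm):
        self.config = config                      # fronthaul configuration (assumed dict layout)
        self.actual_bandwidth = actual_bandwidth  # maximum bandwidth the fronthaul link supports
        self.raise_alarm = raise_alarm            # alarm hook toward the central controller

    def run(self):
        # Step 1: compute the fronthaul bandwidth of the fronthaul link.
        required = oran_72_fronthaul_bandwidth(
            self.config["num_cc"], self.config["num_layers"], self.config["prb"],
            self.config["comp_width"], self.config["exponent"],
            self.config["slots_per_frame"], self.config["dl_ctrl_overhead"],
            self.config["sync_overhead"])
        # Steps 2 and 3: determine whether the link is under- or overutilized and,
        # if so, implement the synchronized bandwidth management of claims 7 to 10.
        if required != self.actual_bandwidth:
            self.config = validate_forced_configuration(
                self.actual_bandwidth, self.config, self.raise_alarm)
        return self.config
```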
