Abstract: The present disclosure provides methods for configuring and operating a radio network sandbox (202) in a wireless network (200). The method includes generating at least one synthetic network data projected from existing network data. The existing network data is received from a real network (112). The existing network data is received continuously. Further, the method includes temporarily switching a deployed radio network application (206) between the real network (112) and the radio network sandbox (202) for an in-service validation based on the at least one generated synthetic network data. The temporary switching is performed on an as-needed basis. FIG. 7
Description: FORM 2
The Patents Act, 1970
(39 of 1970)
&
The Patent Rules, 2003
COMPLETE SPECIFICATION
(SEE SECTION 10 AND RULE 13)
TITLE OF THE INVENTION
“SYSTEMS AND METHODS FOR CONFIGURING AND OPERATING RADIO NETWORK SANDBOX”
APPLICANT:
Name : Sterlite Technologies Limited
Nationality : Indian
Address : 3rd Floor, Plot No. 3, IFFCO Tower,
Sector – 29, Gurugram, Haryana
122002
The following specification particularly describes the invention and the manner in which it is to be performed:
TECHNICAL FIELD
[0001] The present disclosure relates to wireless networks, and more specifically relates to the wireless networks and methods for configuring and operating a radio network sandbox in the wireless networks.
BACKGROUND
[0002] FIG. 1 illustrates closed loop automation by xApps (110) /rApps (106) in an Open Radio Access Network (O-RAN) architecture (100), according to prior art. The O-RAN architecture (100) includes a Service Management and Orchestration (SMO) (102), a Non-Real Time RAN Intelligent Controller (Non-RT-RIC) (104), the rApps (106), Near-Real Time RAN Intelligent Controller (Near-RT-RIC) (108), the xApps (110) and a real network (112). The real network (112) can be a Radio access network (RAN) (e.g., 5G network, 6G network or the like).
[0003] The SMO (102) is defined within the O-RAN architecture (100) for RAN management, wherein the SMO (102) provides automation and orchestration of an O-RAN domain and is critical for driving automation within the operator network. The RAN is a type of network infrastructure used commonly for mobile networks and consists of radio base stations with large antennas. The RAN wirelessly connects a user equipment (e.g., mobile phone) to a core network. In the RAN, the radio unit (RU) processes digital radio signals and transmits, receives, and converts the signals for a RAN base station. When the RU receives signal information from the antennas, it communicates with the baseband unit (BBU) using a Common Public Radio Interface (CPRI). The BBU takes the signal information and processes it, so it can be forwarded to the core network. Data returns to the user via the reverse process. The Non-RT RIC (104) is an Orchestration and Automation function described by the O-RAN Alliance for non-real-time intelligent management of RAN functions. The Near-RT RIC (108) is an element of the open RAN architecture that controls and optimizes other RAN elements and resources. Its actions on other elements take between 10 milliseconds and one second.
[0004] In the present scenario, with the O-RAN as explained above, possibilities are opening up for Mobile Network Operators (MNOs) to create network automation capabilities on open platforms (e.g., Near-RT-RIC (108), Non-RT-RIC (104) or the like). The Non-RT-RIC (104) collects performance data, configuration data, and fault data from the real network (112). Based on the collected performance data, collected configuration data, and collected fault data from the real network, the Non-RT-RIC (104) creates a policy guidance to control the Near-RT-RIC (108) and the real network (112). Also, application vendors need to make sure that their application works in the way it is intended, on a RAN Intelligent Controller (RIC) as well as on an MNO's (Mobile Network Operator) network. But it is too risky to try it directly on a live network, and lab tests using dummy networks may not be adequate. Further, there is a need for a capability to test and validate applications in a setting as close as possible to real network scenarios. However, no specific procedures and systems have been described to synthesize near-real-life network data and simulate network behaviours as close to reality as possible, with respect to the application under test.
[0005] Some of the prior art references are given below to synthesize near-real life network data and simulate network behaviours in the wireless networks:
[0006] US20229039B2 describes testing of a virtual network function by a virtual network tester. A test environment is generated on demand that can emulate a radio access network (RAN) to test different use case scenarios in the radio access network.
[0007] A non-patent literature entitled “European Telecommunications Standards Institute (ETSI) announces multi-access edge computing (MEC) sandbox for edge application developers” describes a sandbox that provides a user with a choice of scenarios combining different network technologies (e.g., fourth generation (4G), fifth generation (5G), Wireless Fidelity (Wi-Fi)) and terminal equipment, such as vehicles, pedestrians or connected objects. The sandbox is designed to allow application developers to experience and interact with an implementation of ETSI MEC application programming interface (APIs) and test out their applications.
[0008] Another non-patent literature entitled “mobile core network emulator” discloses a solution that is capable of simulating and testing several devices individually or in parallel. A technician can use this emulator solution to test a base station before deployment in the real network.
[0009] Another non-patent literature entitled “sandbox light for device testing” describes a sandbox platform which is deployed on premises and connected to management portal to test multiple devices in real time.
[0010] However, none of the prior art references discloses a radio network sandbox allowing in-service use of the sandbox on an as-needed basis. Also, none of the prior art references discloses temporarily switching a radio network application to the radio network sandbox for an in-service validation. In light of the above discussion, there is a need to overcome the above-stated disadvantages.
OBJECT OF THE DISCLOSURE
[0011] A principal object of the present disclosure is to provide methods for configuring and operating a radio network sandbox (RNS) in a wireless network (e.g., fifth generation (5G) network, open radio access network (O-RAN) network, sixth generation (6G) network or the like).
[0012] Another object of the present disclosure is to provide the RNS obtaining real network key performance indicator (KPI) data (e.g., Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), Received Signal Strength Indicator (RSSI), Signal-to-Interference-Plus-Noise Ratio (SINR), Primary Component Carrier Physical Downlink (PCC PHY DL) Throughput, and a Packet Data Convergence Protocol (PDCP) DL Throughput or the like) and applying an Artificial Intelligence (AI)/Machine Learning (ML) data-driven model to generate synthetic network data projected from the existing network data.
[0013] Another object of the present disclosure is to learn a network model based on the AI/ML data-driven model for network key performance indicators (KPIs) within a time window.
[0014] Another object of the present disclosure is to insert at least one anomaly in the synthetic network data for what-if scenarios.
[0015] Yet another object of the present disclosure is to enable iterative control loop processing to iteratively simulate to generate outputs to a radio network application (RNA) and a radio network application platform (RNAP) and allow the RNA to control the RNS based on an output from the simulation.
[0016] Yet another object of the present disclosure is to enable iterative refinement of the RNA even after the RNA is deployed in a real network.
SUMMARY
[0017] Accordingly, the present disclosure provides methods for configuring and operating a radio network sandbox (RNS) in a wireless network. The RNS generates at least one synthetic network data projected from existing network data, where the existing network data is received from a real network. The existing network data is continuously received from the real network. The at least one synthetic network data is generated by observing key performance indicator (KPI) time series information collected from the real network over a period of time. The RNS generates at least one synthetic KPI time series information based on the observed KPI time series information and a plurality of what-if scenarios. The RNS creates at least one anomaly based on integration with the at least one real-time data stream, reflecting significant real-world challenges such as real-time data gathering, filtering and processing, device malfunctioning, and poor calibration, and inserts the at least one anomaly in the at least one generated synthetic KPI time series information. The plurality of what-if scenarios are powered by the at least one real-time data stream. The at least one synthetic KPI time series information is analysed for a KPI curve and an abnormal behaviour of KPIs that imply at least one potential fault in a deployed Radio Network Application (RNA), where the at least one potential fault can comprise variations in KPIs such as increasing access latency or downlink/uplink queueing latencies, a network connectivity failure, and/or sharp decreases in accessibility or retainability metrics or delivered downlink/uplink throughput for one or more users in the network. Based on the at least one generated synthetic network data, the RNS temporarily switches the deployed RNA between the real network and the RNS for an in-service validation. The temporary switching is performed on an as-needed basis.
[0018] Further, the RNS creates a network model based on at least one KPI time series of real data received from the real network over the period of time, where the at least one received KPI time series is observed and synthesized for creating the network model, where the network model generates the at least one synthetic network data projected from the existing network data and provides temporary switching of the deployed RNA between the real network and the RNS for the in-service validation on an as-needed basis. Further, the RNS continuously refines the network model. The network model is refined by continuous guidance and performance feedback received from the deployed RNA and a Radio Network Application Platform (RNAP), wherein the network model is executed in the RNS. The learning of the network model is done based on real network KPI data received continuously, where the learning of the network model is monitored over the period of time. The network model receives the real network KPI data as an input and generates synthetic network data as an output. The network model exists within the RNS, being created, continuously monitored, refined, deployed, and terminated based on the need for learning, tuning, and validating.
[0019] Further, the RNS allows the deployed RNA to control the RNS based on at least one of a simulated output for the network model and performance tuning and optimizing the network model for operating on at least one real data set. The RNS is cut over from the real network and the RNAP after the deployed RNA is validated within the RNAP. The deployed RNA supports the RNAP that monitors and controls the RNS through an observable/controllable (O/C) interface. The RNS inserts at least one anomaly in the at least one synthetic network data for finding at least one anomalous event in at least one real-time data stream. The RNS predicts a future network state based on a learned network model for the real network that exists within the RNS.
[0020] These and other aspects herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the invention herein without departing from the spirit thereof.
BRIEF DESCRIPTION OF FIGURES
[0021] The invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the drawings. The invention herein will be better understood from the following description with reference to the drawings, in which:
[0022] FIG. 1 illustrates closed loop automation by xApps/rApps in an open RAN architecture, according to prior art.
[0023] FIG. 2a illustrates a radio network sandbox (RNS) interacting with a radio network application (RNA) hosted in a radio network application platform (RNAP) in a wireless network.
[0024] FIG. 2b illustrates a cutover to a real network use case scenario from the radio network sandbox.
[0025] FIG. 2c illustrates an in-service use case of the radio network sandbox.
[0026] FIGS. 3a-3b are graphs representing synthetically generated PRB Utilization and Accessibility KPIs for a single coverage cell.
[0027] FIGS. 4a-4d are graphs representing synthetically generated PRB Utilization and Accessibility KPIs for a dual cell mode of operation with both coverage and capacity cells ON.
[0028] FIG. 5 illustrates various hardware elements of the radio network sandbox.
[0029] FIG. 6 illustrates an example environment in which a network simulator included in the radio network sandbox is used for artificial intelligence (AI)/machine learning (ML) applications.
[0030] FIG. 7 is a flow chart illustrating a method for configuring and operating the radio network sandbox in the wireless network.
[0031] FIG. 8 is a graph representing a simulation of KPI data coming from the real network in a continuous fashion.
[0032] FIG. 9 is a graph representing a KPI anomaly creation by the radio network sandbox and detection by an rApp running in a Non-RT-RIC.
DETAILED DESCRIPTION
[0033] In the following detailed description of the invention, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be obvious to a person skilled in the art that the invention may be practiced with or without these specific details. In other instances, well known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the invention.
[0034] Furthermore, it will be clear that the invention is not limited to these alternatives only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0035] The accompanying drawings are used to help easily understand various technical features and it should be understood that the alternatives presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0036] The deficiencies in previous techniques (as discussed in background section) can be solved by the proposed radio network sandbox that generates synthetic network data projected from existing network data, which is received from a real network, and where a radio network application deployed in the real network is temporarily switched to the radio network sandbox for an in-service validation on as-needed basis.
[0037] Now simultaneous reference is made to FIG. 2a through FIG. 2c, where FIG. 2a illustrates a radio network sandbox (RNS) (202) interacting with a radio network application (RNA) (206) hosted in a radio network application platform (RNAP) (204) in a wireless network (200); FIG. 2b illustrates a cutover to a real network use case scenario from the radio network sandbox (202); and FIG. 2c illustrates an in-service use case of the radio network sandbox (202).
[0038] Referring to FIG. 2a to FIG. 2c, the RNS (202) interacts with the RNA (206) (such as rApp(s) (106), xApp(s) (110) or the like) hosted in the radio network application platform (RNAP) (204). The RNS (202) allows the RNA (206) to select a Radio Network Abstraction Level (RNAL) within the RNS (202) that is suitable for the RNA (206). The network abstraction level can be chosen based on rApps’ (106)/xApps’ (110) input/output/optimization objective requirements (i.e., an input of the radio network application (206), an output of the radio network application (206), optimization objective requirements), observable metrics required for rApps (106)/xApps (110), a support for controllable signals required from rApps (106)/xApps (110), processing speed considerations for closed loop operation, degree of information accuracy tolerance based on the level of network level abstraction, a need for simulation-based models, learning/prediction-based models, and a need for flat network models versus hierarchical network models.
[0039] The radio network sandbox (202) supports a network model for the radio network application (206) based on the selected RNAL and simulates the network model at the selected RNAL. The radio network sandbox (202) learns and refines the network model based on observed data from a real network (112) and predicts a new network state based on a learned and refined network model. The radio network sandbox (202) presents an observable and controllable interface to both the radio network application (206) and the radio network application platform (204). The observable and controllable interfaces are provided directly to the radio network application (206) or through the radio network application platform (204). The radio network sandbox (202) provides observable outputs based on simulation and/or prediction and provides controllable inputs for configuration of the network model.
[0040] Further, the radio network sandbox (202) creates new synthetic network data and inputs the same to the RNA (206) and the RNAP (204) based on the network model at the selected RNAL to mimic data from the real network (112). The radio network sandbox (202) allows control loop closure by allowing the RNA (206) to take action based on the synthetic network data to dynamically modify the state of the network model.
[0041] The RNS (202) provides a facility to learn a network model based on one or more RNALs (Radio Network Abstraction Levels) associated with the real network (112). As an example, the RNS (202) provides the facility for learning the network model based on the RNAL associated with network KPIs (key performance indicators), where modeling of the network KPI is based on a quasi-stationary random process model for the network KPI within a time window. The network KPIs may include metrics such as the Accessibility, Cell Throughput (downlink/uplink), Cell Utilization, Physical Resource Block (PRB) Utilization, Cell/Node Backhaul Utilization, Number of User Equipments (UEs) per cell, Retainability, etc. The learned model (aka "learned network model") for the network KPIs can be used to predict future values of network KPI data (e.g., RSRP, RSRQ, RSSI, SINR, PCC PHY DL Throughput, the PDCP DL Throughput or the like) in the RNS (202). The learned model can also be used to determine static or dynamic thresholds for the network KPIs based on random process parameters (such as mean and variance) for a KPI in a time window, where crossing of such a threshold can indicate an anomaly. Depending on the KPI, such thresholds can be crossed in either an increasing direction (an upper threshold) and/or in a decreasing direction (a lower threshold) to indicate an anomaly for that KPI. With the knowledge of the parameters for a random process model for the KPI, and the thresholds, the RNS (202) can synthesize a new time series for a network KPI with induced anomalies, where such a synthesized time series can be used to test the RNA (206).
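By way of illustration only, the quasi-stationary windowed model described above can be sketched in Python: per-window mean and standard deviation are learned from a real KPI time series, k-sigma thresholds are derived from those parameters, and a synthetic series is generated with threshold-crossing anomalies induced in selected windows. The function names, window size, and the value of k are hypothetical choices for this sketch, not part of the disclosure.

```python
import random
import statistics

def learn_window_model(kpi_series, window=12):
    """Fit a quasi-stationary model: (mean, std) per time window."""
    params = []
    for start in range(0, len(kpi_series), window):
        chunk = kpi_series[start:start + window]
        if len(chunk) >= 2:
            params.append((statistics.mean(chunk), statistics.stdev(chunk)))
    return params

def synthesize_kpi(params, window=12, k=3.0, anomaly_windows=()):
    """Generate a synthetic KPI series from the learned per-window model;
    windows listed in anomaly_windows are pushed past the k-sigma upper
    threshold to induce detectable anomalies."""
    series, labels = [], []
    for w, (mu, sigma) in enumerate(params):
        upper = mu + k * sigma          # dynamic upper threshold
        lower = mu - k * sigma          # dynamic lower threshold
        for _ in range(window):
            value = random.gauss(mu, sigma)
            if w in anomaly_windows:
                # Force a crossing of the upper threshold.
                value = upper + abs(random.gauss(sigma, sigma))
            series.append(value)
            labels.append(value > upper or value < lower)
    return series, labels
```

A synthesized series with induced anomalies like this could then be fed to the RNA under test to verify its anomaly handling.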
[0042] The learned model can be stored in the RNS (202) to be accessible by the RNA (206) or shared directly by the RNS (202) with the RNA (206). The learned model can then be used by the RNA (206) for intelligent processing of network data inputs to the RNA (206), where such inputs to the RNA (206) can come from the real network (112) or from a simulated network in the RNS (202). The RNA (206) can subsequently determine a possible action for the network based on the processing in the RNA (206). The possible action can be one of generating an alert or a mitigation action or to perform proactive intelligent optimization and adaptation. As an example, the RNA (206) can perform cross-correlation across the network KPIs when one or more anomalies associated with the network KPIs are detected based on the static or dynamic thresholds for the network KPIs, to then determine the mitigation action (if required) for the network (where the network can represent the real network (112) or for a simulated network in the RNS (202)). A dynamic threshold can represent a soft threshold in the system where crossing of such a threshold can indicate a need to reconfigure the network to enable the network to continue functioning well to serve users. Proactive intelligent optimization and adaptation can be performed when such a soft dynamic threshold is crossed that can help to perform actions on the network (such as turning on a capacity-enhancing cell if needed and turning off the capacity-enhancing cell when it is not required, or to adapt system parameters that impact mobility and traffic load balancing in the network) that enables the network to continue to function smoothly. The RNS (202) generates variants of network data inputs to create the one or more anomalies in a network data to test the RNA (206). 
The RNS (202) enables iterative control loop processing to iteratively simulate to generate outputs for the RNA (206) and the RNAP (204) and allows the RNA (206) to control the RNS (202) based on the output from simulation. The RNS (202) enables iterative refinement of the RNA (206) based on such iterative processing.
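A minimal sketch of the cross-correlation step described above, with hypothetical KPI inputs, threshold values, and correlation gate: when the PRB Utilization KPI saturates, the Pearson correlation between PRB Utilization and Accessibility is computed, and a strongly negative correlation suggests a capacity-related mitigation action such as turning on a capacity-enhancing cell.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def choose_action(prb_util, accessibility, prb_upper=0.9, corr_gate=-0.5):
    """If PRB utilization saturates while accessibility degrades, and the
    two KPIs are strongly negatively correlated, flag a capacity issue;
    otherwise just raise an alert for further analysis."""
    saturated = [i for i, v in enumerate(prb_util) if v >= prb_upper]
    if not saturated:
        return "no-action"
    corr = pearson(prb_util, accessibility)
    return "add-capacity-cell" if corr <= corr_gate else "raise-alert"
```

The action labels here are placeholders; a real RNA would map them to concrete configuration controls exposed over the O/C interface.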
[0043] The RNS (202) aggregates observable data before communicating such data to the RNA (206) or to the RNAP (204). The RNA (206)/RNAP (204) sends configuration control messages directly to a network device (e.g., gNB, AMF entity or the like) that is represented in the RNS (202). The RNA (206)/RNAP (204) sends configuration control messages to an aggregate/proxy/common control device in the RNS (202) that controls different network devices in the RNS (202). The observable/controllable data can relate to a hierarchical and disaggregated network components in the real network (112). The network components can be represented in the RNS (202) in either a flat or hierarchical or a hybrid flat-and-hierarchical network model. The RNS (202) manages the inter-conversion of the data between different hierarchical or flat or hybrid flat-and-hierarchical representations of the same network.
[0044] The radio network sandbox (202) synthesizes data needed by one or more applications, including data for network configurations both before and after control action by the rApp(s) (delay-tolerant rApp(s)) (106) and the xApp(s) (delay-sensitive xApp(s)) (110), and transforms the data to be pushed to the delay-tolerant rApp(s) (106) running in a Non-RT-RIC (104) and the delay-sensitive xApp(s) (110) running in a Near-RT-RIC (108) via at least one of an open radio access network (O-RAN) specified interface and a custom interface. The radio network sandbox (202) also supports configuration of a network abstraction by the applications: a user (e.g., an operator or the like) of the radio network sandbox (202) makes changes to network characteristics by collecting control action information from the delay-tolerant rApp(s) (106) running in the Non-RT-RIC (104) and the delay-sensitive xApp(s) (110) running in the Near-RT-RIC (108) via at least one of the O-RAN specified interface and the custom interface, parsing the control action information, and applying the resulting changes to the network abstraction; the user can also modify the configuration of the network abstraction from a graphical user interface (GUI).
[0045] The radio network sandbox (202) provides a simulated radio access network (RAN) capability to develop, demonstrate, and validate the rApp(s) (106) and the xApp(s) (110). The radio network sandbox (202) focuses on data and functionality needed by the rApp(s) (106) and the xApp(s) (110). The radio network sandbox (202) generates data and supports automated control loop action from the rApp(s) (106) and the xApp(s) (110).
[0046] The radio network sandbox (202) performs the data analytics on measured data from at least one of performance management (PM) counters and key performance indicators (KPIs) to create network data models. The radio network sandbox (202) exposes an observable interface for the radio network application (206), where the observable interface is provided through the radio network application platform (204), where the observable data can comprise Fault Management (FM), Performance Management (PM), or Configuration Management (CM) data, including observed counters or computed metrics associated with a cellular network. The radio network sandbox (202) shares at least one of a network model and a network state, with the radio network application (206), where the sharing is provided over an observable interface between the radio network sandbox (202) and the radio network application (206) or the radio network application platform (204).
[0047] The RNS (202) generates at least one synthetic network data projected from existing network data, wherein the existing network data is received from the real network (112) (e.g., RAN, 4G, 5G, or the like). The existing network data is received continuously. The synthetic network data is generated to check and test how an RNA (206) responds to changes in different network conditions. The at least one synthetic network data is generated by observing key performance indicator (KPI) time series information collected from the real network (112) over a period of time, generating at least one synthetic KPI time series information based on the observed KPI time series information and a plurality of what-if scenarios, creating at least one anomaly based on integration with at least one real-time data stream, reflecting significant real-world challenges such as real-time data gathering, filtering and processing, device malfunctioning, and poor calibration, and inserting the at least one anomaly in the at least one generated synthetic KPI time series information. The plurality of what-if scenarios are powered by the at least one real-time data stream and are extracted from real network data sets to create different use cases and test how the radio network application (206) behaves in different scenarios. For example, if the downlink throughput KPI is acceptable, but accessibility and/or retainability KPIs are experiencing degradation, then the root cause can be determined to be a problem with the availability of radio resources in the network. However, if downlink throughput increases beyond its expected dynamic threshold and reaches a flat saturated value, while accessibility and retainability KPIs have good values, then one could explore whether there is an issue with the dynamic availability of backhaul resources in the network.
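The root-cause reasoning in the two what-if scenarios above can be expressed as a small rule table. The sketch below is illustrative only; the dictionary keys, threshold names, and return labels are hypothetical.

```python
def diagnose(kpis, thresholds):
    """Map KPI degradation patterns to a likely root cause, following the
    two scenarios described above. `kpis` holds current KPI values and
    `thresholds` the corresponding limits (hypothetical units)."""
    acc_bad = kpis["accessibility"] < thresholds["accessibility_min"]
    ret_bad = kpis["retainability"] < thresholds["retainability_min"]
    tput_saturated = kpis["dl_tput"] >= thresholds["dl_tput_upper"]
    # Throughput fine but access/retention degrading -> radio resources.
    if not tput_saturated and (acc_bad or ret_bad):
        return "radio-resource-availability"
    # Throughput saturated while access/retention healthy -> backhaul.
    if tput_saturated and not acc_bad and not ret_bad:
        return "check-backhaul-availability"
    return "no-fault-detected"
```

An RNA under test in the sandbox could be exercised against synthetic series that trigger each branch of such a rule table.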
[0048] For example, FIGS. 3a-3b are graphs representing synthetically generated data (PRB Utilization and Accessibility KPIs) for a single coverage cell in a sector, where the PRB Utilization KPI regularly reaches saturation and the Accessibility KPI experiences a significant drop at those times. The data are shown in the graphs with the x-axis corresponding to 5-minute reporting period (ROP) window measurements, for a total of 864 measurements across 3 consecutive days. The y-axis corresponds to the PRB Utilization KPI and the Accessibility KPI in the two graphs (FIG. 3a and FIG. 3b). Such data can indicate to an rApp (106) that there is a capacity issue in the network with a single cell mode of operation in the sector.
[0049] For example, FIGS. 4a-4d are graphs representing synthetically generated PRB Utilization and Accessibility KPIs for a dual cell mode of operation with both coverage and capacity cells ON. If the network model is configured for simulation with two cells turned on (i.e., coverage and capacity-enhancing cells) to provide coverage in the same region, then one can study the improvement in the network KPIs for such a network configuration. FIGS. 4a-4d represent the KPI metrics for PRB Utilization and Accessibility for both cells in the dual cell mode of operation for the same duration of operation across 3 days. Good values for the Accessibility metric in both the cells are observed, and a non-saturated value of PRB utilization in both the cells is also observed with a fair, equitable distribution of network resources across both cells, indicating that the wireless network (200) is performing well and serving its users well.
[0050] Based on such data provided by the RNS (202), the RNA (206) (rApps (106) or xApps (110)) is designed to learn the behavior of the wireless network (200) for different configurations of the wireless network (200), to help in configuring the wireless network (200) well, at different times. For example, an energy savings rApp (106) using the iterative refinement learning technique will utilize such data patterns (as shown in FIGS. 4a-4d) to iteratively evaluate the network performance and energy usage from closed loop feedback based on various configurations of the wireless network (200). The rApp (106) can thereby learn to configure the wireless network (200) into a dual cell mode of operation during times when the PRB utilization increases beyond a certain threshold, and it can also learn to turn off the capacity cell, switching to a single cell mode of operation when the PRB utilization falls below a threshold, thus saving energy in the wireless network (200).
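The energy-saving policy described above admits a compact sketch: a hysteresis rule on PRB utilization that switches between single and dual cell modes. The threshold values and function name below are hypothetical; a deployed rApp would learn suitable thresholds from closed-loop feedback rather than hard-code them.

```python
def energy_saving_step(mode, prb_util, on_threshold=0.85, off_threshold=0.55):
    """One closed-loop step of an energy-saving policy: turn the capacity
    cell on (dual mode) when the coverage cell's PRB utilization exceeds
    on_threshold, and back off (single mode) once it falls below
    off_threshold. The gap between the thresholds provides hysteresis,
    preventing rapid toggling of the capacity cell."""
    if mode == "single" and prb_util > on_threshold:
        return "dual"
    if mode == "dual" and prb_util < off_threshold:
        return "single"
    return mode
```

Applying this step per reporting period against the sandbox's synthetic PRB series lets the rApp's mode transitions be validated before any live-network action.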
[0051] The at least one synthetic KPI time series information is analysed for a KPI curve and an abnormal behaviour of KPIs that imply at least one potential fault in the deployed radio network application (206). The at least one potential fault can comprise variations in KPIs such as increasing access latency or downlink/uplink queueing latencies, a network connectivity failure, and/or sharp decreases in accessibility or retainability metrics or delivered downlink/uplink throughput for one or more users in the network.
[0052] Based on the at least one generated synthetic network data, the RNS (202) temporarily switches the deployed RNA (206) between the real network (112) and the RNS (202) for an in-service validation. In other words, the RNA (206) can temporarily be switched to the RNS (202) from the real network (112) to revalidate the RNA (206) in runtime (i.e., radio network application runtime). Alternatively, the deployed RNA (206) is temporarily switched between the real network (112) and the RNS (202) as needed for at least one of in-service validation, training, re-validation, and re-testing. The temporary switching is performed on an as-needed basis.
[0053] Further, the RNS (202) creates the network model based on at least one KPI time series of real data received from the real network (112) over the period of time, where the at least one received KPI time series is observed and synthesized for creating the network model, wherein the network model generates the at least one synthetic network data projected from the existing network data and provides temporary switching of the deployed RNA (206) between the real network (112) and the RNS (202) for the in-service validation on an as-needed basis. Further, the RNS (202) continuously refines the network model. The network model is refined by continuous guidance and performance feedback received from the deployed RNA (206) and the RNAP (204), wherein the network model is executed in the RNS (202). The learning of the network model is done based on the real network KPI data received continuously, wherein the learning of the network model is monitored over the period of time. The network model receives real network KPI data as an input and generates the at least one synthetic network data as an output. The network model exists within the RNS (202), being created, continuously monitored, refined, deployed, and terminated based on the need for learning, tuning, and validating. The learned network model can be refined periodically based on new real-time data, where the learned network model can also be weighted to reflect the behavior associated with more recent data if desired. Since different AI/ML models may be required for different use cases, and new use cases may continuously need to be supported, transfer learning could also be utilized to accelerate the generation of an AI/ML-based learned network model for the new use cases.
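One simple way to realize the recency weighting mentioned above is an exponentially weighted update of the learned (mean, variance) parameters for a KPI window. The sketch below is illustrative; the forgetting factor alpha is a hypothetical tuning parameter, not a value given in the disclosure.

```python
def refine_model(model, new_window, alpha=0.3):
    """Exponentially weighted update of a learned (mean, variance) pair so
    that more recent data carries more weight. alpha is the forgetting
    factor: alpha=0 keeps the old model, alpha=1 discards all history."""
    n = len(new_window)
    mu_new = sum(new_window) / n
    var_new = sum((v - mu_new) ** 2 for v in new_window) / n
    mu, var = model
    return ((1 - alpha) * mu + alpha * mu_new,
            (1 - alpha) * var + alpha * var_new)
```

Calling this on every new reporting window gives the continuous refinement loop described above, with older network behavior gradually forgotten.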
[0054] Further, the RNS (202) allows the deployed RNA (206) to control the RNS (202) based on at least one of a simulated output for the network model and performance tuning and optimizing of the network model for operating at least one real data set. The RNS (202) is cutover from the real network (112) and the RNAP (204) after the deployed RNA (206) is validated within the RNAP (204). The deployed RNA (206) supports the RNAP (204) that monitors and controls the RNS (202) through an observable/controllable (O/C) interface. The RNS (202) inserts at least one anomaly in the at least one synthetic network data for finding at least one anomalous event in at least one real time data stream, wherein the at least one anomalous event comprises at least one of an unexpected event and a rare event. The unexpected event can be, for example, but is not limited to, an extreme weather event, critical services for public safety, or the like, and the rare event can be, for example, but is not limited to, an unpredicted fault/service failure of a network entity/module or a mission critical service event (fire, earthquake, or the like). The RNS (202) predicts an expected future network state (behavior), such as to determine dynamic thresholds for the network KPIs that could be utilized to perform KPI anomaly analysis in rApps (106), based on the learned model for the real network (112) that exists within the RNS (202). The future network state (behavior) is compared with live (real-time) network data to determine any problems in the network dynamically.
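As an illustrative sketch of the anomaly insertion described above (the function name, the drop model, and the parameter values are assumptions for illustration only, not defined by the specification), a rare-event dip can be injected into a stretch of a synthetic KPI series:

```python
import random

def insert_kpi_anomaly(synthetic_kpi, start, duration, drop_fraction=0.5, seed=0):
    """Return a copy of a synthetic KPI series with a rare-event anomaly injected:
    the KPI drops sharply for `duration` samples beginning at index `start`."""
    rng = random.Random(seed)
    out = list(synthetic_kpi)
    for t in range(start, min(start + duration, len(out))):
        # Scale the KPI down with small jitter, imitating e.g. an accessibility
        # dip caused by an unexpected or rare event.
        out[t] *= (1.0 - drop_fraction) * (1.0 + rng.uniform(-0.05, 0.05))
    return out

clean = [95.0] * 100  # stand-in synthetic accessibility KPI (%)
corrupted = insert_kpi_anomaly(clean, start=40, duration=10)
```

The corrupted series can then be fed to a deployed RNA to verify that it detects and reacts to the anomalous event without touching the real network.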
[0055] Referring to FIG. 2b, the RNS (202) is developed with the real network information. The RNS (202) is cutover from the real network (112) and the RNAP (204) after the RNA (206) existing within the RNAP (204) is validated.
[0056] Referring to FIG. 2c, the RNS (202) is developed with the real network information. The RNA (206) existing within the RNAP (204) is switched as needed to the RNS (202) for in-service re-validation and re-testing.
[0057] FIG. 5 illustrates various hardware elements of the radio network sandbox (RNS) (202). The RNS (202) may include a processor (502), a memory (504), a network simulator (506) and a communicator (not shown). The processor (502) is configured to execute instructions stored in the memory (504) and to perform various processes related to the present disclosure. The communicator is configured for communicating internally between internal hardware components and with external devices via one or more networks. The memory (504) is configured to store instructions to be executed by the processor (502).
[0058] The network simulator (506) generates the at least one synthetic network data projected from the existing network data, wherein the existing network data is received from the real network (112). The existing network data is received continuously. Based on the at least one generated synthetic network data, the network simulator (506) temporarily switches the deployed radio network application (RNA) (206) between the real network (112) and the RNS (202) for the in-service validation.
[0059] Further, the network simulator (506) creates the network model based on the at least one KPI time series of real data received from the real network (112) over the period of time, wherein the at least one received KPI time series is observed and synthesized for creating the network model. The network model generates the at least one synthetic network data projected from the existing network data and provides temporary switching of the deployed RNA (206) between the real network (112) and the network simulator (506) for the in-service validation on an as-needed basis. Further, the network simulator (506) continuously refines the network model. The learning of the network model is performed based on the network KPI data received continuously, wherein the learning of the network model is monitored over the period of time. The network model receives the real network KPI data as the input and generates the synthetic network data as the output. The network model exists within the network simulator (506), being created, continuously monitored, refined, deployed, and terminated based on the need for learning, tuning, and validating.
[0060] Further, the network simulator (506) allows the deployed RNA (206) to control the network simulator (506) based on at least one of a simulated output for the network model and performance tuning and optimizing the network model for operating the at least one real data set. The network simulator (506) is cutover from the real network (112) and the RNAP (204) after the deployed RNA (206) is validated within the RNAP (204). The deployed RNA (206) supports the RNAP (204) that monitors and controls the RNS (202) through the observable/controllable (O/C) interface. The network simulator (506) inserts at least one anomaly in the at least one synthetic network data for finding at least one anomalous event in at least one real time data stream, wherein the at least one anomalous event comprises at least one of an unexpected event and a rare event. The network simulator (506) predicts the future network state (behavior) based on the learned model for the real network (112) that exists within the network simulator (506).
[0061] FIG. 6 illustrates an example environment (600) in which the network simulator (506) included in the radio network sandbox (RNS) (202) is used for artificial intelligence (AI)/machine learning (ML) applications. The environment (600) enables controlling of the RNS (202) by at least one of a network modeling operation, simulation operation, data storing operation, and data generation operation by utilizing a data collection unit (602), a data preparation unit (604), an AI/ML training unit (606), an AI/ML model management unit (608), an AI/ML inference unit (610), an AI/ML continuous operation unit (612) and an AI/ML assisted solution unit (614). The data preparation unit (604) and the AI/ML inference unit (610) are connected with the RNS (202). The operations and functions of the RNS (202) are already explained in FIG. 2a to FIG. 2c.
[0062] The data collection unit (602) collects data from various entities (e.g., Open Central Unit (O-CU), Open Distributed Unit (O-DU), Non-Real Time RAN Intelligent Controller (Non-RT-RIC) (104), Near-Real Time RAN Intelligent Controller (Near-RT-RIC) (108), or the like) over an O1 interface, an A1 interface, and an E2 interface, and stores the collected data in large repositories (e.g., data lake centralized repositories or the like). The data can be extracted upon a request. Further, the data collection unit (602) shares the collected data with the data preparation unit (604), and accordingly the data preparation unit (604) generates AI/ML training data from the received data. The data preparation unit (604) feeds the AI/ML training data to the AI/ML training unit (606). The AI/ML training unit (606) handles the AI/ML training data associated with a training procedure corresponding to design-time selection and training of an ML model/AI model in relation to a specific AI/ML assisted solution (specific use case) to be executed. An AI/ML designer may select and onboard the AI/ML model and relevant metadata into a Service Management and Orchestration (SMO). Utilizing the AI/ML training data, an AI/ML training host initiates the model training. Once the model is trained and validated, it is published back into an SMO catalogue.
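The collection, preparation, training, and publication flow above can be sketched as a toy pipeline. This is a sketch under stated assumptions: the record layout, the min-max normalization, the mean-only "model", and the `ModelCatalogue` class are hypothetical stand-ins for illustration, not O-RAN-defined interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCatalogue:
    """Stand-in for the SMO catalogue where trained models are published."""
    models: dict = field(default_factory=dict)

    def publish(self, name, model):
        self.models[name] = model

def collect(raw_stream):
    """Data collection: gather KPI records from the sources (here, a list)."""
    return [r for r in raw_stream]

def prepare(records):
    """Data preparation: keep complete records and min-max normalize the KPI."""
    values = [r["kpi"] for r in records if r.get("kpi") is not None]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

def train(samples):
    """Training: a trivial 'model' that only learns the mean of prepared data."""
    return {"mean": sum(samples) / len(samples)}

catalogue = ModelCatalogue()
data = collect([{"kpi": 10.0}, {"kpi": None}, {"kpi": 30.0}, {"kpi": 20.0}])
model = train(prepare(data))
catalogue.publish("kpi-baseline-v1", model)  # published back into the catalogue
```

A real training host would replace the trivial model with an actual AI/ML model and validate it before publication, as described in paragraph [0062].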
[0063] The AI/ML model management unit (608) and the AI/ML inference unit (610) manage the AI/ML (or ML/AI) model. Once the AI/ML model is deployed and activated, AI/ML online data is used for inference in AI/ML assisted solutions, which includes 3rd Generation Partnership Project (3GPP) specific events/counters over the O1/E2 interface, non-3GPP specific events/counters (across all different Managed Elements) over the O1/E2 interface, and enrichment information from the Non-RT-RIC (104) over the A1 interface. In certain scenarios involving severe degradation in the AI/ML model performance, a simple model update and redeployment may not be a suitable option. In order to avoid a large impact on the network, a malfunctioning AI/ML model (or models that are degrading) needs to be terminated using the AI/ML model management unit (608) and the AI/ML inference unit (610).
[0064] Based on the feedback and data received from various AI/ML inference hosts, the AI/ML continuous operation unit (612) can inform the AI/ML designer that an update is required to the current model. The AI/ML designer will initiate the model selection and training step, but with the existing trained model. Once a new model has been trained, it will be deployed, and the updated model will be used for AI/ML inference. The AI/ML assisted solution unit (614) supports various actions (e.g., configuration management over O1, control action/guidance over E2, and policy over A1 and E2). All actions and the AI/ML performance feedback are passed to the AI/ML continuous operation unit (612).
[0065] FIG. 7 is a flow chart (700) illustrating a method for configuring and operating the radio network sandbox (202) in the wireless network (200).
[0066] At step 702, the method includes receiving existing network data from a real network (112). The existing network data is received continuously. At step 704, the method includes generating the at least one synthetic network data projected from the existing network data. At step 706, the method includes temporarily switching the deployed radio network application (206) between the real network (112) and the RNS (202) for the in-service validation based on the at least one generated synthetic network data. The temporary switching is performed on an as-needed basis.
[0067] The various actions, acts, blocks, steps, or the like in the flow chart (700) may be performed in the order presented, in a different order, or simultaneously. Further, in some implementations, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0068] FIG. 8 is a graph (800) representing a simulation of KPI data coming from the real network (112) in a continuous fashion. The notation “a” of FIG. 8 indicates the observed KPI time series collected from the network over the period of time. The notation “b” of FIG. 8 indicates the synthesized KPI time series (as shown in dark grey color) versus the observed KPI time series (as shown in light grey color). The synthesized data starts from a vertical black line.
[0069] FIG. 9 is a graph (900) representing a KPI anomaly creation by the radio network sandbox (202) and detection of the anomaly by an rApp (106) running in the Non-RT-RIC (104). The notation “a” of FIG. 9 indicates the synthesized KPI time series. The notation “b” of FIG. 9 indicates an accessibility anomaly induced by the radio network sandbox (202). The anomaly is detected and alerted by the rApp (106) through O1 or custom interfaces. The rApp (106) creates suggestions for mitigation.
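A dynamic-threshold check of the kind an rApp (106) might run against the KPI stream is sketched below. The rolling-window statistics, the parameter values, and the sample series are illustrative assumptions, not a prescribed detection algorithm:

```python
import statistics

def detect_kpi_anomalies(kpi_series, window=20, k=3.0):
    """Flag sample indices that fall more than k standard deviations below the
    rolling mean, i.e. a dynamic threshold derived from recent KPI behavior."""
    alerts = []
    for t in range(window, len(kpi_series)):
        recent = kpi_series[t - window:t]
        mean = statistics.fmean(recent)
        std = statistics.pstdev(recent) or 1e-9  # guard against a flat window
        if kpi_series[t] < mean - k * std:
            alerts.append(t)
    return alerts

# Accessibility KPI hovering near 99 % with a sharp induced dip at index 60.
series = [99.0 + 0.1 * ((t % 5) - 2) for t in range(100)]
series[60] = 40.0
alerts = detect_kpi_anomalies(series)
```

On this sample, only the induced dip at index 60 falls below the dynamic threshold; an rApp could then raise an alert and create mitigation suggestions as described above.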
[0070] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0071] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above-described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
[0072] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0073] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0074] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0075] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general-purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0076] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0077] Conditional language used herein, such as, among others, "can," "may," "might," “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain alternatives include, while other alternatives do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more alternatives or that one or more alternatives necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular alternative. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
[0078] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain alternatives require at least one of X, at least one of Y, or at least one of Z to each be present.
[0079] While the detailed description has shown, described, and pointed out novel features as applied to various alternatives, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain alternatives described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
CLAIMS
We Claim:
1. A method for configuring and operating a radio network sandbox (RNS) (202), the method comprising:
generating, by the RNS (202), at least one synthetic network data projected from existing network data, wherein the existing network data is received from a real network (112), wherein the existing network data is received continuously; and
temporarily switching, by the RNS (202), a deployed radio network application (RNA) (206) between the real network (112) and the RNS (202) for an in-service validation based on the at least one generated synthetic network data, wherein the temporarily switching is performed based on an as-needed basis.
2. The method as claimed in claim 1, wherein generating the at least one synthetic network data comprises:
observing key performance indicator (KPI) time series information collected from the real network (112) over a period of time; and
generating at least one synthetic KPI time series information based on the observed KPI time series information and a plurality of what-if scenarios, wherein the plurality of what-if scenarios are powered by at least one real time data stream.
3. The method as claimed in claim 2, further comprising:
creating at least one anomaly based on integration with the at least one real time data stream due to a significant challenge of data gathering, data filtering and data processing in real-time, device malfunctioning and poor calibration; and
inserting the at least one anomaly in the at least one generated synthetic KPI time series information.
4. The method as claimed in claim 2, wherein the at least one synthetic KPI time series information analyses a KPI curve and an abnormal behaviour of KPIs that imply at least one potential fault in the deployed radio network application (RNA) (206), wherein the at least one potential fault can comprise variations in KPIs such as increasing access latency or downlink/uplink queueing latencies, a network connectivity failure, and/or sharp decreases in accessibility or retainability metrics, or delivered downlink/uplink throughput for one or more users in the network.
5. The method as claimed in claim 1, wherein the method comprises:
creating, by the RNS (202), a network model in the RNS (202) based on at least one KPI time series of real data received from the real network (112) over a period of time, wherein the at least one received KPI time series is observed and synthesized for creating the network model, wherein the network model generates the at least one synthetic network data projected from the existing network data and provides temporarily switching of the deployed RNA (206) between the real network (112) and the RNS (202) for the in-service validation based on the as-needed basis.
6. The method as claimed in claim 5, wherein the method comprises continuously refining the network model, wherein the network model is refined by continuous guidance and performance feedback received from the deployed RNA (206) and a Radio Network Application Platform (RNAP) (204), wherein the network model is executed in the RNS (202).
7. The method as claimed in claim 5, wherein the RNS (202) allows the deployed RNA (206) to control the RNS (202) based on at least one of a simulated output for the network model and performance tuning and optimizing the network model for operating at least one real data set.
8. The method as claimed in claim 5, wherein learning of the network model is done based on a real network KPI data received continuously, wherein the learning of the network model is monitored over the period of time.
9. The method as claimed in claim 5, wherein the network model receives a real network KPI data as an input and generates a synthetic network data as an output.
10. The method as claimed in claim 5, wherein the network model exists within the RNS (202) being created, continuously monitored, refined, deployed, and terminated based on the need for learning, tuning, and validating.
11. The method as claimed in claim 1, wherein the RNS (202) is cutover from the real network (112) and an RNAP (204) after the deployed RNA (206) is validated within the RNAP (204).
12. The method as claimed in claim 1, wherein the deployed radio network application (206) is temporarily switched between the real network (112) and the RNS (202) as needed for at least one of in-service validation, training, re-validation, and re-testing.
13. The method as claimed in claim 1, wherein the RNS (202) inserts at least one anomaly in the at least one synthetic network data for finding at least one anomalous event in at least one real time data stream, wherein the at least one anomalous event comprises at least one of an unexpected event and a rare event.
14. The method as claimed in claim 1, wherein the deployed RNA (206) supports a Radio Network Application Platform (RNAP) (204) that monitors and controls the RNS (202) through an observable/controllable (O/C) interface.
15. The method as claimed in claim 1, wherein the RNS (202) predicts a future network state based on a learned network model for the real network (112) that exists within the RNS (202).
16. The method as claimed in claim 1, wherein the method comprises controlling the RNS (202) by at least one of a network modeling operation, simulation operation, data storing operation, and data generation operation.
17. A radio network sandbox (RNS) (202), comprising:
a processor (502);
a memory (504); and
a network simulator (506), coupled with the processor (502) and the memory (504), configured to:
generate at least one synthetic network data projected from existing network data, wherein the existing network data is received from a real network (112), wherein the existing network data is received continuously; and
temporarily switch a deployed radio network application (206) between the real network (112) and the RNS (202) for an in-service validation based on the at least one generated synthetic network data, wherein the temporarily switching is performed based on an as-needed basis.
Dated this 03rd Day of October, 2022
Signature:
Name: Arun Kishore Narasani
Patent Agent: IN/PA 1049
| # | Name | Date |
|---|---|---|
| 1 | 202213056823-STATEMENT OF UNDERTAKING (FORM 3) [03-10-2022(online)].pdf | 2022-10-03 |
| 2 | 202213056823-PROOF OF RIGHT [03-10-2022(online)].pdf | 2022-10-03 |
| 3 | 202213056823-POWER OF AUTHORITY [03-10-2022(online)].pdf | 2022-10-03 |
| 4 | 202213056823-FORM 1 [03-10-2022(online)].pdf | 2022-10-03 |
| 5 | 202213056823-DRAWINGS [03-10-2022(online)].pdf | 2022-10-03 |
| 6 | 202213056823-DECLARATION OF INVENTORSHIP (FORM 5) [03-10-2022(online)].pdf | 2022-10-03 |
| 7 | 202213056823-COMPLETE SPECIFICATION [03-10-2022(online)].pdf | 2022-10-03 |