Abstract: Disclosed is a method (500) for generating a network performance report in a wireless communication network (100). The method (500) includes receiving (502), from a user device (104) through a first database (220), a request for generating the network performance report. The method further includes assigning (504) an instance number to the request. Further, the method includes triggering (506), using one or more processing engines (216), one or more task instances (308) based on the instance number of the request. Furthermore, the method includes fetching (508), from a second database (222), report data based on processing the one or more task instances (308) in parallel. Thereafter, the method includes generating (510) the network performance report based on the report data. Fig. 5
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
SYSTEM AND METHOD FOR GENERATING A NETWORK PERFORMANCE REPORT IN A COMMUNICATION NETWORK
Jio Platforms Limited, an Indian company, having registered address at Office -101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
The following complete specification particularly describes the disclosure and the manner in which it is performed.
TECHNICAL FIELD
[0001] The embodiments of the present disclosure generally relate to the field of wireless communication networks. More particularly, the present disclosure relates to a system and a method for generating a network performance report in the wireless communication network.
BACKGROUND OF THE INVENTION
[0002] The subject matter disclosed in the background section should not be assumed or construed to be prior art merely due to its mention in the background section. Similarly, any problem statement mentioned in the background section or its association with the subject matter of the background section should not be assumed or construed to have been previously recognized in the prior art.
[0003] With the advancement in Fifth Generation (5G) telecommunication systems, the need for monitoring the performance of wireless nodes attached to a network has increased substantially. As a large quantity of network data is collected from the wireless nodes, there is a huge requirement of storing the network data across various databases, and many servers are required to process the data across the various databases. Also, there is a need to convert the network data into network-related reports that are essential for a network management system to assess the quality of the network.
[0004] In the 5G telecommunication systems, the network data may comprise subscriber-specific data and session-specific data. A unified database is required for storing (but not limited to) application, subscription, authentication, service authorization, policy data, session binding, and application state information. Depending on the need of the network management system, various reports may be generated. Availability of the network-related reports in a timely manner is one of the most important challenges faced by the network management system, as these reports are mandatory for assessing the network quality and for taking remedial measures in case of any degradation of the nodes. Also, to improve the efficiency of the network and enrich the customer experience, the timely availability of network-related reports is essential.
[0005] In conventional systems, the network-related reports are generated through a tool hosted on a server. However, the server has limited computational resources. Therefore, the generation of reports may fail at the server when the load on the server increases. Further, the conventional systems do not employ parallel processing to make the generation of reports faster and more reliable.
[0006] In light of the aforementioned challenges and considerations, there is a need for an improved system and method for generation of reports in the wireless communication network.
SUMMARY
[0007] The following embodiments present a simplified summary in order to provide a basic understanding of some aspects of the disclosed invention. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0008] In an embodiment, a method for generating a network performance report in a wireless communication network is disclosed. The method includes receiving, by a reception module from a user device through a first database, a request for generating the network performance report. The method further includes assigning, by an assigning module, an instance number to the request. Further, the method includes triggering, by a processing module using one or more processing engines, one or more task instances based on the instance number of the request. Furthermore, the method includes fetching, by the processing module from a second database, report data based on processing the one or more task instances in parallel. Thereafter, the method includes generating, by a generation module, the network performance report based on the report data.
[0009] In some aspects of the present disclosure, each task instance of the one or more task instances comprises a task instance number.
[0010] In some aspects of the present disclosure, the method further includes querying, by a querying module using the task instance number, the second database during run-time to process the request using the one or more processing engines.
[0011] In some aspects of the present disclosure, the method further includes fetching, by the processing module, a set of network performance reports based on the query, wherein the set of network performance reports are associated with the corresponding task instance number and a report instance number.
[0012] In some aspects of the present disclosure, the network performance report is generated by fetching the report data from the set of network performance reports in a sequential order.
[0013] In some aspects of the present disclosure, the method further includes assigning, by the assigning module, the report instance number to the network performance report.
[0014] In some aspects of the present disclosure, the report instance number is assigned in a round-robin manner.
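The round-robin assignment of report instance numbers described above may be sketched as follows. This is an illustrative Python sketch only; the class name, the `pool_size` parameter, and the method names are hypothetical and are not prescribed by the present disclosure.

```python
from itertools import count

class ReportInstanceAssigner:
    """Illustrative sketch: assigns report instance numbers in a
    round-robin manner over a fixed pool of instance slots."""

    def __init__(self, pool_size):
        self.pool_size = pool_size
        self._counter = count()  # monotonically increasing request counter

    def next_instance(self):
        # Cycle through 0 .. pool_size-1 repeatedly (round-robin).
        return next(self._counter) % self.pool_size

assigner = ReportInstanceAssigner(pool_size=3)
numbers = [assigner.next_instance() for _ in range(7)]
print(numbers)  # → [0, 1, 2, 0, 1, 2, 0]
```

The modulo over a running counter guarantees that consecutive reports are spread evenly across the available instance slots.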
[0015] In some aspects of the present disclosure, the method further includes transmitting, by a transmission module, the network performance report to the user device.
[0016] In some aspects of the present disclosure, the one or more processing engines are scheduled to perform the one or more task instances in parallel.
[0017] In another embodiment, a system for generating a network performance report in a wireless communication network is disclosed. The system includes a reception module configured to receive, from a user device through a first database, a request for generating the network performance report. The system further includes an assigning module configured to assign an instance number to the request. Further, the system includes a processing module configured to trigger, using one or more processing engines, one or more task instances based on the instance number of the request. The processing module is further configured to fetch, from a second database, report data based on processing the one or more task instances in parallel. Furthermore, the system includes a generation module configured to generate the network performance report based on the report data.
BRIEF DESCRIPTION OF DRAWINGS
[0018] Various embodiments disclosed herein will become better understood from the following detailed description when read with the accompanying drawings. The accompanying drawings constitute a part of the present disclosure and illustrate certain non-limiting embodiments of inventive concepts disclosed herein. Further, components and elements shown in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. For the purpose of consistency and ease of understanding, similar components and elements are annotated by reference numerals in the exemplary drawings.
[0019] FIG. 1 illustrates a diagram depicting an environment of a wireless communication network, in accordance with an embodiment of the present disclosure.
[0020] FIG. 2 illustrates a block diagram of a system for generating a network performance report in the wireless communication network, in accordance with an embodiment of the present disclosure.
[0021] FIG. 3 illustrates a functional block diagram of one or more modules of the system, in accordance with an embodiment of the present disclosure.
[0022] FIG. 4 illustrates a block diagram depicting communication between an external database, a plurality of processing engines and a server database, in accordance with an embodiment of the present disclosure.
[0023] FIG. 5 illustrates a process flow diagram depicting a method for generating network performance reports in the wireless communication system, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Inventive concepts of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of one or more embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Further, the one or more embodiments disclosed herein are provided to describe the inventive concept thoroughly and completely, and to fully convey the scope of each of the present inventive concepts to those skilled in the art. Furthermore, it should be noted that the embodiments disclosed herein are not mutually exclusive concepts. Accordingly, one or more components from one embodiment may be tacitly assumed to be present or used in any other embodiment.
[0025] The following description presents various embodiments of the present disclosure. The embodiments disclosed herein are presented as teaching examples and are not to be construed as limiting the scope of the present disclosure. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified, omitted, or expanded upon without departing from the scope of the present disclosure.
[0026] The following description contains specific information pertaining to embodiments in the present disclosure. The detailed description uses the phrases “in some embodiments” which may each refer to one or more or all of the same or different embodiments. The term “some” as used herein is defined as “one, or more than one, or all.” Accordingly, the terms “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” In view of the same, the terms, for example, “in an embodiment” refers to one embodiment and the term, for example, “in one or more embodiments” refers to “at least one embodiment, or more than one embodiment, or all embodiments.”
[0027] The term “comprising,” when utilized, means “including, but not necessarily limited to;” it specifically indicates open-ended inclusion of the one or more listed features, elements, or combinations thereof, unless otherwise stated with limiting language. Furthermore, to the extent that the terms “includes,” “has,” “have,” “contains,” and other similar words are used in the detailed description, such terms are intended to be inclusive in a manner similar to the term “comprising.”
[0028] In the following description, for the purposes of explanation, various specific details are set forth to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features.
[0029] The description provided herein discloses exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the present disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing any of the exemplary embodiments. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it may be understood by one of ordinary skill in the art that the embodiments disclosed herein may be practiced without these specific details.
[0030] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein in the description, the singular forms "a", "an", and "the" include plural forms unless the context of the disclosure indicates otherwise.
[0031] The terminology and structure employed herein are for describing, teaching, and illuminating some embodiments and their specific features and elements and do not limit, restrict, or reduce the scope of the present disclosure. Accordingly, unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[0032] The various aspects including the example aspects are now described more fully with reference to the accompanying drawings, in which the various aspects of the disclosure are shown. The disclosure may, however, be embodied in different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure is thorough and complete, and fully conveys the scope of the disclosure to those skilled in the art. In the drawings, the sizes of components may be exaggerated for clarity.
[0033] In the present disclosure, various embodiments are described using terms such as extensible radio access network (xRAN) and open radio access network (O-RAN) that are commonly used in communication standards (e.g., those of the 3rd Generation Partnership Project (3GPP)), but these are merely examples for description. Various embodiments of the disclosure may also be easily modified and applied to other communication systems.
[0034] Various aspects of the present disclosure provide a system and a method for generating a network performance report in a wireless communication network.
[0035] In another aspect of the present disclosure, the system and the method generate multiple network performance reports in parallel to ensure timely availability of reports to a network management system.
[0036] In another aspect of the present disclosure, the system and the method enable the network management system to analyze and perform a quick corrective action on poor performing network nodes to enrich customer experience.
[0037] Several key terms used in the description play pivotal roles in facilitating the system functionality. In order to facilitate an understanding of the description, the key terms are defined below.
[0038] A “Network Management System (NMS)” in the present disclosure may represent a system that enables operators to monitor and configure communication networks. The NMS identifies, configures, monitors, updates, and troubleshoots network devices in the communication network.
[0039] The term “Reference Signal Received Power (RSRP)” in the present disclosure may represent a linear average of reference signal power (in Watts) in resource elements that carry cell-specific reference signals within considered measurement frequency bandwidth.
[0040] The term “Received Signal Strength Indicator (RSSI)” in the present disclosure may be a measurement of total received power observed by the UE over a specific bandwidth. The measurement includes the power of a desired signal, interference, and noise. RSSI is used as an indicator of signal strength in conjunction with performance metrics like RSRP and Reference Signal Receive Quality (RSRQ).
[0041] The “RSRQ” in the present disclosure may be a quality metric represented as a ratio of the RSRP to the total RSSI in a measured bandwidth. In particular, the RSRQ indicates a quality of the signal relative to interference and noise.
[0042] The term “Signal-to-Interference-plus-Noise Ratio (SINR)” in the present disclosure may be a ratio of the signal power to the sum of interference and noise power, determining the minimum required value for successful packet reception in the communication networks.
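For illustration only, the RSRQ and SINR defined above may be expressed as follows, following the common 3GPP formulations; the symbols below are introduced here for explanation and do not appear in the present disclosure:

```latex
\mathrm{RSRQ} = N_{\mathrm{RB}} \cdot \frac{\mathrm{RSRP}}{\mathrm{RSSI}},
\qquad
\mathrm{SINR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{interference}} + P_{\mathrm{noise}}}
```

where \(N_{\mathrm{RB}}\) is the number of resource blocks over which the RSSI is measured, \(P_{\mathrm{signal}}\) is the desired signal power, and \(P_{\mathrm{interference}}\) and \(P_{\mathrm{noise}}\) are the interference and noise powers, respectively.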
[0043] An ‘Application Programming Interface (API)’ in the present disclosure may be a set of protocols that enable different components to communicate and transfer data. The API facilitates exchange of data, features, and functionalities between the components.
[0044] The present disclosure relates to a system and a method for generating network performance reports in the wireless communication network by generating multiple reports simultaneously using distributed file system and processing engines to ensure timely availability of reports to a network management system. This may further enable the network management system to analyze and perform a quick corrective action on poor performing network nodes to enrich the customer experience.
[0045] An end user intending to analyze the performance of the network may access a database comprising the network performance reports. The network performance reports may comprise speed, bandwidth usage, latency, packet loss, and other network Key Performance Indicators (KPIs) such as the RSRP, the RSSI, the RSRQ, and the SINR. The end user may request a network performance report to monitor the performance of nodes attached to the network. The end user may place an API call to a front-end server to retrieve network information. The front-end server may transmit the request to a processing engine to obtain the network performance reports from the database or the distributed file system. This may cause the processing engine to trigger various task instances to access the database or the distributed file system to determine where the requested reports are stored. The processing engine may then provide the requested reports to the front-end server. The front-end server may fulfill the end-user request by providing the requested reports.
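The flow above can be sketched in Python as follows. This is a minimal illustrative sketch, not the claimed implementation: the function names, the fixed task count, and the stub data fetch are assumptions introduced here for explanation.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_report_data(task_instance_number):
    """Stand-in for one task instance querying the database or the
    distributed file system for its slice of the report data."""
    return {"task": task_instance_number, "kpis": ["RSRP", "RSSI", "RSRQ", "SINR"]}

def handle_report_request(request_instance_number, num_tasks=4):
    # The processing engine triggers one task instance per data slice,
    # runs them in parallel, and the results are then merged in order.
    with ThreadPoolExecutor(max_workers=num_tasks) as pool:
        results = list(pool.map(fetch_report_data, range(num_tasks)))
    return {"request": request_instance_number, "report": results}

report = handle_report_request(request_instance_number=1)
print(len(report["report"]))  # → 4 (one entry per task instance)
```

Because `ThreadPoolExecutor.map` preserves input order, the per-task results can be assembled into the final report sequentially, as described for the report data above.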
[0046] The usage of distributed file systems and processing engines may have multiple advantages in the report generation process, such as timely creation and availability of reports, parallel processing of data across multiple servers, and creation of multiple reports. Also, the scalability of report generation may be adjusted based on the need and the availability of computational resources and memory. Generation of multiple task instances to perform report generation creates multiple reports in parallel. A cluster of processing engines may be employed to store and process data across multiple servers. This may safeguard the data from server failure. Also, real-time processing of data and report generation may be possible by using a cluster of processing engines without affecting the critical functionalities of the network.
[0047] In this manner, the end user may ingest measurements from the network performance reports for various metrics from a variety of sources, compute real-time analytics, and provide measurements to customers, administrators, and other entities to enable a rapid response to any issue in a short amount of time. In addition, the server may forward the network performance report request to multiple processing engines, which in turn trigger multiple task instances. The request from the end user may be divided and placed into logical partitions of the server for distribution to the various partitioned processing engines/modules, so that any failure of a processing engine/module may result in minimal impact to the overall aggregation of the network performance reports. The logical partitions of the server may separate a single physical server into two or more virtual servers, with each virtual server able to run independent applications or workloads. Each logical partition may act as an independent virtual server with its processing engine/module, and can share the memory, processors, databases, and other functions of the physical server system with other logical partitions. As the server may receive multiple requests from multiple end users simultaneously, utilizing the logical partitioning may enable the server to perform multiple tasks simultaneously. This may ensure that the end users may obtain the network performance reports in a huge volume quickly, with a minimal risk of significant data loss.
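The logical partitioning described above can be sketched as a simple routing function. This is an illustrative sketch under stated assumptions: the partition count and the modulo-based routing function are hypothetical, as the present disclosure does not mandate any particular partitioning scheme.

```python
# Spread incoming requests across virtual partitions so that a failing
# engine impacts only the requests routed to its own partition.
NUM_PARTITIONS = 4

def partition_for(request_id):
    # Simple deterministic routing by request identifier.
    return request_id % NUM_PARTITIONS

def route_requests(request_ids):
    partitions = {p: [] for p in range(NUM_PARTITIONS)}
    for rid in request_ids:
        partitions[partition_for(rid)].append(rid)
    return partitions

routing = route_requests(range(10))
print(routing[0])  # → [0, 4, 8]: these requests land on partition 0
```

A failure of, say, partition 0's engine would then affect only the requests in `routing[0]`, leaving the aggregation of the remaining reports intact.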
[0048] FIG. 1 illustrates a diagram depicting an environment of a wireless communication network 100, in accordance with an embodiment of the present disclosure. The wireless communication network 100 includes coverage regions 106-1 to 106-N (hereinafter cumulatively referred to as the coverage region 106). Each coverage region is served by multiple Base Stations (BSs) 102-1 to 102-N (hereinafter cumulatively referred to as the BS 102). The BS 102 serves one or more User Equipments (UEs) 104-1 to 104-N (hereinafter cumulatively referred to as the UE 104) in the coverage region 106. The BSs 102 are connected to a network 108 to provide one or more services to the UE 104. The wireless communication network 100 further includes a server 110 connected to the network 108. The server 110 is configured to execute data processing and data storing operations to generate network performance reports in the wireless communication network 100.
[0049] The BS 102 may be at least one relay, and at least one Distributed Unit (DU). Typically, the BS 102 may be a network infrastructure that provides wireless access to one or more terminals. The BS 102 has coverage defined to be a predetermined geographic area based on the distance over which a signal may be transmitted. The BS 102 may be referred to as, in addition to “base station”, “network nodes”, “access point (AP)”, “evolved NodeB (eNodeB or eNB)”, “5G node (5th generation node)”, “next generation NodeB (gNB)”, “wireless point”, “transmission/reception point (TRP)”, “Radio Access Network (RAN)” or other terms having equivalent technical meanings.
[0050] The UE 104 may be at least one DU, at least one Mobile Termination (MT) unit, or at least one relay. Typically, the term “user equipment” or “UE” can refer to any component such as a “mobile station”, “subscriber station”, “remote terminal”, “wireless terminal”, “receive point”, or “end user device”.
[0051] The network 108 may include suitable logic, circuitry, and interfaces that may be configured to provide several network ports and several communication channels for transmission and reception of data related to operations of various entities of the wireless communication system 100. Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of the communication data. For example, the virtual address may be an Internet Protocol version 4 (IPv4) address (or an IPv6 address), and the physical address may be a Media Access Control (MAC) address. The network 108 may be associated with an application layer for implementation of communication protocols based on one or more communication requests from the various entities of the wireless communication system 100. The communication data may be transmitted or received via the communication protocols. Examples of the communication protocols may include, but are not limited to, Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS) protocol, Common Management Information Protocol (CMIP), Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof. In some aspects of the present disclosure, the communication data may be transmitted or received via at least one communication channel of several communication channels in the network 108. The communication channels may include, but are not limited to, a wireless channel, a wired channel, or a combination thereof.
The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, an optical fiber network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, and a combination thereof. Aspects of the present disclosure are intended to include or otherwise cover any type of communication channel, including known, related art, and/or later developed technologies.
[0052] The server 110 may be a network of computers, a software framework, or a combination thereof, that may provide a generalized approach to create a server implementation. Examples of the server 110 may include, but are not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The server 110 may be realized through various web-based technologies or any web-application framework. In other aspects of the present disclosure, the server 110 may be configured to generate network performance reports.
[0053] FIG. 2 illustrates a block diagram of a system 200 for generating a network performance report in the wireless communication network 100, in accordance with an embodiment of the present disclosure. The system 200 includes the network 108, the BS 102, the UE 104, the server 110, an external database 222, a Network Management System (NMS) 226, and a distributed file system 224.
[0054] The server 110 includes a communication interface 210, a processor 212, a memory 218 coupled to the processor 212, and a server database 220. The processor 212 may control the operation of the server 110. The processor 212 may include a plurality of modules 214 (hereinafter also referred to as the “modules 214”) and a plurality of processing engines 216 (hereinafter also referred to as the “processing engine 216”). The processor 212 may also be referred to as a Central Processing Unit (CPU). The memory 218 may provide instructions and data to the processor 212 for performing functions of the server 110. The memory 218 may include a Random Access Memory (RAM), a Read-Only Memory (ROM) and a portion of the memory 218 may also include Non-Volatile Random Access Memory (NVRAM). The processor 212 may perform logical and arithmetic operations based on instructions stored within the memory 218. The communication interface 210 may allow transmission and reception of data between the server 110 and the network 108. The communication interface 210 may include a transmitter, a receiver, and a single or a plurality of transmit antennas electrically coupled to the transmitter and the receiver.
[0055] The communication interface 210 may be configured to enable the server 110 to communicate with various entities of the system 200 via the network 108. Examples of the communication interface 210 may include, but are not limited to, a modem, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the communication interface 210 may include any device and/or apparatus capable of providing wireless or wired communications between the server 110 and various other entities of the system 200.
[0056] The processing engine 216 may process data retrieved from storage devices, based on pre-defined logic, to produce a result. The processing engine 216 may use data processing pipelines and process the data on frameworks in an optimized way, in a streaming or a batch mode, on-premises or in the cloud. The processing engine 216 may be capable of processing, automating, developing, and iterating a task instructed by the processor 212. The processing engine 216 may execute batch data processing jobs to extract, transform and load data from any source to any destination. The processing engine 216 may create workflows and schedule them to automate data-intensive tasks. The processing engine 216 may develop a code and run any custom script to perform the task assigned by the processor 212.
[0057] In some aspects of the present disclosure, the server 110 may be coupled to the external database 222 that provides data storage space to the server 110. The external database 222 may store information related to configuration parameters, details related to the nodes 102, and other relevant information needed for the operation of the server 110. The external database 222 may be accessed and updated by the server 110 as part of the report generation process. The external database 222 may correspond to a centralized database system configured to store and manage structured data, such as network-related data and configurations. The database 222 may be a relational database organizing related data, such as in a table, or a non-relational database organizing graphical and time-series data.
[0058] The UE 104 may include a processor 202, a memory 204 coupled to the processor 202, a communication interface 206, and a display 208. The processor 202 may control the operation of the UE 104. The processor 202 may also be referred to as the CPU. The memory 204 may provide instructions and data to the processor 202 for performing several functions. The memory 204 may include a Random Access Memory (RAM), a Read-Only Memory (ROM), and a portion of the memory 204 may also include Non-Volatile Random Access Memory (NVRAM). The processor 202 may perform logical and arithmetic operations based on instructions stored within the memory 204. The communication interface 206 may allow transmission and reception of data between the UE 104 and the network 108. The communication interface 206 may include a transmitter, a receiver, and a single or a plurality of transmit antennas electrically coupled to the transmitter and the receiver.
[0059] The UE 104 may further be capable of displaying (or presenting) results determined by the server 110 to a user through a console (not shown) on the UE 104 hosted by the server 110. The console on the UE 104 may be configured as a computer-executable application, to be executed by the UE 104. The console may include suitable logic, instructions, and/or codes for executing various operations and may be controlled by the server 110. The one or more computer executable applications may be stored on the UE 104.
[0060] The distributed file system 224 may be integrated with the server 110 for storing the report generated by the processor 212 and other operational data. The distributed file system 224 may be configured to provide a scalable and fault-tolerant storage system, capable of handling entire operation specific data across distributed clusters of files associated with the server 110.
[0061] The processors 202 and 212 may include one or more general-purpose processors and/or one or more special-purpose processors, a microprocessor, a digital signal processor, an application-specific integrated circuit, a microcontroller, a state machine, or any type of programmable logic array. The processors 202 and 212 may include an intelligent hardware device including a general-purpose processor, such as, for example and without limitation, a Central Processing Unit (CPU), an Application Processor (AP), a dedicated processor, or the like, a graphics-only processing unit such as a Graphics Processing Unit (GPU), a microcontroller, a Field-Programmable Gate Array (FPGA), a programmable logic device, a discrete hardware component, or any combination thereof. The processors 202 and 212 may be configured to execute computer-readable instructions stored in the memories 204 and 218 to cause the server 110 to perform various functions.
[0062] The memories 204 and 218 may further include, but are not limited to, non-transitory machine-readable storage devices such as hard drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), magneto-optical disks, and semiconductor memories, such as ROMs, RAMs, Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or any other type of media/machine-readable medium suitable for storing electronic instructions.
[0063] In addition, the memory may, in some examples, be considered a non-transitory storage medium. The "non-transitory" storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the memory is non-movable. In some examples, the memory may be configured to store larger amounts of information. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache). The memory may be an internal storage unit or an external storage unit of the server, cloud storage, or any other type of external storage.
[0064] Embodiments of the present technology may be described herein with reference to flowchart illustrations of methods and systems according to embodiments of the technology, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of the flowchart, and combinations of blocks (and/or steps) in the flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code. As will be appreciated, any such computer program instructions may be executed by one or more computer processors, including without limitation a general-purpose computer or special purpose computer, or other programmable processing apparatus to perform a group of operations comprising the operations or blocks described in connection with the disclosed methods.
[0065] Further, these computer program instructions, such as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices (for example, the memories 204 and 218) that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s).
[0066] It will further be appreciated that the term “computer program instructions” as used herein refers to one or more instructions that can be executed by the one or more processors (for example, the processors 202 and 212) to perform one or more functions as described herein. The instructions may also be stored remotely such as on a server, or all or a portion of the instructions can be stored locally and remotely.
[0067] Although FIG. 1 and FIG. 2 illustrate one example of the system 100, various changes may be made to FIG. 1 and FIG. 2. For example, the system 100 may include any number of user devices in any suitable arrangement. Further, in another example, the server 110 may include any number of components in addition to the components shown in FIG. 2. Further, various components in FIG. 1 and FIG. 2 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
[0068] FIG. 3 illustrates a functional block diagram 300 of one or more modules of the system 200, in accordance with an embodiment of the present disclosure. The one or more modules 214 of the processor 212 may comprise a reception module 302, an assigning module 304, a processing module 306, a generation module 310, a querying module (not shown) and a transmission module 312.
[0069] In one embodiment, the server 110 may connect to various applications and retrieve data through an Application Programming Interface (API). The server 110 may receive the report request from the UE 104 through the reception module 302.
[0070] The processing module 306 of the processor 212 may be configured to fetch the requested report from the server database 220. When the requested report is not included in the server database 220, the processor 212 may forward the request to the processing engine 216. The processing engine 216 may trigger the processing module 306 to create a plurality of task instances 308-1 to 308-N (collectively referred to as task instance 308).
[0071] The task instance 308 may be a database record to store an execution of a particular task. The task instance 308 may store the requested report from the UE 104. Once the task instance 308 is loaded with the requested report, the processing engine 216 may schedule the task instance 308 to perform multiple tasks. The task instance 308 may be assigned with a task instance number. The querying module may be configured to query, using the task instance number, the external database 222 during run-time to process the request using the processing engine 216.
[0072] The assigning module 304 may assign an instance number to every report in the server database 220. The task instance 308 may use the task instance number to query the server database 220 for any new pending report request. Upon receiving the report request, the task instance 308 may use the instance number to fetch a result set of reports from the distributed file system 224 integrated with the external database 222. The task instance 308 may retrieve the result set of reports matching the task instance number of the task instance 308 and the instance number of the report. The task instance 308 may fetch the requested reports one by one from the result set of reports retrieved from the distributed file system 224. The generation module 310 may generate the report. The report may be stored in the server database 220. The transmission module 312 may transmit the requested report to the UE 104.
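By way of illustration only, the interplay between the instance numbers assigned by the assigning module 304 and the task instance number used for querying may be sketched as follows. The names (ServerDatabase, enqueue, requests_for) and the modulo mapping of requests to task instances are hypothetical assumptions made for this sketch, not part of the disclosed implementation:

```python
import itertools
from dataclasses import dataclass


@dataclass
class ReportRequest:
    """A pending report request stored in the server database (220)."""
    report_name: str
    instance_number: int


class ServerDatabase:
    """Minimal in-memory stand-in for the server database (220)."""

    def __init__(self):
        self.pending = []
        self._counter = itertools.count(1)

    def enqueue(self, report_name):
        # The assigning module gives every stored request an instance number.
        req = ReportRequest(report_name, next(self._counter))
        self.pending.append(req)
        return req

    def requests_for(self, task_instance_number, n_instances):
        # A task instance polls for the requests mapped to its own task
        # instance number (a simple modulo mapping, assumed here).
        return [
            r for r in self.pending
            if r.instance_number % n_instances == task_instance_number % n_instances
        ]
```

For example, with two task instances, a task instance numbered 1 would pick up the first, third, fifth, ... pending requests under this assumed mapping.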
[0073] In some aspects of the present disclosure, the UE 104 may comprise a transmission module 314 and a reception module 316. The transmission module 314 may be configured to transmit a request for generating a network performance report to the server 110. The reception module 316 may be configured to receive the network performance report from the server 110.
[0074] FIG. 4 illustrates a block diagram 400 depicting communication between the external database 222, the plurality of processing engines 216 (or processing engine 216), and the server database 220, in accordance with an embodiment of the present disclosure. The processor 212 may be configured to communicate with the plurality of processing engines 216. The plurality of processing engines 216 may communicate with the server database 220 and the external database 222. The plurality of processing engines 216 may perform one or more functions of the processing module 306.
[0075] The processor 212 may configure the plurality of processing engines 216 to perform multiple tasks by scheduling the plurality of task instances 308 in the plurality of processing engines 216. The task instance 308 in the plurality of processing engines 216 may perform parallel processing and may be capable of fetching multiple sets of data from the server database 220. The plurality of processing engines 216 may include multiple groups of processors which may transmit the multiple sets of data to the external database 222 for storage and processing purposes.
[0076] In a non-limiting example, a user of the UE 104 may intend to analyze the performance of the network 108 by accessing the network performance reports stored in the server database 220. The network performance reports may comprise speed, bandwidth usage, latency, packet loss, and other network KPIs such as the RSRP, the RSRQ, and the SINR of the plurality of nodes 102. The network performance reports may be initially stored either in the external database 222 or in the distributed file system 224. Hence, the processor 212 of the server 110 may transmit a request to fetch the reports from the external database 222 or the distributed file system 224. The processor 212 may take more time to fetch the network performance reports one by one from the external database 222 or the distributed file system 224.
[0077] To overcome the delay in processing the request from the UE 104, the processor 212 of the server 110 may transmit the request to the plurality of processing engines 216 to obtain the network performance reports from the external database 222 or the distributed file system 224. This may cause the processing engine 216 to trigger various task instances 308 to access the external database 222 or the distributed file system 224 in parallel to fetch the network performance reports. The plurality of processing engines 216 may utilize multiple groups of processors in parallel to reduce the delay in fetching the network performance reports.
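As a non-authoritative sketch of this parallel fetching, the following uses a Python thread pool to stand in for the plurality of processing engines 216; fetch_report is a placeholder invented here for the actual query against the external database 222 or the distributed file system 224:

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_report(category):
    # Placeholder for a query against the external database or the
    # distributed file system; returns a stub record for illustration.
    return {"category": category, "rows": f"<data for {category}>"}


def fetch_all_in_parallel(categories, max_workers=4):
    # Each worker plays the role of one task instance; the task
    # instances access the storage in parallel rather than one by one.
    # pool.map preserves the order of the requested categories.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_report, categories))


reports = fetch_all_in_parallel(
    ["speed", "bandwidth usage", "latency", "packet loss"]
)
```

Under this sketch, the elapsed time approaches that of the slowest single fetch rather than the sum of all fetches, which is the stated motivation for triggering the task instances in parallel.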
[0078] FIG. 5 illustrates a process flow diagram depicting a method 500 for generating a network performance report in the wireless communication system, in accordance with an embodiment of the present disclosure. The method 500 comprises a series of operation steps indicated by blocks 502 through 510.
[0079] At block 502, the processor 212 may receive, from a user device through a first database, a request for generating the network performance report.
[0080] In some aspects of the present disclosure, the user device may be the UE 104 and the first database may be the server database 220. The server database 220 may store the request received from the UE 104.
[0081] In a non-limiting example, the processor 212 may receive the request from the UE 104 for obtaining the network performance reports from the server database 220. The network performance reports may comprise the speed, the bandwidth usage, the latency, the packet loss, and the other network KPIs of the plurality of nodes 102. The network performance reports may be stored in the external database 222 and/or the distributed file system 224.
[0082] At block 504, the processor 212 may assign an instance number to the request. At block 506, the processor 212 may trigger, using one or more processing engines 216, one or more task instances 308-1 to 308-N based on the instance number of the request.
[0083] In some aspects of the present disclosure, the processor 212 may assign the instance number to each request received from the UE 104. The server 110 may receive multiple requests from multiple UEs 104-1 to 104-N. The processor 212 may trigger the one or more task instances 308-1 to 308-N using the one or more processing engines 216. In a non-limiting example, the processor 212 may assign the task of fetching the reports related to the speed and the bandwidth usage of the plurality of nodes 102 to the task instance 308-1. The processor 212 may assign the task of fetching the reports related to the latency and the packet loss to the task instance 308-2. The processor 212 may assign the task of fetching the one or more KPIs of the plurality of the nodes 102 to the task instance 308-3.
[0084] In some aspects of the present disclosure, each task instance of the one or more task instances 308-1 to 308-N may comprise a task instance number. The processor 212 may assign the task instance number to each of the one or more task instances 308-1 to 308-N. The task instance number may be assigned based on the instance number of the request. In a non-limiting example, the task instance 308-1 may be assigned the task instance number as request1.instance_number=1, the task instance 308-2 may be assigned the task instance number as request1.instance_number=2, and so on.
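A minimal sketch of the assignment above is given below; make_task_instances and the dictionary layout are illustrative assumptions for this sketch, not the disclosed API. One task instance number is derived per triggered task instance of a given request:

```python
def make_task_instances(request_id, tasks):
    # One task instance per task; the task instance number is derived
    # from the request, e.g. request1.instance_number=1, =2, and so on.
    return [
        {"request_id": request_id, "task": task, "task_instance_number": i}
        for i, task in enumerate(tasks, start=1)
    ]


instances = make_task_instances(
    "request1",
    ["speed and bandwidth usage", "latency and packet loss", "other KPIs"],
)
```

This mirrors the non-limiting example of paragraph [0083], where fetching speed/bandwidth, latency/packet-loss, and KPI reports is split across the task instances 308-1, 308-2, and 308-3.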
[0085] At block 508, the processor 212 may fetch, from a second database, report data based on processing the one or more task instances 308-1 to 308-N in parallel. The processor 212 may trigger the one or more task instances 308-1 to 308-N simultaneously to perform the task of fetching the report from the second database. The one or more task instances 308-1 to 308-N may fetch the reports stored in the second database in parallel. The processing engine 216 may process the one or more task instances 308-1 to 308-N. The processing engine 216 may be at least one partition of a plurality of logical partitions of the processor 212. The processing engine 216 may inherit the capability of the processor 212 to process the one or more task instances 308-1 to 308-N. In a non-limiting example, the processor 212 may process the task instance 308-1, the task instance 308-2, and the task instance 308-3, to fetch the reports related to the speed, the bandwidth usage, the latency, the packet loss, and the other network KPIs of the plurality of nodes 102 stored in the second database.
[0086] In some aspects of the present disclosure, the one or more processing engines 216 may be scheduled to perform the plurality of task instances 308 in parallel. As the one or more processing engines 216 may perform as individual units, the delay in fetching the network performance reports is reduced by processing the one or more task instances 308-1 to 308-N in parallel. In a non-limiting example, the one or more processing engines 216 may be scheduled to handle the requests received from the multiple UEs 104-1 to 104-N in parallel. Each of the one or more processing engines 216 may trigger the plurality of task instances 308 in parallel.
[0087] In some aspects of the present disclosure, the second database may be the external database 222 or the distributed file system 224. The network performance report may be generated by fetching the report data from the set of network performance reports in a sequential order.
[0088] In some aspects of the present disclosure, the processor 212 may query, using the task instance number, the second database during run-time to process the request using the one or more processing engines 216.
[0089] In some aspects of the present disclosure, the processor 212 may fetch a set of network performance reports based on the query. The set of network performance reports may be associated with the corresponding task instance number and a report instance number. The processing engine 216 of the processor 212 may fetch the set of network performance reports from the second database and store the reports in the first database as the report data in the sequential order.
[0090] In some aspects of the present disclosure, the processor 212 may assign the report instance number to the network performance report. The report instance number may be assigned in a round-robin manner. The assigning module 304 of the processor 212 may assign the report instance number in the round-robin manner for providing load balancing. This may give a uniform priority to all the reports. Each task instance 308 may receive the same CPU time and may ensure that every task is executed on time.
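The round-robin assignment described above may be sketched as follows; assign_round_robin and the slot count are assumptions made for this illustration, not the disclosed implementation. Report instance numbers cycle over a fixed pool of slots so that every report, and the task instance serving it, receives uniform priority:

```python
import itertools


def assign_round_robin(reports, n_slots=3):
    # Cycle report instance numbers over a fixed pool of slots so every
    # report receives uniform priority (simple load balancing).
    slots = itertools.cycle(range(1, n_slots + 1))
    return {report: next(slots) for report in reports}


assignment = assign_round_robin(
    ["speed", "bandwidth usage", "latency", "packet loss", "other KPIs"]
)
```

With three slots, the fourth report wraps back to slot 1, so no slot (and hence no task instance) accumulates a disproportionate share of the reports.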
[0091] At block 510, the processor 212 may generate the network performance report based on the report data. In a non-limiting example, the processor 212 may store a consolidated network performance report as the report data in the first database. The consolidated network performance report may comprise the set of network performance reports related to the speed, the bandwidth usage, the latency, the packet loss, and the other network KPIs of the plurality of nodes 102 fetched from the second database and stored in the first database. The processor 212 may assign a report instance number to the consolidated network performance report.
[0092] In some aspects of the present disclosure, the processor 212 may transmit the network performance report to the user device. The processor 212 may fetch the consolidated network performance report from the first database using the report instance number to transmit the report to the UE 104.
[0093] In some aspects of the present disclosure, the processor 212 may query, using the task instance number, the server database 220 to process the request using the processing engine 216. The processor 212 may fetch a set of reports associated with the corresponding task instance number and the report instance number. The processor 212 may generate the network performance report by fetching the report data from the set of reports in a sequential order. The processing engine 216 may be scheduled to perform the plurality of task instances 308 in parallel.
[0094] The processor 212 may trigger the task instance 308 using the task instance number to query the external database 222 to process the request. The processing engine 216 may then provide the requested reports to the server 110. The server 110 may forward the requested reports to the UE 104. The task instance 308 may retrieve the set of reports associated with the corresponding task instance number and the report instance number. The task instance 308 may perform report generation operation by fetching the requested reports one-by-one from the set of reports.
[0095] In addition, the server 110 may forward the network performance report request to the plurality of processing engines 216, which in turn trigger multiple task instances 308. The request from the UE 104 may be divided and placed into logical partitions of the server 110 for distribution to the various partitioned processing engines, so that any failure of a processing engine results in minimal impact to the overall aggregation of the network performance reports. This may ensure that the UE 104 obtains the network performance reports in large volumes quickly, with minimal risk of significant data loss. The UE 104 may ingest measurements from the network performance reports for various metrics from a variety of sources, compute real-time analytics, and provide measurements to customers, administrators, and other entities to enable a rapid response to any issue in a short amount of time.
[0096] In some aspects of the present disclosure, the processor 212 may generate multiple network performance reports in parallel to ensure timely availability of reports to the NMS 226. The NMS 226 may analyze and perform a quick corrective action on poor performing network nodes to enrich the customer experience.
[0097] Referring to the technical abilities and advantageous effects of the present disclosure, the disclosed system and method perform parallel processing using multiple processing engines, thereby making the report generation process faster and more reliable. Further, the disclosed system and method facilitate adjustment of scalability of report generation based on user requirements and availability of computational resources and memory. Also, the disclosed system and method improve the user experience by quickly analyzing the performance of nodes and performing corrective action on the poor performing network nodes.
[0098] Those skilled in the art will appreciate that the methodology described herein in the present disclosure may be carried out in other specific ways than those set forth herein in the above disclosed embodiments without departing from essential characteristics and features of the present disclosure. The above-described embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
[0099] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Any combination of the above features and functionalities may be used in accordance with one or more embodiments.
[0100] In the present disclosure, each of the embodiments has been described with reference to numerous specific details which may vary from embodiment to embodiment. The foregoing description of the specific embodiments disclosed herein may reveal the general nature of the embodiments herein that others may, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications are intended to be comprehended within the meaning of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.
LIST OF REFERENCE NUMERALS
[0101] The following list is provided for convenience and in support of the drawing figures and as part of the text of the specification, which describe innovations by reference to multiple items. Items not listed here may nonetheless be part of a given embodiment. For better legibility of the text, a given reference number is recited near some, but not all, recitations of the referenced item in the text. The same reference number may be used with reference to different examples or different instances of a given item. The list of reference numerals is:
100 - Wireless communication network
102 - Base Station (BS)
102-1 to 102-N - One or more BSs
104 - User Equipment (UE)
104-1 to 104-N -One or more UEs
106-1 to 106-N - Coverage region
108 - Network
110 - Server
200 -Block Diagram of the system for generating a network performance report
202 - Processor of the UE 104
204 - Memory of the UE 104
206 - Communication interface of the UE 104
208 - Display of the UE 104
210 - Communication interface of the server 110
212 - Processor of the server 110
214 - One or more modules of the processor 212
216 - Plurality of processing engines
218 - Memory of the server 110
220 - Server database
222 - External database
224 – Distributed file system
226 - Network Management System (NMS)
300 - Functional block diagram of one or more modules 214
302 - Reception module
304 - Assigning module
306 - Processing module
308-1 to 308-N – One or more task instances
310 - Generation module
312 - Transmission module
400 - Block diagram depicting communication between the external database 222, the plurality of processing engines 216 and the server database 220
500 - Method for generating a network performance report
502-510 - Operation steps of the method 500
CLAIMS:
1. A method (500) for generating a network performance report in a wireless communication network, the method (500) comprising:
receiving (502), by a reception module (302) from a user device (104) through a first database (220), a request for generating the network performance report;
assigning (504), by an assigning module (304), an instance number to the request;
triggering (506), by a processing module (306) using one or more processing engines (216), one or more task instances (308) based on the instance number of the request;
fetching (508), by the processing module (306) from a second database (222), report data based on processing the one or more task instances (308) in parallel; and
generating (510), by a generation module (310), the network performance report based on the report data.
2. The method (500) as claimed in claim 1, wherein each task instance of the one or more task instances (308) comprises a task instance number.
3. The method (500) as claimed in claim 2, further comprising querying, by a querying module using the task instance number, the second database (222) during run-time to process the request using the one or more processing engines (216).
4. The method (500) as claimed in claim 3, further comprising fetching, by the processing module (306), a set of network performance reports based on the query, wherein the set of network performance reports are associated with the corresponding task instance number and a report instance number.
5. The method (500) as claimed in claim 4, wherein the network performance report is generated by fetching the report data from the set of network performance reports in a sequential order.
6. The method (500) as claimed in claim 4, further comprising assigning, by the assigning module (304), the report instance number to the network performance report.
7. The method (500) as claimed in claim 4, wherein the report instance number is assigned in a round-robin manner.
8. The method (500) as claimed in claim 1, further comprising transmitting, by a transmission module (312), the network performance report to the user device (104).
9. The method (500) as claimed in claim 1, wherein the one or more processing engines (216) are scheduled to perform the plurality of task instances in parallel.
10. A system (200) for generating a network performance report in a wireless communication network, the system (200) comprising:
a reception module (302) configured to receive, from a user device (104) through a first database (220), a request for generating the network performance report;
an assigning module (304) configured to assign an instance number to the request;
a processing module (306) configured to:
trigger, using one or more processing engines (216), one or more task instances (308) based on the instance number of the request;
fetch, from a second database (222), report data based on processing the one or more task instances (308) in parallel; and
a generation module (310) configured to generate the network performance report based on the report data.
11. The system (200) as claimed in claim 10, wherein each task instance of the one or more task instances (308) comprises a task instance number.
12. The system (200) as claimed in claim 11, further comprising a querying module configured to query, using the task instance number, the second database (222) during run-time to process the request using the one or more processing engines (216).
13. The system (200) as claimed in claim 12, wherein the processing module (306) is further configured to fetch a set of network performance reports based on the query, wherein the set of network performance reports are associated with the corresponding task instance number and a report instance number.
14. The system (200) as claimed in claim 13, wherein the network performance report is generated by fetching the report data from the set of network performance reports in a sequential order.
15. The system (200) as claimed in claim 13, wherein the assigning module (304) is further configured to assign the report instance number to the network performance report.
16. The system (200) as claimed in claim 13, wherein the report instance number is assigned in a round-robin manner.
17. The system (200) as claimed in claim 10, further comprising a transmission module (312) configured to transmit the network performance report to the user device (104).
18. The system (200) as claimed in claim 10, wherein the one or more processing engines (216) are scheduled to perform the plurality of task instances in parallel.
19. A user device (104), comprising:
a transmission module (314) configured to transmit, to a server (110), a request for generating a network performance report; and
a reception module (316) configured to receive the network performance report based on a report data, wherein the server (110) performs the steps as claimed in claim 1 for generating the network performance report.
| # | Name | Date |
|---|---|---|
| 1 | 202421023389-STATEMENT OF UNDERTAKING (FORM 3) [25-03-2024(online)].pdf | 2024-03-25 |
| 2 | 202421023389-PROVISIONAL SPECIFICATION [25-03-2024(online)].pdf | 2024-03-25 |
| 3 | 202421023389-POWER OF AUTHORITY [25-03-2024(online)].pdf | 2024-03-25 |
| 4 | 202421023389-FORM 1 [25-03-2024(online)].pdf | 2024-03-25 |
| 5 | 202421023389-DRAWINGS [25-03-2024(online)].pdf | 2024-03-25 |
| 6 | 202421023389-DECLARATION OF INVENTORSHIP (FORM 5) [25-03-2024(online)].pdf | 2024-03-25 |
| 7 | 202421023389-FORM-26 [16-04-2024(online)].pdf | 2024-04-16 |
| 8 | 202421023389-Proof of Right [30-07-2024(online)].pdf | 2024-07-30 |
| 9 | 202421023389-FORM 18 [25-02-2025(online)].pdf | 2025-02-25 |
| 10 | 202421023389-DRAWING [25-02-2025(online)].pdf | 2025-02-25 |
| 11 | 202421023389-CORRESPONDENCE-OTHERS [25-02-2025(online)].pdf | 2025-02-25 |
| 12 | 202421023389-COMPLETE SPECIFICATION [25-02-2025(online)].pdf | 2025-02-25 |
| 13 | 202421023389-Request Letter-Correspondence [26-02-2025(online)].pdf | 2025-02-26 |
| 14 | 202421023389-Power of Attorney [26-02-2025(online)].pdf | 2025-02-26 |
| 15 | 202421023389-Form 1 (Submitted on date of filing) [26-02-2025(online)].pdf | 2025-02-26 |
| 16 | 202421023389-Covering Letter [26-02-2025(online)].pdf | 2025-02-26 |
| 17 | 202421023389-ORIGINAL UR 6(1A) FORM 1-030325.pdf | 2025-03-05 |
| 18 | Abstract.jpg | 2025-04-15 |