Abstract: SYSTEM AND METHOD FOR MANAGING NETWORK FUNCTION DATA AT A CENTRALIZED CORE The present disclosure relates to a system (120) and a method (600) for managing network function data. The method (600) includes the step of receiving data from one or more network functions at a Machine Learning (ML) unit (235). The method (600) further includes the step of processing, utilizing the ML unit, the data received from the one or more network functions based on one or more pre-defined policies received at the ML unit, wherein the one or more pre-defined policies are associated with the data. The method (600) further includes the step of transmitting the processed data to the centralized core. Ref. Fig. 2
DESC: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING NETWORK FUNCTION DATA AT A CENTRALIZED CORE
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention generally relates to network function data, and more particularly relates to managing network function data at a centralized core.
BACKGROUND OF THE INVENTION
[0002] In current network architectures, network functions are installed in specific super cores, and the super cores are present at different locations. A probing agent is installed over the network wherever a network function installed in a super core is running. Hence, the data, rules, configurations, or clear codes related to any network function or its instances cannot be accessed or analyzed from one super core at another super core. For example, a probing agent running on a super core located in Mumbai cannot access or analyze the network functions, rules, or configurations related to any network function present in a super core located in Delhi. Thus, accessing or analyzing any rule or configuration related to a network function from another super core located in a different location is difficult. Further, this limitation results in increased bandwidth costs, storage requirements, and the need for extensive resources to segregate and manage data from multiple super cores.
[0003] Therefore, there is a need for a solution that allows for centralized access and analysis of network function data from various super cores.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a system and a method for managing network function data from a centralized core.
[0005] In one aspect of the present invention, a system for managing data of one or more network functions at a centralized core is disclosed. The system includes a receiving unit configured to receive data from one or more network functions at a Machine Learning (ML) unit. The system further includes a processing unit configured to process, utilizing the ML unit, the data received from the one or more network functions based on one or more pre-defined policies received at the ML unit. The one or more pre-defined policies are associated with the data. The system further includes a transmitting unit configured to transmit the processed data to the centralized core.
[0006] In one embodiment, the data of the one or more network functions is stored in a database as metadata.
[0007] In another embodiment, the data includes at least one of Swap Data Repositories (SDR) data, a network function rule, a network function configuration, a network instance rule, a network instance configuration, and clear codes.
[0008] In another embodiment, by storing the data in the database, an enabling unit 230 is configured to enable a centralized policy provisioned mechanism control that facilitates monitoring and controlling the one or more network functions from a single user interface independent of the location of the one or more network functions.
[0009] In another embodiment, the one or more pre-defined policies are related to at least one of segregation, filtering, enrichment, ingestion, and aggregation of the data.
[0010] In another embodiment, the one or more pre-defined policies are remotely applied to the one or more network functions utilizing the ML unit.
[0011] In another embodiment, processing the data includes at least one of aggregation and segregation of the data received from the one or more network functions.
[0012] In another aspect of the present invention, a method for managing data of one or more network functions at a centralized core is disclosed. The method includes the step of receiving data from one or more network functions at a Machine Learning (ML) unit. The method further includes processing, utilizing the ML unit, the data received from the one or more network functions based on one or more pre-defined policies received at the ML unit, wherein the one or more pre-defined policies are associated with the data. The method further includes transmitting the processed data to the centralized core.
[0013] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive the data from one or more network functions at a Machine Learning (ML) unit. The processor is further configured to process, utilizing the ML unit, the data received from the one or more network functions based on one or more pre-defined policies received at the ML unit, wherein the one or more pre-defined policies are associated with the data. The processor is further configured to transmit the processed data to the centralized core.
[0014] In another aspect of the invention, a User Equipment (UE) is provided. The UE comprises one or more primary processors communicatively coupled to one or more processors, the one or more primary processors coupled with a memory, wherein said memory stores instructions which, when executed by the one or more primary processors, cause the UE to transmit one or more pre-defined policies pertaining to one or more network functions to the one or more processors via a Machine Learning (ML) unit.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of a communication system for managing network function data at a centralized core, according to various embodiments of the present invention;
[0018] FIG. 2 is a block diagram of a system for managing the network function data at the centralized core, according to various embodiments of the present invention;
[0019] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2, according to various embodiments of the present invention;
[0020] FIG. 4 is an architecture for managing the network function data at the centralized core which can be implemented in the system of FIG. 2, according to various embodiments of the present invention;
[0021] FIG. 5 is a signal flow diagram for managing the network function data at the centralized core, according to various embodiments of the present invention; and
[0022] FIG. 6 shows a flow diagram of a method for managing the network function data at the centralized core according to various embodiments of the present invention.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the way in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] As per various embodiments depicted, the present invention discloses a system and a method for managing data pertaining to one or more network functions at a centralized core. The present invention addresses the challenges of increased bandwidth costs, storage requirements, and the need for extensive resources to segregate and manage network function data from multiple super cores. The solution facilitates central rule provisioning for global geography. The invention facilitates computation completion within a minimum time interval at a centralized location, and reduces the effort of monitoring and keeping track of each computation, such as, but not limited to, Circle, Host, Instance, and Cluster. The invention further helps in reducing the usage of bandwidth and improves overall network edge node management efficiency.
[0028] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of a communication system for managing the network function 405 (as shown in FIG. 4) data at a centralized core 250 (as shown in FIG. 2), according to one or more embodiments of the present disclosure. The communication system 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120. In an embodiment, the UE 110 is any electronic or electro-mechanical equipment, or a combination of one or more such devices, such as, but not limited to, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0029] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0030] As per the illustrated embodiment, the communication system 100 includes one or more base stations 125. For the purpose of description and explanation, the description will be explained with respect to a first base station 125a, a second base station 125b, and a third base station 125c, and should nowhere be construed as limiting the scope of the present disclosure. For ease of reference, each of the first base station 125a, the second base station 125b, and the third base station 125c, will hereinafter be collectively and individually referred to as the “base station 125”.
[0031] The first base station 125a includes, by way of example but not limitation, a cell site, cell phone tower, or cellular base station. Each of the first base station 125a, the second base station 125b, and the third base station 125c is a cellular-enabled mobile device site where antennas and electronic communications equipment are placed (typically on a radio mast, tower, or other raised structure) to create a cell, or adjacent cells, in the communication network. The structure typically supports an antenna and one or more sets of transmitters/receivers, digital signal processors, control electronics, a GPS receiver for timing, primary and backup electrical power sources, and sheltering.
[0032] The network 105 may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fibre optic network, or some combination thereof.
[0033] The network 105 may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0035] The communication system 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 is associated with an entity that may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, or any other facility that provides service.
[0036] The communication system 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is adapted to be embedded within the server 115 or implemented as an individual entity. However, for the purpose of description, the system 120 is illustrated as remotely coupled with the server 115, without deviating from the scope of the present disclosure.
[0037] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0038] FIG. 2 illustrates an exemplary block diagram of the system 120 for managing network function 405 data at the centralized core 250, according to one or more embodiments of the present disclosure. In one embodiment, the centralized core 250 refers to a fundamental architectural component designed to manage and orchestrate the one or more network functions 405 and services from a centralized location in the network 105.
[0039] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a user interface 215, a centralized core 250, and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0040] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium which may be fetched and executed to display the enriched data to the user via the user interface in order to perform analysis. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0041] In an embodiment, the user interface 215 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120.
[0042] In an embodiment, the database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 220 types are non-limiting and are not necessarily mutually exclusive; e.g., the database can be both commercial and cloud-based, or both relational and open-source.
[0043] In order for the system 120 to manage the data pertaining to the network function 405 at the centralized core 250, the processor 205 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a receiving unit 225, an enabling unit 230, a Machine Learning unit (ML unit) 235, a processing unit 240, and a transmitting unit 245 communicably coupled to each other.
[0044] The receiving unit 225, the enabling unit 230, the ML unit 235, the processing unit 240, and the transmitting unit 245, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0045] The receiving unit 225 of the system 120 is configured to receive the data from one or more network functions 405 (as shown in FIG. 4). The data includes at least one of Swap Data Repositories (SDR) data, a network function rule, a network function configuration, a network instance rule, a network instance configuration, and clear codes. The SDR data, in the context of network operations, denotes specialized repositories designed to efficiently store, retrieve, and manage operational data integral to the functionality of the network 105.
[0046] In an embodiment, the SDR data refers to centralized data storage systems specifically engineered to meet the exacting specifications of the network 105. The SDRs function as fundamental hubs for efficiently managing and exchanging data critical to various aspects of the operations of the network 105, such as, but not limited to, network slicing, real-time data analytics, and service orchestration.
[0047] In an embodiment, the data received from the one or more network functions 405 is stored in the database 220 as metadata. In an embodiment, by storing the data in the database, an enabling unit 230 is configured to enable a centralized policy provisioned mechanism control. On enabling the centralized policy provisioned mechanism control, the enabling unit 230 is configured to facilitate monitoring and controlling of the one or more network functions 405 from a single user interface independent of the location of the one or more network functions 405. In one embodiment, the metadata refers to structured or unstructured data that serves to provide contextual information about other data elements.
[0048] In one embodiment, each of the one or more network functions 405 refers to the various applications and services that are deployed within the network 105. The various applications and services may include at least one of, but not limited to, baseband processing, radio resource management, packet core functions, network slicing orchestration and management.
[0049] Upon receiving the data from the one or more network functions 405, the receiving unit 225 transmits the received data to the ML unit 235. The ML unit 235 is configured to apply machine learning techniques to analyze the collected data, extract patterns, make predictions, or optimize network operations. The machine learning techniques include at least one of, but not limited to, supervised learning, unsupervised learning, reinforcement learning, deep learning, and Natural Language Processing (NLP). The applications enabled by the machine learning techniques include at least one of, but not limited to, anomaly detection, predictive maintenance, network slicing optimization, traffic forecasting, Radio Resource Management (RRM), service orchestration, and Quality of Experience (QoE) enhancement.
[0050] In one embodiment, the ML unit 235 upon receiving the data from the one or more network functions 405 processes the received data using machine learning algorithms. In order to process the data, the ML unit 235 is configured to perform tasks such as data preprocessing, feature extraction, model training, and inference. By doing so, the ML unit 235 learns patterns, correlations, or predictions based on the data received from the one or more network functions 405.
[0051] Thereafter, the processing unit 240 is configured to process, utilizing the ML unit 235, the data received from the one or more network functions 405 based on one or more pre-defined policies received at the ML unit 235. The one or more pre-defined policies are associated with the data. In an embodiment, processing the data includes at least one of aggregation and segregation of the data received from the one or more network functions. The one or more pre-defined policies are related to at least one of segregation, filtering, enrichment, ingestion, and aggregation of the network function data. The one or more pre-defined policies are remotely applied to the at least one network function 405 utilizing the ML unit 235.
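By way of illustration only, the policy-driven processing described above (filtering, enrichment, segregation, and aggregation applied to records received from network functions) may be sketched as follows; the policy rules and record fields are hypothetical, not part of the specification:

```python
# Illustrative sketch: applying pre-defined policies to records received
# from network functions before transmission to the centralized core.
records = [
    {"nf": "amf-01", "site": "mumbai", "bytes": 120},
    {"nf": "smf-02", "site": "delhi", "bytes": 300},
    {"nf": "amf-03", "site": "delhi", "bytes": 80},
]

# Hypothetical pre-defined policies associated with the data.
policies = {
    "filter": lambda r: r["bytes"] >= 100,       # drop small records
    "enrich": lambda r: {**r, "region": "IN"},   # add contextual field
}

# Filtering and enrichment applied before forwarding.
processed = [policies["enrich"](r) for r in records if policies["filter"](r)]

# Segregation: group records by site; aggregation: total bytes per site.
segregated, aggregated = {}, {}
for r in processed:
    segregated.setdefault(r["site"], []).append(r)
    aggregated[r["site"]] = aggregated.get(r["site"], 0) + r["bytes"]

print(aggregated)   # e.g. {'mumbai': 120, 'delhi': 300}
```

Only the filtered, enriched, and aggregated result would then be transmitted onward, which is consistent with the bandwidth reduction the disclosure attributes to edge-side processing.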
[0052] Upon processing the data utilizing the one or more pre-defined policies, the transmitting unit 245 is configured to transmit the processed data to the centralized core 250. In one embodiment, the centralized core 250 refers to a fundamental architectural component designed to manage and orchestrate the one or more network functions 405 and services from a centralized location in the network 105. The centralized core 250 utilizes the processed data to enhance operational efficiency, ensure service quality, and support the evolving demands of network operations in the network 105. In an embodiment, the centralized core 250 aggregates the data of the one or more network functions 405 from multiple super cores into a central location.
[0053] FIG. 3 describes a preferred embodiment of the system 120 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0054] As mentioned earlier in FIG. 1, the UE 110 may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 110a without deviating from the scope of the present disclosure and limiting the scope of the present disclosure. The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120.
[0055] The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to transmit one or more pre-defined policies pertaining to the one or more network functions 405 to the one or more processors 205 via the ML unit 235. The one or more pre-defined policies are related to at least one of segregation, filtering, enrichment, ingestion, and aggregation of the data.
[0056] As mentioned earlier in FIG. 2, the one or more processors 205 of the system 120 are configured for managing the network function 405 data at the centralized core 250.
[0057] As per the illustrated embodiment, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 220. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0058] Further, the processor 205 includes the receiving unit 225, the enabling unit 230, the Machine Learning unit 235, the processing unit 240, and the transmitting unit 245, which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3 should be read with the description provided for the system 120 in FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0059] FIG. 4 is an exemplary architecture 400 which can be implemented in the system 120 of FIG. 2 for managing the network function 405 data at the centralized core 250, according to one or more embodiments of the present invention. The exemplary embodiment as illustrated in FIG. 4 includes one or more network functions 405, an ML probe 410, the user interface 215, and a conductor 420.
[0060] The network function 405 of the architecture 400 is configured to transmit the SDR data to the ML probe 410. In an embodiment, the ML probe 410 is similar to the ML unit 235 as explained in FIGS. 2 and 3. The SDR data refers to centralized data storage systems specifically engineered to meet the exacting specifications of the network 105. The SDRs function as fundamental hubs for efficiently managing and exchanging data critical to various aspects of the operations of the network 105, such as, but not limited to, network slicing, real-time data analytics, and service orchestration. In an embodiment, a user defines filtering/enriching policies on the ML probe 410 via the user interface 215. The ML probe 410 is configured to enhance the performance, security, and operational efficiency of the network 105 via intelligent data processing. The policies may include, but are not limited to, segregation and enrichment.
[0061] The ML probe 410 performs operations on the SDR data received from the network function 405 based on the policies defined by the user. The operations include, but are not limited to, segregation and enrichment of the SDR data at an edge end, and transmitting the segregated and enriched data to the conductor 420. In an embodiment, the conductor 420 corresponds to the centralized core 250.
[0062] The conductor 420 facilitates a centralized policy provisioning mechanism. The centralized policy aids in monitoring and controlling network edge nodes from a single interface, irrespective of the location of an edge node.
[0063] In an embodiment, the ML probe 410 allows a user to remotely configure, monitor, and troubleshoot network edge nodes. The architecture 400 facilitates smooth and secure network communication and operation. Each activity in the architecture 400 is performed over the network protocol and uses data from a configurable server location.
[0064] FIG. 5 is an exemplary signal flow diagram for managing the network function 405 data at the centralized core 250, according to one or more embodiments of the present invention. For the purpose of description, the signal flow diagram is described with the embodiments as illustrated in FIG. 2 and should not be construed as limiting the scope of the present disclosure.
[0065] At step 505, the one or more network functions 405 transmit SDR data to the ML probe 410. In an embodiment, the SDRs refer to structured data records containing information about various network elements, transactions, events, or activities within the network 105.
[0066] At step 510, the ML probe 410 receives one or more policies via the user interface 215. The one or more policies are defined by the user via the user interface 215. The one or more pre-defined policies are related to at least one of segregation, filtering, enrichment, ingestion, and aggregation of the data pertaining to the one or more network functions 405.
[0067] At step 515, upon receiving the SDR data from the one or more NFs 405 and the policies defined by the user via the user interface 215, the ML probe 410 is configured to perform operations on the SDR data received from the one or more network functions 405 based on the policies defined by the user via the user interface 215. The operations may include, but are not limited to, segregation and enrichment of the SDR data at an edge end. Thereafter, the ML probe 410 transmits the segregated and enriched data to the conductor 420.
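The three steps of the signal flow above can be illustrated with the following sketch. All names (`MLProbe`, `on_sdr`, `on_policies`, `flush`) are hypothetical and are shown only to make the ordering of steps 505, 510, and 515 concrete; the sketch models only segregation policies for brevity.

```python
# Hypothetical sketch of the FIG. 5 signal flow:
#   step 505 - NFs push SDR data to the probe
#   step 510 - the probe receives user-defined policies
#   step 515 - the probe processes and forwards the data to the conductor

class MLProbe:
    def __init__(self, conductor):
        self.conductor = conductor   # destination for processed data (step 515)
        self.sdr_buffer = []         # SDR data received from NFs (step 505)
        self.policies = []           # policies from the user interface (step 510)

    def on_sdr(self, records):       # step 505
        self.sdr_buffer.extend(records)

    def on_policies(self, policies): # step 510
        self.policies = policies

    def flush(self):                 # step 515
        processed = [r for r in self.sdr_buffer
                     if all(r.get(k) == v
                            for p in self.policies if p["type"] == "segregation"
                            for k, v in p["match"].items())]
        self.conductor.extend(processed)
        self.sdr_buffer.clear()
```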
[0068] Referring to FIG. 6, FIG. 6 illustrates a flow diagram of the method 600 for managing data pertaining to the network function 405 at the centralized core 250, according to various embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should not be construed as limiting the scope of the present disclosure.
[0069] At step 605, the method 600 includes the step of receiving data from one or more network functions 405 at the ML unit 235. The data of the one or more network functions 405 is stored in the database 220 as metadata. The data includes at least one of Swap Data Repositories (SDR) data, a network function rule, a network function configuration, a network instance rule, a network instance configuration, and clear codes.
[0070] At step 610, the method 600 includes the step of processing, utilizing the ML unit 235, the data received from the one or more network functions 405 based on one or more pre-defined policies received at the ML unit 235, wherein the one or more pre-defined policies are associated with the data.
[0071] At step 615, the method 600 includes the step of transmitting the processed data to the centralized core 250. In one embodiment, the centralized core 250 refers to a fundamental architectural component designed to manage and orchestrate the one or more network functions 405 and services from a centralized location in the network 105.
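The three method steps (605, 610, 615) can be composed end-to-end as in the following sketch. This is an illustrative assumption-laden outline, not the claimed method itself; the helper name `manage_nf_data` and the policy representation (a `predicate` callable per policy) are hypothetical.

```python
# Hypothetical end-to-end sketch of method 600:
#   step 605 - receive data from the network functions at the ML unit
#   step 610 - process the data per the pre-defined policies
#   step 615 - transmit the processed data to the centralized core

def manage_nf_data(network_functions, policies, centralized_core):
    # Step 605: gather records received from the one or more network functions.
    data = [record for nf in network_functions for record in nf["records"]]
    # Step 610: process the data based on the policies associated with it.
    for policy in policies:
        data = [r for r in data if policy["predicate"](r)]
    # Step 615: transmit (here, append) the processed data to the centralized core.
    centralized_core.extend(data)
    return data
```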
[0072] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive data from the one or more network functions 405 at the ML unit 235. The processor 205 is configured to process, utilizing the ML unit 235, the data received from the one or more network functions 405 based on one or more pre-defined policies received at the ML unit 235, wherein the one or more pre-defined policies are associated with the data. Further, the processor 205 is configured to transmit the processed data to the centralized core 250.
[0073] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0074] The present disclosure incorporates a technical advancement that facilitates central rule provisioning across global geographies. By managing network function data at a centralized core, the invention enables computations to be completed within a minimal time interval at a centralized location. The invention reduces the effort of monitoring and tracking each computation entity, such as, but not limited to, a circle, a host, an instance, and a cluster. The invention further optimizes monitoring efforts, significantly reduces bandwidth usage and costs, and improves the overall management efficiency of network edge nodes.
[0075] The present invention provides various advantages, including minimized computation, improved network performance, simplified management, efficiency gains, and effective edge node management. By eliminating the limitations of increased bandwidth costs, storage requirements, and the extensive resources needed to segregate and manage data from multiple super cores, the solution processes, utilizing machine learning techniques, the Swap Data Repositories (SDR) data received from the one or more network functions based on one or more pre-defined policies defined by the user, wherein the one or more pre-defined policies are associated with the data. The solution provides centralized computation efficiency, elimination of geographical and instance-specific computations, reduced monitoring effort, optimized bandwidth usage, cost reduction, and enhanced network edge management. In one embodiment, the centralized core in the one or more networks facilitates centralized management, dynamic resource allocation, policy-driven actions, real-time optimization, scalability and flexibility, and enhanced security and compliance.
[0076] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0077] Communication system- 100
[0078] Network – 105
[0079] User Equipment - 110
[0080] Server - 115
[0081] System - 120
[0082] Base station - 125
[0083] Processor - 205
[0084] Memory - 210
[0085] User interface – 215
[0086] Database - 220
[0087] Receiving unit - 225
[0088] Enabling unit – 230
[0089] Machine learning unit – 235
[0090] Processing unit - 240
[0091] Transmitting unit – 245
[0092] Primary memory – 310
[0093] Primary processor – 305
[0094] Network function – 405
[0095] Machine learning (ML) probe – 410
[0096] Conductor – 420
[0097] Centralized core – 250
CLAIMS:
We Claim:
1. A method (600) for managing network function data at a centralized core (250), the method comprises the steps of:
receiving (605), by one or more processors (205), data from one or more network functions at a Machine Learning (ML) unit;
processing (610), by the one or more processors (205), utilizing the ML unit, the data received from the one or more network functions based on one or more pre-defined policies received at the ML unit, wherein the one or more pre-defined policies are associated with the data; and
transmitting (615), by the one or more processors (205), the processed data to the centralized core (250).
2. The method (600) as claimed in claim 1, wherein the data of the one or more network functions is stored in a database as metadata.
3. The method (600) as claimed in claim 1, wherein the data includes at least one of, Swap Data Repositories (SDR) data, a network function rule, a network function configuration, a network instance rule, a network instance configuration and clear codes.
4. The method (600) as claimed in claim 2, wherein by storing the data in the database, the one or more processors are configured to enable a centralized policy provisioning mechanism that facilitates monitoring and controlling the at least one network function from a single user interface, independent of the location of the at least one network function.
5. The method (600) as claimed in claim 1, wherein the one or more pre-defined policies are related to at least one of segregation, filtering, enrichment, ingestion, and aggregation of the network function data.
6. The method (600) as claimed in claim 1, wherein the one or more pre-defined policies are remotely applied to the at least one network function utilizing the ML unit (235).
7. The method (600) as claimed in claim 1, wherein processing the data includes at least one of aggregation and segregation of the data received from the one or more network functions.
8. A system (120) for managing network function data at a centralized core (250), the system (120) comprising:
a receiving unit (225), configured to, receive, data from one or more network functions at a Machine Learning (ML) unit (235);
a processing unit (240), configured to, process utilizing the ML unit (235), the data received from the one or more network functions based on one or more pre-defined policies received at the ML unit (235), wherein the one or more pre-defined policies are associated with the data; and
a transmitting unit (245), configured to, transmit, the processed data to the centralized core (250).
9. The system (120) as claimed in claim 8, wherein the data of the one or more network functions is stored in a database as metadata.
10. The system (120) as claimed in claim 8, wherein the data includes at least one of, Swap Data Repositories (SDR) data, a network function rule, a network function configuration, a network instance rule, a network instance configuration and clear codes.
11. The system (120) as claimed in claim 9, wherein by storing the data in the database, an enabling unit is configured to enable a centralized policy provisioning mechanism that facilitates monitoring and controlling the at least one network function from a single user interface, independent of the location of the at least one network function.
12. The system (120) as claimed in claim 8, wherein the one or more pre-defined policies are related to at least one of segregation, filtering, enrichment, ingestion, and aggregation of the data.
13. The system (120) as claimed in claim 8, wherein the one or more pre-defined policies are remotely applied to the at least one network function utilizing the ML unit (235).
14. The system (120) as claimed in claim 8, wherein processing the data includes at least one of aggregation and segregation of the data received from the one or more network functions.
15. A User Equipment (UE), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which, when executed by the one or more primary processors (305), cause the UE (110) to:
transmit one or more pre-defined policies pertaining to one or more network functions to the one or more processors via a Machine Learning (ML) unit (235),
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.
| # | Name | Date |
|---|---|---|
| 1 | 202321047878-STATEMENT OF UNDERTAKING (FORM 3) [16-07-2023(online)].pdf | 2023-07-16 |
| 2 | 202321047878-PROVISIONAL SPECIFICATION [16-07-2023(online)].pdf | 2023-07-16 |
| 3 | 202321047878-FORM 1 [16-07-2023(online)].pdf | 2023-07-16 |
| 4 | 202321047878-FIGURE OF ABSTRACT [16-07-2023(online)].pdf | 2023-07-16 |
| 5 | 202321047878-DRAWINGS [16-07-2023(online)].pdf | 2023-07-16 |
| 6 | 202321047878-DECLARATION OF INVENTORSHIP (FORM 5) [16-07-2023(online)].pdf | 2023-07-16 |
| 7 | 202321047878-FORM-26 [03-10-2023(online)].pdf | 2023-10-03 |
| 8 | 202321047878-Proof of Right [08-01-2024(online)].pdf | 2024-01-08 |
| 9 | 202321047878-DRAWING [16-07-2024(online)].pdf | 2024-07-16 |
| 10 | 202321047878-COMPLETE SPECIFICATION [16-07-2024(online)].pdf | 2024-07-16 |
| 11 | Abstract-1.jpg | 2024-09-03 |
| 12 | 202321047878-Power of Attorney [25-10-2024(online)].pdf | 2024-10-25 |
| 13 | 202321047878-Form 1 (Submitted on date of filing) [25-10-2024(online)].pdf | 2024-10-25 |
| 14 | 202321047878-Covering Letter [25-10-2024(online)].pdf | 2024-10-25 |
| 15 | 202321047878-CERTIFIED COPIES TRANSMISSION TO IB [25-10-2024(online)].pdf | 2024-10-25 |
| 16 | 202321047878-FORM 3 [03-12-2024(online)].pdf | 2024-12-03 |
| 17 | 202321047878-FORM 18 [20-03-2025(online)].pdf | 2025-03-20 |