
System And Method For Managing Network Slicing

Abstract: The present invention relates to a system (108) and a method (600) for managing network slicing in a network environment. The disclosed system (108) and method (600) are designed to optimize network performance and resource allocation for service providers. More specifically, the present invention offers a novel approach for predicting and generating optimized network slicing plans by utilizing a trained model (218) that analyzes various datasets, including historical network data, usage patterns, and performance metrics. By performing trend analysis based on predefined or dynamically set parameters, the system (108) ensures that network resources are allocated efficiently, improving network performance and user experience. The invention enables service providers to implement data-driven slicing configurations that adapt to fluctuating network demands. [Refer Fig. 1]


Patent Information

Application #
Filing Date
11 October 2023
Publication Number
16/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING NETWORK SLICING
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates generally to network slicing, and in particular provides a system and method for managing network slicing in a communication network.
BACKGROUND OF THE INVENTION
[0002] In the cellular world, network slicing allows businesses to control traffic resources on a more granular level. Network slicing enables the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. This technology is particularly crucial in the era of 5G and beyond, as it allows for the creation of tailored network services that can meet the diverse requirements of different applications and use cases.

[0003] In general, network slicing plans curated by service providers often fall short of satisfying the customer's requirements, where the customer may be an application developer. This adversely affects the customer experience. Because of such generic slicing plans, service providers may be unable to offer highly customized network solutions to customers. These network slicing plans may also be planned manually by the service providers, which incurs high operational costs and potentially limits the flexibility and responsiveness of the network to changing demands.

[0004] In view of the above, there is a dire need for a system and method for managing network slicing that ensures a better experience and satisfies the demands of customers. Such a system would need to be more dynamic, automated, and capable of adapting to the specific needs of various applications and services in real time. To address these challenges, advanced network slicing management systems are being developed that leverage artificial intelligence and machine learning technologies. These systems aim to automate the process of creating, modifying, and optimizing network slices based on real-time data and predictive analytics. By analyzing patterns in network usage, application requirements, and user behavior, these AI-driven systems can proactively adjust network resources to ensure optimal performance and efficiency.

[0005] Furthermore, the integration of edge computing with network slicing is opening up new possibilities for ultra-low latency applications and services. By bringing computing resources closer to the end-users and combining this with tailored network slices, service providers can offer unprecedented levels of performance and reliability for critical applications such as autonomous vehicles, industrial IoT, and augmented reality experiences.

[0006] Lastly, the evolution of network slicing management is also focusing on improving the orchestration and lifecycle management of network slices. This includes developing more sophisticated APIs and interfaces that allow for greater interoperability between different network components and service layers. By enhancing the flexibility and programmability of network slices, service providers can create more agile and responsive network environments that can quickly adapt to changing market demands and technological innovations.

[0007] The present invention aims to address critical challenges in network resource management by providing a sophisticated, data-driven approach to optimizing network slicing. By doing so, it seeks to transform how service providers manage network performance and resource allocation, ensuring a more efficient, accurate, and adaptable approach to handling network demands in real-time.
SUMMARY OF THE INVENTION
[0008] One or more embodiments of the present disclosure provide a system and a method for managing network slicing in a network.
[0009] In one aspect of the present invention, the method of managing network slicing in the network is disclosed. The method includes the step of retrieving, by one or more processors, multiple types of data from one or more sources. The method further includes the step of preprocessing, by the one or more processors, the multiple types of retrieved data. The method further comprises the step of feeding, by the one or more processors, the preprocessed data to a model for training. The method further includes the step of analyzing, by the one or more processors, utilizing the trained model, the pre-processed data to predict new network slicing plans. The method further includes the step of generating, by the one or more processors, a visual representation of the predicted new network slicing plans based on the analysis.
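The five steps recited in this aspect can be illustrated with a minimal Python sketch. All function names, data fields, and the threshold-based "model" are hypothetical simplifications for illustration, not part of the claimed method:

```python
# Illustrative sketch of the five-step pipeline; all names, fields,
# and the threshold "model" are hypothetical, not from the specification.
from statistics import mean

def retrieve(sources):
    # Step 1: retrieve multiple types of data from one or more sources.
    return [record for source in sources for record in source]

def preprocess(records):
    # Step 2: clean (drop incomplete records) and normalize usage values.
    clean = [r for r in records if r.get("usage") is not None]
    peak = max(r["usage"] for r in clean)
    return [{**r, "usage": r["usage"] / peak} for r in clean]

def train(records):
    # Step 3: "training" is reduced here to learning a mean-usage threshold.
    return mean(r["usage"] for r in records)

def predict_plans(records, model):
    # Step 4: customers above the learned threshold get a high-capacity slice.
    return [{"customer": r["customer"],
             "slice": "eMBB" if r["usage"] > model else "default"}
            for r in records]

def visualize(plans):
    # Step 5: render the predicted plans as a simple textual report.
    return "\n".join(f'{p["customer"]}: {p["slice"]}' for p in plans)

sources = [[{"customer": "A", "usage": 90}, {"customer": "B", "usage": 10}]]
records = preprocess(retrieve(sources))
plans = predict_plans(records, train(records))
print(visualize(plans))
```

In this sketch each claimed step maps to one function; a real implementation would substitute an actual trained model (218) for the mean-usage threshold.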
[0010] In one embodiment, the multiple types of data include at least one of, customer onboarding data, customer deactivation data, customer historical data, historical network slice data, and service-based data/logs.
[0011] In another embodiment, the one or more processors retrieves the multiple types of data from the one or more sources in real time or non-real time, wherein the non-real time represents retrieving the multiple types of data which is stored in the one or more sources.
[0012] In another embodiment, the multiple types of data are retrieved from the one or more sources based on receiving a request from at least one of, a user or an entity, wherein the entity includes at least one of, a network component, application or a microservice.
[0013] In yet another embodiment, the preprocessing of the retrieved data includes at least one of, normalizing the retrieved data and cleaning the retrieved data.
[0014] In yet another embodiment, the pre-processed data is stored in a storage unit.
[0015] In yet another embodiment, the step of analyzing, by the one or more processors, utilizing the trained model, the retrieved data to predict new network slicing plans, includes the steps of performing, by the one or more processors, utilizing the trained model, a trend/pattern analysis related to one or more parameters of the plurality of customers and predicting, by the one or more processors, utilizing the trained model, the new network slicing plans based on the trend/pattern analysis.
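The trend/pattern analysis described in this embodiment can be sketched with a simple moving-average heuristic. The window size, threshold factor, and plan labels below are illustrative assumptions, not values taken from the specification:

```python
# Hypothetical trend analysis over a customer's usage history; the
# window, threshold factor, and plan labels are illustrative only.
from statistics import mean

def detect_trend(usage_history, window=3, threshold=1.2):
    """Return 'rising' when the recent window exceeds the overall
    average by the given factor, else 'stable'."""
    if len(usage_history) < window:
        return "stable"
    recent = mean(usage_history[-window:])
    overall = mean(usage_history)
    return "rising" if recent > threshold * overall else "stable"

def predict_slicing_plan(customer, usage_history):
    # A rising trend suggests a higher-capacity slice configuration.
    trend = detect_trend(usage_history)
    return {"customer": customer,
            "trend": trend,
            "plan": "upgrade-slice" if trend == "rising" else "keep-slice"}

print(predict_slicing_plan("app-dev-1", [10, 12, 11, 30, 35, 40]))
```

A production system would replace this heuristic with the trained model's output, but the shape of the decision (trend in, plan out) is the same.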
[0016] In yet another embodiment, the one or more parameters pertaining to the plurality of customers includes at least one of, historical data pertaining to the historical network slice data and the current network slice data.
[0017] In yet another embodiment, the step of, predicting, utilizing the trained model, the new network slicing plans based on the trend/pattern analysis, further includes the steps of enabling, by the one or more processors, a user to interact with a Graphical User Interface (GUI) running on a User Equipment (UE) and allowing, by the one or more processors, the user to customize the predicted new network slicing plans based on the interaction of the user with the GUI using one or more tools.
[0018] In yet another embodiment, the predicted new network slicing plans are stored in the storage unit.
[0019] In yet another embodiment, the step of generating, by the one or more processors, a visual representation of the predicted new network slicing plans based on the analysis, further includes the step of displaying, by the one or more processors, the generated visual representation of the predicted new network slicing plans to the user on the GUI of the UE.
[0020] In yet another embodiment, the generated new network slicing plans are displayed to the user in at least one of, a report, a graphical representation, and a pictorial representation.
[0021] In yet another embodiment, a result of the predicted new network slicing plans is transmitted to at least one of, the user or the entity, wherein the entity includes at least one of, the network component, the application or the microservice.
[0022] In another aspect of the present invention, a system for managing network slicing in a network is disclosed. The system includes a retrieving unit, configured to, retrieve, multiple types of data from one or more sources. The system further includes a preprocessing unit, configured to, preprocess, the multiple types of retrieved data. The system further includes a feeding unit, configured to, feed, the preprocessed data to a model for training. The system further includes an analysis unit, configured to, analyse, utilizing the trained model, the pre-processed data to predict new network slicing plans. The system further includes a generating unit, configured to, generate, a visual representation of the predicted new network slicing plans based on the analysis.
[0023] In another embodiment, the retrieving unit retrieves the multiple types of data from one or more sources in real time or non-real time, wherein the non-real time represents retrieving the multiple types of data which is stored in the one or more sources.
[0024] In another embodiment, the analysis unit is configured to analyze utilizing the trained model, the retrieved data to predict new network slicing plans, by performing, utilizing the trained model, a trend/pattern analysis related to one or more parameters of the plurality of customers and predicting, utilizing the trained model, the new network slicing plans based on the trend/pattern analysis.
[0025] In another embodiment, the analysis unit is further configured to enable a user to interact with a Graphical User Interface (GUI) running on a User Equipment (UE), and to allow the user to customize the predicted new network slicing plans based on the interaction of the user with the GUI using one or more tools.
[0026] In another embodiment, the generation unit is configured to generate a visual representation of the predicted new network slicing plans based on the analysis, and further configured to display, the generated visual representation of the predicted new network slicing plans to the user on the GUI of the UE.
[0027] In another embodiment, the generated new network slicing plans are displayed to the user in at least one of, a report, a graphical representation, and a pictorial representation.
[0028] In another aspect of the present invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to the one or more processors. The one or more primary processors are coupled with a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the UE to transmit a request by the user to the one or more processors. The one or more processors are configured to perform the steps of predicting the new network slicing plans.
[0029] In yet another aspect of the present invention, a non-transitory computer-readable medium is provided having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to retrieve multiple types of data from one or more sources. The processor is configured to preprocess the multiple types of retrieved data. The processor is configured to feed the preprocessed data to a model for training. The processor is configured to analyze, utilizing the trained model, the pre-processed data to predict new network slicing plans. Based on the analysis, the processor generates a visual representation of the predicted new network slicing plans and automatically executes the step of displaying the visual representation to the user.
[0030] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0032] FIG. 1 is an exemplary block diagram of an environment for managing network slicing in a network, according to one or more embodiments of the present invention;
[0033] FIG. 2 is an exemplary block diagram of the system for managing network slicing in the network, according to one or more embodiments of the present invention;
[0034] FIG. 3 is an exemplary block diagram of the system of FIG. 2, according to one or more embodiments of the present invention;
[0035] FIG. 4 is an exemplary architecture for the system managing network slicing in the network, according to one or more embodiments of the present disclosure;
[0036] FIG. 5 is a signal flow diagram illustrating the flow of the system for managing network slicing in the network, according to one or more embodiments of the present invention; and
[0037] FIG. 6 is a flow diagram of the method for managing network slicing in the network, according to one or more embodiments of the present invention.
[0038] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0039] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0040] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0041] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0042] Various embodiments of the present invention provide a system and method for managing network slicing in a telecommunications network. The disclosed system and method enhance the customer experience by predicting and identifying trends and requirements for network slicing through an Artificial Intelligence/Machine Learning (AI/ML) model. Historical data and trend analysis are utilized by the trained model to predict optimal network slice configurations. The system ensures that application developers and other customers can experience improved network performance by dynamically adapting to their specific needs and implementing new network slices accordingly, thus facilitating efficient and intelligent network management.
[0043] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 including a system 108 for managing network slicing in a network 106, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a communication network 106, the system 108, a data source 110, and a storage unit 114. The UE 102 enables a user to interact with the system 108 by transmitting a request in order to manage network slicing in the network 106.
[0044] The prediction of new network slicing plans is determined when the current network slices no longer meet the performance needs of customers or when trends indicate that a new slice configuration would enhance overall network efficiency. The determination of network slicing plans is influenced by various factors, such as historical network slice performance, customer demand patterns, and evolving network usage. Data from sources like customer profiles, historical and current network slice data, and performance metrics are analyzed to predict new slicing plans. Customers experiencing inconsistent performance, increased data usage, or requiring more tailored network services may trigger the need for new slicing configurations. Failing to identify optimal slicing opportunities can lead to reduced network performance, customer dissatisfaction, and missed opportunities for service enhancement. Accurately predicting and implementing new network slicing plans is critical to maintaining high-quality service, improving customer satisfaction, and ensuring efficient resource allocation in the network.
[0045] For the purpose of description and explanation, the description will be explained with respect to one or more user equipments (UEs) 102, or to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the communication network 106. Each of the at least one UE 102 pertains to the user requesting to generate optimized network slicing plans within the network, based on data-driven analysis and predictions from a trained model 218 (as shown in FIG. 2).
[0046] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as Virtual Reality (VR) devices, Augmented Reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0047] The communication network 106 involved in the management of network slicing may include, but is not limited to, one or more of a Long-Term Evolution (LTE) network, a Fifth Generation (5G) network, or a combination of legacy networks such as the Global System for Mobile Communications (GSM) and Universal Mobile Telecommunications System (UMTS). The network slicing approach outlined in the present invention can be applied to various advanced networks, including next-generation networks like Sixth Generation (6G) or New Radio (NR) networks. These networks operate across a range of infrastructure types, such as terrestrial cellular networks, satellite networks, fiber-optic networks, and fixed wireless networks. Additionally, the communication network supports a variety of communication protocols and standards, including but not limited to, enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and massive Machine-Type Communications (mMTC). The present invention ensures that network slices are optimized based on these diverse network types and communication standards, enhancing both network performance and customer experience.
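The service categories named above (eMBB, URLLC, mMTC) can be pictured as slice profiles. The numeric figures below are placeholders for illustration, not values drawn from the specification or from 3GPP requirements:

```python
# Indicative mapping of 5G service categories to slice parameters;
# all numeric values are placeholders, not from the specification.
from dataclasses import dataclass

@dataclass
class SliceProfile:
    name: str
    min_bandwidth_mbps: int   # guaranteed downlink bandwidth
    max_latency_ms: float     # latency budget for the slice
    device_density: int       # devices supported per square km

PROFILES = {
    "eMBB":  SliceProfile("enhanced Mobile Broadband", 100, 20.0, 10_000),
    "URLLC": SliceProfile("Ultra-Reliable Low-Latency Communications", 10, 1.0, 1_000),
    "mMTC":  SliceProfile("massive Machine-Type Communications", 1, 100.0, 1_000_000),
}

def select_profile(required_latency_ms, required_bandwidth_mbps):
    # Pick the least-demanding profile whose guarantees meet both requirements.
    for key in ("mMTC", "eMBB", "URLLC"):
        p = PROFILES[key]
        if (p.max_latency_ms <= required_latency_ms
                and p.min_bandwidth_mbps >= required_bandwidth_mbps):
            return key
    return "URLLC"  # fall back to the strictest profile

print(select_profile(required_latency_ms=25.0, required_bandwidth_mbps=50))
```

The point of the sketch is the selection logic: a requested service maps to the category whose latency and bandwidth guarantees cover its requirements.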
[0048] The environment 100 includes the server 104 that is accessible via the network 106 where the network slicing is managed. The server 104 may include, by way of example but is not limited to, one or more of a standalone server, a server cluster, a server blade, a server rack, a data center, hardware supporting part of a cloud-based system, hardware running virtualized servers, or any combination thereof. In an embodiment, the server 104 could be associated with various entities, such as a network operator, service provider, cloud platform provider, a telecommunications company, a research organization, or any enterprise utilizing network slicing for optimizing their network resources and improving service delivery. The server 104 is responsible for executing the processes related to data retrieval, model training, analysis, and visual representation of network slicing plans as described herein.
[0049] The environment 100 includes the storage unit 114 communicably coupled to the server 104 via the network 106 used for managing network slicing. The storage unit 114 is an electronic device integrated within the network 106, capable of storing, receiving, and transmitting data related to network slicing. The storage unit 114 may consist of various types of data storage devices, such as a network-attached storage (NAS) system, cloud-based storage, or local data servers. It can also include equipment like routers, data switches, or modems for facilitating data transmission. The storage unit 114 plays a crucial role in maintaining the pre-processed data, historical network slice data, and predicted network slicing plans. This setup ensures efficient real-time data storage, processing, and synchronization, supporting the seamless operation of network slicing and enabling consistent updates across connected systems within the network 106.
[0050] The environment 100 features the data sources 110, which serve as a critical component for the network slicing management process. Network slicing is a method used in network architecture that allows multiple virtual networks to be created on a single physical network infrastructure. Each network slice is a separate end-to-end logical network tailored to meet specific requirements and use cases, providing dedicated resources and functionalities. For example, a telecommunications provider can utilize network slicing to meet the diverse needs of various applications, such as providing an Enhanced Mobile Broadband (eMBB) slice for high-definition video streaming at public events. Owing to the eMBB slice, users attending a concert can stream live video without interruptions. In one embodiment, network slicing management refers to the processes and tools used to oversee, orchestrate, and optimize network slices within the network 106. This management is essential for ensuring that each slice operates efficiently and meets the specific requirements of various applications and users. For example, the telecommunications provider defines each slice's specifications and allocates the necessary network resources, such as at least one of, but not limited to, bandwidth and power. For instance, for the eMBB slice, the telecommunications provider configures the slice to handle high data rates.
[0051] In one embodiment, the data sources 110 are electronic repositories designed to store a wide range of relevant data required for optimizing network slicing. The data sources 110 include, but are not limited to: customer relationship management (CRM) systems containing customer onboarding and deactivation data; historical network slice performance databases storing metrics on network utilization and efficiency; network monitoring systems providing real-time data on current network slice configurations; usage analytics platforms capturing customer demand and service patterns; geographical information systems (GIS) offering insights into network coverage and customer locations; market analysis databases tracking customer preferences and service requirements; technical infrastructure databases detailing hardware and software capabilities for network slicing; regulatory compliance databases ensuring adherence to network service standards; internal business intelligence systems aggregating key performance indicators (KPIs) for network optimization and service-based data/logs.
[0052] In one embodiment, the service-based data/logs refer to the information generated by applications and services that monitor, record, and report on their operations, performance, and interactions. For example, the service-based data/logs provide historical performance metrics and usage patterns, allowing network operators to analyze trends over time. Analyzing service-based logs allows for better understanding of current resource utilization which enables dynamic allocation of resources to different network slices based on real-time demand, ensuring that each slice receives the appropriate bandwidth and processing power. In one embodiment, the service-based data/logs are invaluable for predicting future needs and optimizing network slicing plans. The service-based data/logs provide the necessary insights and metrics that enable organizations to respond dynamically to changing conditions, ensuring efficient resource allocation and enhanced service quality. These data sources 110 collectively provide the comprehensive dataset necessary for the server 104 to process and analyze, enabling accurate predictions and the generation of optimized network slicing plans.
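One way to picture how service-based logs could drive dynamic resource allocation across slices is the following sketch, in which the log fields and the proportional-share policy are hypothetical assumptions rather than the claimed mechanism:

```python
# Hypothetical aggregation of service-based logs into per-slice
# bandwidth shares; field names and the policy are illustrative.
from collections import defaultdict

def allocate_bandwidth(logs, total_mbps=1000):
    """Split total bandwidth across slices in proportion to the
    traffic volume observed in the service logs."""
    usage = defaultdict(int)
    for entry in logs:
        usage[entry["slice"]] += entry["bytes"]
    total = sum(usage.values()) or 1  # avoid division by zero on empty logs
    return {s: round(total_mbps * b / total) for s, b in usage.items()}

logs = [
    {"slice": "eMBB", "bytes": 600},
    {"slice": "URLLC", "bytes": 100},
    {"slice": "mMTC", "bytes": 300},
]
print(allocate_bandwidth(logs))  # → {'eMBB': 600, 'URLLC': 100, 'mMTC': 300}
```

Re-running the aggregation on a rolling window of logs is what turns the static split into the dynamic, demand-driven allocation the paragraph describes.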
[0053] The environment 100 further includes the system 108, communicably coupled to the server 104, the storage unit 114, and other network components via the network 106. The system 108 is designed to either be integrated within the server 104 or function as a standalone entity. It is responsible for managing the various processes related to network slicing, including data retrieval, preprocessing, model training, analysis, and the generation of visual representations for predicted network slicing plans.
[0054] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0055] FIG. 2 is an exemplary block diagram of the system 108 for managing network slicing in the network 106, according to one or more embodiments of the present invention.
[0056] As per the illustrated and preferred embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, a database 220, and the trained model 218. The one or more processors 202 include a retrieving unit 208, a preprocessing unit 210, a feeding unit 212, an analysis unit 214, a generating unit 216, and a handling unit 222.
[0057] In a further embodiment, the data stored in the database 220 encompasses various categories essential for the network slicing management process. The categories include customer profile data, such as onboarding and usage history, as well as current and historical network slice performance data. The stored data also includes trend analysis and predicted network slicing plans, allowing for performance evaluation and optimization. Additionally, the database 220 maintains machine learning model parameters and criteria defined by the service provider for network slicing predictions. This diverse collection of data elements enables the system 108 to make informed, real-time decisions regarding network slicing, ensuring optimal performance and efficient resource allocation across the network.
[0058] The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0059] In the illustrated embodiment, the processor 202 is configured to retrieve and execute computer-readable instructions stored in the memory 204, with the memory 204 being communicably connected to the processor 202. The memory 204 is designed to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which can be retrieved and executed for managing network slicing and predicting new slicing plans. The memory 204 may include any non-transitory storage device, such as volatile memory like RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, and other types of unalterable memory.
[0060] In the illustrated embodiment, the database 220 is configured to store data related to customer profiles, historical network slice data, current network slice configurations, and criteria for predicting new network slicing plans. The database 220 is distinct from the storage unit 114 in both its primary function and the nature of the data it holds. While the storage unit 114 may contain temporary data, system logs, and pre-processed information, the database 220 is designed for structured, queryable data essential for network slicing management. The database 220 can be implemented as one of, but not limited to, a relational database, a cloud-based database, a commercial database, an open-source database, a distributed database, a time series database, a wide column database, a NoSQL database, an object-oriented database, an in-memory database, or any combination thereof. It is optimized for real-time data retrieval and updates, supporting the dynamic nature of network slicing predictions and analysis. The database 220 utilizes advanced features such as data partitioning, indexing, and caching to ensure rapid access to frequently queried information. The aforementioned examples of database 220 types are non-limiting and may be used in combination, such as a distributed and in-memory database or a relational and cloud-based database, depending on the specific requirements of the system 108 and the scale of network slicing operations being managed.
[0061] In an embodiment, the storage unit 114, as described in the present invention, performs several critical functions that complement the database 220. The said functions include high-speed temporary data storage, where the storage unit 114 holds intermediate results such as partial computations and temporary datasets during the network slicing management process. The storage unit 114 also maintains detailed system logs and audit trails, recording data access patterns, network slicing decisions, and performance metrics essential for auditing and system optimization. Additionally, the storage unit 114 functions as a caching layer to improve system performance by storing frequently accessed data from the database 220. It also stores machine learning model parameters and configuration data, which are essential for real-time processing, trend analysis, and the enforcement of network slicing rules.
[0062] Furthermore, the storage unit 114 plays a crucial role in backup and recovery operations, providing temporary backup for critical data to ensure swift recovery from system failures or data corruption within the primary database 220. The storage unit 114 also handles data preprocessing tasks such as normalization, data cleaning, and feature extraction, which are essential before the data is fed into the model for predicting new network slicing plans. For systems handling high volumes of network slicing predictions and real-time processing requests, the storage unit 114 incorporates queue management structures to facilitate orderly and efficient processing of tasks. Additionally, the storage unit 114 supports caching and buffering mechanisms to improve system response times and manage computational loads during peak operations. It is configured to handle the dynamic and resource-intensive requirements of network slicing management, utilizing technologies like solid-state drives (SSDs), RAM-based storage, or distributed in-memory data grids tailored to specific performance needs. This ensures optimized performance, scalability, and resilience in network slicing operations.
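Purely by way of illustration, the caching layer that the storage unit 114 provides in front of the database 220 may be sketched as a bounded least-recently-used (LRU) cache; the capacity and slice keys below are illustrative assumptions, not part of the disclosed interface:

```python
from collections import OrderedDict

class SliceCache:
    """Bounded LRU cache standing in for the storage unit's caching
    layer over the database 220; capacity and keys are illustrative.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None               # cache miss: caller falls back to the database
        self._items.move_to_end(key)  # mark as recently used
        return self._items[key]

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used

cache = SliceCache(capacity=2)
cache.put("slice-a", {"bandwidth": 100})
cache.put("slice-b", {"bandwidth": 50})
cache.get("slice-a")                 # touch slice-a so slice-b is evicted next
cache.put("slice-c", {"bandwidth": 75})
print(cache.get("slice-b"))  # None
```

An LRU policy is one simple way to keep frequently queried slicing data hot while bounding memory use; a production deployment might instead use RAM-based storage or a distributed in-memory data grid, as noted above.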
[0063] In an embodiment, the system 108 initiates the network slicing prediction process by retrieving relevant data, such as customer onboarding data, historical network slice data, and current network slice performance metrics, from the database 220 upon receiving a request transmitted by the user via the UE 102. In an alternate embodiment, the system 108 autonomously predicts new network slicing plans without requiring a user-initiated request. This is accomplished by utilizing real-time data analytics and predefined criteria for network performance, customer usage patterns, and historical trends stored within the database 220. The system utilizes the model to analyze the data continuously and generate optimal network slicing strategies. This autonomous approach streamlines the network management process, enhances operational efficiency, and ensures customers receive the best possible network performance without manual intervention.
[0064] In an embodiment, the one or more processors retrieve multiple types of data from the one or more sources either in real time or non-real time. In the case of real-time data retrieval, the one or more processors actively gather data as it is generated or transmitted from various sources, ensuring the system has access to the most up-to-date information. On the other hand, non-real-time retrieval refers to the process where the processors access pre-existing, stored data from one or more sources. This stored data may include historical records, archived logs, or previously gathered data that is not updated in real time. The flexibility of retrieving both real-time and non-real-time data allows the system to utilize a comprehensive range of information for accurate analysis, enhancing the model's ability to make informed decisions based on both current and past network or user behavior.
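A minimal sketch of combining real-time and non-real-time retrieval follows; the live_source callable and archive list are hypothetical interfaces assumed only for illustration:

```python
def retrieve(live_source=None, archive=None):
    """Combine real-time and stored data into one dataset.

    'live_source' is a hypothetical zero-argument callable returning
    freshly generated records; 'archive' is a list of stored records.
    Either may be absent, mirroring real-time vs non-real-time retrieval.
    """
    records = []
    if archive:
        records.extend(archive)        # non-real-time: pre-existing, stored data
    if live_source:
        records.extend(live_source())  # real-time: data gathered on demand
    return records

stored = [{"ts": 1, "load": 0.4}]
fresh = lambda: [{"ts": 2, "load": 0.9}]
print(retrieve(live_source=fresh, archive=stored))
# [{'ts': 1, 'load': 0.4}, {'ts': 2, 'load': 0.9}]
```

Merging both feeds into one dataset is what lets downstream analysis weigh current conditions against historical behavior.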
[0065] In an embodiment, the retrieving unit 208 is configured to extract one or more data sets related to network slicing and customer usage from multiple sources. The data sets include, but are not limited to, customer onboarding data, deactivation data, historical network slice data, current network slice performance, device compatibility status and service-based data/logs. In one embodiment, the customer onboarding data refers to the information collected and processed during the onboarding process of new customers. In one embodiment, the deactivation data refers to the information collected and processed when the customer decides to terminate or deactivate their account or subscription with a service or product. In one embodiment, the historical network slice data refers to the recorded information about the performance, usage, and configurations of network slices over time. In one embodiment, the current network slice performance refers to the real-time metrics and indicators that reflect how well a specific network slice is operating at any given moment. In one embodiment, the device compatibility status refers to the assessment of whether a particular device can effectively operate with a specific application, software, or network services. In one embodiment, the retrieval of the one or more data sets from these sources is essential for effective network slicing planning. By leveraging insights from the retrieved one or more data sets, organizations can predict new network slicing plans and create a more responsive, efficient, and customer-focused network that meets diverse service demands and enhances overall user satisfaction. To facilitate the extraction of these data sets, the retrieving unit 208 interfaces with external databases, APIs, and real-time monitoring systems, ensuring the availability of up-to-date and relevant information.
This step enables the system 108 to continuously collect and analyze data necessary for predicting new network slicing plans and improving network performance for end users.
[0066] In the context of the present invention, predicting new network slicing plans refers to evaluating specific criteria that determine the optimal allocation of network slices based on customer usage patterns and network performance metrics. The criteria serve as foundational guidelines that the analysis unit 214 uses to assess historical and current network data. In an embodiment, the analysis unit 214 employs the trained model 218 to perform this evaluation in real-time, ensuring timely and dynamic adjustments to network slicing plans.
[0067] In an embodiment, the trained model 218 is designed to execute multiple advanced algorithms that serve distinct functions, including prediction, anomaly detection, and the generation of outputs through large language models (LLMs). The model leverages ML techniques to analyze both network data and operational data to deliver a comprehensive and intelligent analysis. The trained model 218 uses predictive algorithms to forecast future network behaviors and trends. Based on historical and real-time network data, such as traffic patterns, user demand, and performance metrics, the trained model 218 can predict events like network congestion, resource depletion, or peak usage times. The trained model 218 also incorporates anomaly detection algorithms to identify deviations or irregularities in the behavior of the network 106. By continuously monitoring network and operational data, the trained model 218 can flag unusual patterns, such as performance degradation, security threats, or hardware failures, prompting timely interventions to prevent service disruptions or system failures. Additionally, the trained model 218 utilizes LLMs to generate outputs that assist with decision-making, documentation, or automated responses. This generative AI capability can synthesize complex information into human-readable formats, provide actionable recommendations for network management, or even generate scripts for automating network configurations based on the analyzed data.
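The anomaly-detection function described above may be illustrated with a deliberately simple statistical sketch; a production trained model 218 would apply learned algorithms rather than this z-score rule, and the latency values below are invented for illustration:

```python
def flag_anomalies(samples, threshold=3.0):
    """Flag indices of samples deviating from the mean by more than
    `threshold` standard deviations -- a simple stand-in for the
    anomaly-detection algorithms executed by a trained model.
    """
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / n
    std = variance ** 0.5
    if std == 0:
        return []  # constant data: nothing can deviate
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / std > threshold]

latency_ms = [10, 11, 9, 10, 12, 10, 95, 11, 10, 9]
print(flag_anomalies(latency_ms, threshold=2.0))  # [6]
```

Flagged indices would then prompt the timely interventions mentioned above, such as investigating the slice whose latency spiked.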
[0068] In another embodiment, the trained model 218 is developed using a combination of historical network slice data, customer activity data, and current network performance metrics sourced from the retrieving unit 208. The identified criteria include, but are not limited to, factors such as customer demand patterns, current network load, service plan alignment, device compatibility, and network coverage, which collectively inform the process of predicting and optimizing new network slicing plans.
[0069] In an embodiment, the retrieved data is subsequently pre-processed by the one or more processors 202, which involves multiple stages to prepare the data for optimal use in subsequent analysis by the model 218. The pre-processing unit 210 is responsible for cleaning, normalizing, formatting, and organizing the data. This ensures consistency across diverse data sources, such as customer usage patterns, network performance metrics, and historical network slicing data. In the cleaning phase, the pre-processing unit 210 eliminates irrelevant, duplicate, or corrupt entries from the data sets, thereby reducing noise and improving the accuracy of predictions. During the normalization process, data is transformed into a consistent format—such as standardizing units or scaling numerical values—allowing the model 218 to process it efficiently. The pre-processing unit 210 further organizes the data into structured formats, facilitating smooth data ingestion by the analysis unit 214. Additionally, feature extraction may be performed, wherein the pre-processing unit 210 identifies and highlights relevant parameters, such as customer demand trends, network congestion metrics, and service plan details, which are critical for predicting new network slicing plans. These pre-processing steps enhance the speed, accuracy, and reliability of the system’s analysis, ensuring the real-time adaptation of network slicing plans to meet evolving customer and network demands.
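By way of illustration, the cleaning and normalization stages performed by the pre-processing unit 210 may be sketched as follows; the field name and records are hypothetical, and min-max scaling stands in for whatever normalization a deployment actually uses:

```python
def preprocess(records, field):
    """Clean and normalize raw records before model ingestion.

    Cleaning drops records whose value for `field` is missing or
    non-numeric; normalization min-max scales the field to [0, 1].
    """
    # Cleaning phase: keep only records with a usable numeric value.
    clean = [r for r in records
             if isinstance(r.get(field), (int, float))]
    if not clean:
        return []
    values = [r[field] for r in clean]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    # Normalization phase: rescale into [0, 1] for consistent model input.
    return [{**r, field: (r[field] - lo) / span} for r in clean]

raw = [{"load": 20}, {"load": None}, {"load": 80}, {"load": 50}]
print(preprocess(raw, "load"))
# [{'load': 0.0}, {'load': 1.0}, {'load': 0.5}]
```

Scaling heterogeneous metrics onto a common range is one conventional way to keep features from different sources comparable when fed to a model.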
[0070] In an embodiment of the present invention, the retrieving unit 208 may retrieve various types of data to facilitate network slicing management for a telecommunications provider. For instance, the system 108 gathers customer device specifications, service subscription details, and real-time network performance metrics from multiple sources, including the provider’s internal databases and third-party analytics platforms. Once this data is retrieved, the preprocessing unit 210 processes it to eliminate inconsistencies, normalize data formats, and categorize the information for effective analysis. The cleaned and structured data is then stored in the storage unit 114, ensuring it is readily accessible for subsequent analysis aimed at predicting optimal network slicing plans based on current demand and resource availability.
[0071] In an embodiment, the feeding unit 212 is configured to feed the pre-processed data related to network slicing management to the trained model 218 for further analysis. The feeding unit 212 interacts with the preprocessing unit 210 to receive structured data, including customer device specifications, service plan details, and network performance metrics. The feeding unit 212 ensures that this data is formatted according to the trained model 218 input requirements, optimizing the model's analytical capabilities. The said feeding process is dynamic, enabling the system 108 to adapt to changing data inputs and ensuring that the trained model 218 continuously improves as new data is retrieved and fed into the system 108.
[0072] Additionally, the feeding unit 212 is responsible for managing the data flow into the machine learning model to prevent data overload or bottlenecks. In one embodiment, the feeding unit 212 operates in real-time, supplying data to the trained model 218 immediately after it has been pre-processed. This step allows the system 108 to make timely predictions for new network slicing plans based on current demand and resource availability. The feeding unit 212 also organizes the data into relevant subsets based on specific criteria, such as customer profiles, device compatibility, or network performance, allowing the trained model 218 to focus on critical factors during its analysis, ultimately enhancing the efficiency and effectiveness of network slicing management.
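The feeding unit's flow control and criterion-based subsetting may be sketched, under the simplifying assumption that records are dicts and batches are bounded lists; the grouping key and batch size are illustrative:

```python
from collections import deque

def feed_in_batches(records, batch_size, key=None):
    """Yield bounded batches (optionally grouped by a criterion) so the
    model is never handed more data than it can absorb at once -- a
    simplified stand-in for the feeding unit's flow control.
    """
    if key is not None:
        records = sorted(records, key=key)  # group related records together
    queue = deque(records)
    while queue:
        yield [queue.popleft()
               for _ in range(min(batch_size, len(queue)))]

data = [{"profile": "iot"}, {"profile": "video"}, {"profile": "iot"}]
batches = list(feed_in_batches(data, batch_size=2,
                               key=lambda r: r["profile"]))
print(batches)
# [[{'profile': 'iot'}, {'profile': 'iot'}], [{'profile': 'video'}]]
```

Bounding batch size is a standard guard against the data overload and bottlenecks noted above, while the grouping key mirrors subsetting by customer profile or device compatibility.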
[0073] For example, in a scenario where a telecommunications provider aims to optimize network slicing strategies for improved resource allocation, the feeding unit 212 retrieves and supplies pertinent data to the trained model 218. The data encompasses current network usage statistics, customer device capabilities, and service plan specifics. By organizing this information in a structured format, the feeding unit 212 enables the trained model 218 to effectively analyze the data and predict optimal network slicing configurations. The trained model 218 then processes the input and generates recommendations for dynamic slicing plans, which can be implemented by the provider to enhance network performance and user experience across varying service levels.
[0074] In an embodiment, the one or more processors 202 are configured to enable a user to interact with a Graphical User Interface (GUI) running on a User Equipment (UE). This interface serves as a visual platform that allows the user to view the predicted network slicing plans generated by the trained model 218. The GUI may present various visualization tools such as graphs, charts, or dashboards, which provide insights into network performance, user demand, and resource allocation. By allowing users to interact with the interface, the system 108 facilitates seamless access to critical data, enabling them to make informed decisions regarding network slicing. Through the GUI, the one or more processors 202 allow the user to customize the predicted network slicing plans. The system 108 provides a range of interactive tools within the GUI that users can utilize to modify or fine-tune the slicing plans based on specific requirements or preferences. For example, users may adjust parameters such as bandwidth allocation, user prioritization, or service levels to meet specific operational needs. The customization capability enhances the flexibility of the network management process, allowing network operators to tailor the slicing plans to suit evolving network conditions or business goals.
[0075] In an embodiment, the analysis unit 214 utilizes the trained model 218 to perform data analysis for generating optimal network slicing plans. Upon receipt of pre-processed data from the feeding unit 212, the analysis unit 214 extracts relevant patterns and insights essential for effective network slicing management. It evaluates parameters including network utilization, device capabilities, service requirements, and performance metrics to create precise slicing profiles. Through the application of machine learning algorithms, the analysis unit 214 identifies critical correlations between the data inputs and slicing criteria, thereby enhancing the accuracy and efficiency of network slicing management.
[0076] In one embodiment, the system 108 executes an analytical process whereby the pre-processed data, subsequent to retrieval from the storage unit 114, is subjected to predictive analysis utilizing the trained model 218. Said predictive analysis comprises evaluating the pre-processed data to forecast network slicing requirements and resource allocation patterns. The system 108 is configured to execute both retrospective analysis of historical network data and concurrent analysis of real-time network parameters, thereby enabling dynamic updating of network slicing predictions as new data is received and processed. The analytical process further comprises generating, based on the predictive analysis, one or more network slicing plans optimized for anticipated network conditions and resource requirements. Such predictive capability facilitates proactive network resource allocation, wherein network slicing parameters are adjusted in advance of predicted changes in network utilization patterns. The system 108 is further configured to transmit, to the UE 102, a visual representation of the generated network slicing plans, said visual representation comprising data indicative of the predicted resource allocation patterns and associated network performance metrics.
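A minimal sketch of the predictive provisioning described above, using a moving-average forecast as a deliberately simple stand-in for the trained model 218; the usage figures and headroom factor are illustrative assumptions:

```python
def forecast_demand(history, window=3):
    """Forecast the next utilization value as the mean of the last
    `window` observations -- a simple placeholder for the trained
    model's predictive analysis.
    """
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_slice(history, headroom=1.2, window=3):
    """Provision capacity ahead of predicted load with a safety margin,
    mirroring proactive adjustment in advance of predicted changes."""
    return forecast_demand(history, window) * headroom

usage = [40, 45, 50, 55, 60]  # e.g. Mbps per interval
print(round(plan_slice(usage), 1))  # 66.0
```

Here the plan allocates 20% above the forecast mean of the last three intervals, so the slice is resized before the predicted demand arrives rather than after congestion is observed.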
[0077] For instance, in a scenario where a telecommunications provider anticipates increased demand for bandwidth during a major sporting event, the analysis unit 214 may analyze historical network usage data from similar past events along with real-time data on current user activity. By applying the trained model 218, the analysis unit 214 predicts specific network slicing requirements that will be necessary to accommodate the surge in users and traffic. The predictive analysis allows the provider to dynamically adjust network resources in advance, ensuring sufficient capacity and optimal performance during the event. Consequently, the network provider can preemptively allocate resources and implement slicing strategies that enhance user experience and maintain service quality during peak usage times.
[0078] In an embodiment, the generating unit 216 is configured to create a visual representation of the predicted network slicing plans based on the analysis results from the analysis unit 214. Upon receiving the insights generated from the analysis, the generating unit 216 processes this information to produce comprehensive visual outputs that effectively communicate the proposed network slicing strategies. The visual representation may include graphical elements such as flowcharts, network diagrams, or dashboards that highlight key metrics, performance indicators, and expected resource allocations pertaining to the new network slicing plans.
[0079] The generating unit 216 further customizes the visual representation based on predefined parameters or user specifications, ensuring that the output is tailored to the operational needs of the network provider. By incorporating various data visualizations, the generating unit 216 enhances the clarity and interpretability of the analysis results, allowing decision-makers to swiftly assess the implications of the proposed slicing plans and prioritize implementation strategies. The visual output is designed to be intuitive and user-friendly, empowering stakeholders to derive actionable insights and facilitate effective network resource management efficiently.
[0080] For example, in a scenario where a telecommunications provider aims to optimize its network slicing strategy for a new 5G rollout, the generating unit 216 processes the analysis results from the analysis unit 214, which includes various factors such as customer demand patterns, expected traffic loads, and device compatibility. The generating unit 216 creates a visual representation in the form of an interactive dashboard that displays multiple network slicing plans, each tailored to different customer segments. The dashboard includes graphical elements like pie charts indicating the percentage of resources allocated to each slice, line graphs depicting anticipated network performance under varying loads, and color-coded maps illustrating geographic coverage for each slicing plan. By utilizing this visual output, decision-makers can quickly assess which network-slicing plans will best meet customer demands and network efficiency goals. Additionally, the dashboard allows for real-time adjustments based on changing data inputs, enabling the provider to make informed, strategic decisions regarding the deployment and management of its network slicing initiatives.
[0081] In an embodiment, the result of the predicted new network slicing plans is transmitted to at least one of the user or the entity. The user refers to a network operator or administrator responsible for managing the slicing configurations. Upon generation of the slicing plans, the one or more processors 202 transmit the result to the user for review, customization, or approval before implementation. The entity includes at least one of a network component, application, or microservice. The network component may represent hardware or software elements of the network, such as routers or switches, which execute the slicing configurations. The application refers to software services that require specific slicing parameters to ensure optimal performance, while the microservice refers to individual service units in a distributed architecture that may benefit from tailored resource allocation.
[0082] In an alternate embodiment, the generating unit 216 is configured to generate a notification regarding the predicted new network slicing plans to notify at least one of the service, the microservice, the application, and the network component. For example, the generating unit 216 notifies the predicted new network slicing plans by transmitting an acknowledgment to one of the service, the microservice, or the application using a handling unit 222. The handling unit 222 is configured to keep a record of mappings of the interactions of the entities (such as the service, microservice, application, and component) with the system 108. A mapping of an entity's interaction with the system 108 pertains to at least one of the entities transmitting commands and/or requests to the system 108 to predict new network slicing plans. Based on the mapping, the handling unit 222 informs the generating unit 216 of the entity to which the acknowledgment pertaining to the predicted new network slicing plans is to be transmitted. For example, if microservice 1 had transmitted the command at the outset to predict new network slicing plans, the handling unit 222 keeps track of this event and, on that basis, informs the generating unit 216 to transmit the acknowledgment (response) to microservice 1 pertaining to the predicted new network slicing plans.
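The request-to-entity mapping maintained by the handling unit 222 may be sketched as follows; the class, method, and identifier names are hypothetical and chosen only to illustrate routing an acknowledgment back to the originating entity:

```python
class HandlingUnit:
    """Track which entity requested a prediction so the acknowledgment
    can be routed back to it, as described for the handling unit 222.
    Entity and request identifiers are illustrative.
    """
    def __init__(self):
        self._requesters = {}

    def record_request(self, request_id, entity):
        # Record the mapping when an entity's command/request arrives.
        self._requesters[request_id] = entity

    def route_acknowledgment(self, request_id):
        # Return the entity that originated the request, if known.
        return self._requesters.get(request_id)

unit = HandlingUnit()
unit.record_request("req-42", "microservice-1")
print(unit.route_acknowledgment("req-42"))  # microservice-1
```

An unknown request identifier yields None, signalling that no originating entity is on record for that prediction.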
[0083] The present invention offers several key technical advantages including automated network slicing optimization using AI/ML algorithms, which enhances resource allocation accuracy and reduces manual planning intervention. The system 108 facilitates real-time processing of multiple data types including network performance metrics, user equipment capabilities, and historical usage patterns, enabling dynamic network slice configuration. The system 108 generates customizable visual representations for efficient interpretation of network slicing plans, while maintaining scalability to support extensive network data processing across multiple network segments. Additionally, the predictive analysis capabilities enhance proactive network management strategies, and improved slice allocation optimizes network resource efficiency. The architecture of the system 108 supports continuous model refinement through iterative learning from actual network performance data, and adaptable parameter configuration to meet evolving network requirements and user demands. The present invention further enables service providers to transition from generic, manually planned network slicing to automated, data-driven network slice management, thereby significantly reducing operational costs while improving customer experience through tailored network solutions.
[0084] FIG. 3 illustrates an exemplary block diagram of the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing network slicing in the communication network 106. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration, and should in no way be construed as limiting the scope of the present disclosure.
[0085] FIG. 3 illustrates the communication between the UE 102, the system 108, and the storage unit 114. In the context of the present invention, the UE 102 and the storage unit 114 utilize network protocol connections to communicate with the system 108. In an embodiment, the network protocol connection involves the establishment and management of communication between the UE 102, the system 108, and the storage unit 114 using specific protocols tailored for network slicing operations. The network protocol connection may include, but is not limited to, Session Initiation Protocol (SIP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), and Simple Network Management Protocol (SNMP). These protocols facilitate the real-time allocation, monitoring, and adjustment of network slices to ensure optimal resource distribution and performance across the network.
[0086] In an embodiment, the UE 102 includes a primary processor 302, a memory 304, and a user interface 306. In alternate embodiments, the UE 102 may include more than one primary processor 302, as per the requirement of the communication network 106. The primary processor 302 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0087] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to transmit requests for generating and optimizing network slicing plans within the network. The memory 304 may include any non-transitory storage device, such as volatile memory like RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0088] In an embodiment, the user interface 306 of the UE 102 includes various interfaces, such as a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 306 is configured to allow users to transmit requests related to managing network slicing. The UE 102, through the user interface 306, enables users to interact with the system 108, submitting requests for data processing, analysis, and visualization of network slicing configurations. These requests are transmitted to the processor 202 via the user interface 306, facilitating the efficient management and allocation of network resources through network slicing.
[0089] In one embodiment, the processor 202 is configured for managing network slicing in the network 106.
[0090] As mentioned earlier in FIG. 2, the system 108 includes the processor 202 and the memory 204 for managing network slicing between the UE 102 and the storage unit 114, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0091] Further, as mentioned earlier the processor 202 includes the retrieving unit 208, the preprocessing unit 210, the feeding unit 212, the analysis unit 214, the generating unit 216, the trained model 218 and the handling unit 222 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0092] FIG. 4 is an exemplary architecture of the system 108 for managing network slicing within the network, according to one or more embodiments of the present disclosure. The system 108 is designed to retrieve, process, analyze, and generate network slicing configurations, ensuring optimal resource allocation and network performance.
[0093] The architecture 400 pertains to the system 108 which includes, a User 404, a graphical user interface (GUI) login unit 406, a Fault Management System (FMS) interface 408, and a data integration unit 410. The processor 202 includes a data pre-processing module 412, an algorithm execution module 414, a data Lake 416, a workflow manager 418, a trend analysis unit 420 and a user interface 422.
[0094] In an exemplary embodiment, the processor 202 is designed to execute multiple algorithms, including prediction, anomaly detection, and LLM generative AI outputs, which take network data and operational data as input to perform ML analysis.
[0095] In an embodiment, the system 108 includes the user 404, which provides access to service providers for managing network slicing. The user 404 refers to employees or authorized personnel of the service provider who need to interact with the system 108. Given the critical nature of network slicing operations and data, access to the system 108 may be restricted to authorized personnel only. The user 404 may include network engineers, data analysts, system administrators, and managers responsible for monitoring and optimizing network performance through dynamic slicing configurations.
[0096] In another embodiment, the GUI login unit 406 serves several purposes, including verifying the identity of users attempting to access the system 108, determining the level of access or permissions each user has within the system 108, protecting sensitive customer and network data from unauthorized access, and potentially customizing the interface or the data presented based on the user's 404 role or permissions. The system 108 that is logged into may integrate with the workflow manager 418 to assign tasks or present relevant information to specific users based on their roles and the current state of various workflows.
[0097] In an embodiment, the FMS interface 408 is a sophisticated data management platform designed to streamline operations within the network slicing management system. The FMS interface 408 functions by collecting and processing a variety of data types, including network performance metrics, device compatibility information, service usage data, and system-generated logs. This multifaceted approach ensures that the system 108 maintains a comprehensive view of network activity and resource allocation, enabling better decision-making for optimizing and managing network slicing.
[0098] In an embodiment, the data integration unit 410 is a critical component of the system 108, designed to seamlessly amalgamate diverse datasets originating from multiple sources into the network slicing management process. The data integration unit 410 employs a structured methodology to ensure the coherent alignment and unification of disparate data types, such as network performance metrics, service usage patterns, device data, and system-generated logs. The integration process includes the extraction, transformation, and loading (ETL) of data, where data from various origins is retrieved, standardized, and consolidated into a unified format suitable for subsequent analysis by the processor 202 for network slicing optimization.
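The extract, transform, and load (ETL) flow described above can be sketched as follows. This is a minimal illustration only, assuming hypothetical source formats and field names such as `slice_id` and `metric`, none of which appear in the specification itself.

```python
# Illustrative ETL sketch for a data integration step such as the one
# described for the data integration unit 410. All record schemas and
# field names here are assumptions for demonstration purposes.

def extract(sources):
    """Gather raw records from each data source (metrics, logs, etc.)."""
    records = []
    for source in sources:
        records.extend(source)
    return records

def transform(records):
    """Standardize heterogeneous records into one unified schema."""
    unified = []
    for r in records:
        unified.append({
            "slice_id": r.get("slice") or r.get("slice_id"),
            "metric": r.get("metric", "unknown"),
            "value": float(r.get("value", 0.0)),
        })
    return unified

def load(unified, store):
    """Consolidate standardized records into a single store for analysis."""
    store.extend(unified)
    return store

# Usage: two sources with differing field names are merged into one format.
perf_metrics = [{"slice": "eMBB-1", "metric": "latency_ms", "value": "12.5"}]
usage_logs = [{"slice_id": "IoT-2", "metric": "throughput_mbps", "value": 3}]
store = load(transform(extract([perf_metrics, usage_logs])), [])
```

The point of the transform stage is that records arriving with different key names and value types leave it in one consistent, typed format, which is the precondition for the downstream analysis described in the next paragraph.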
[0099] The functionality of the data integration unit 410 further encompasses data normalization and validation protocols, ensuring that the integrated datasets are free from discrepancies and inconsistencies. This ensures data integrity and enhances the reliability of the analyses performed by the trained model 218. By providing a consolidated view of the network slicing environment, the data integration unit 410 enables advanced analytical operations, such as trend analysis and predictive modeling. This empowers the processor 202 to efficiently predict and optimize network slicing strategies based on current and future network demands.
[00100] In an embodiment, the processor 202 includes the data preprocessing module 412. The data preprocessing module 412 is responsible for processing data retrieved from the FMS interface 408. This data preprocessing encompasses both data normalization and data cleaning. Data normalization refers to the systematic reorganization of data within a database to enable users to perform subsequent queries and analyses effectively. Data cleaning involves the identification and rectification or removal of erroneous, corrupted, improperly formatted, duplicated, or incomplete entries within a dataset. The data preprocessing module 412 operates to normalize and cleanse the data acquired from the FMS interface 408, thereby ensuring the integrity and utility of the data for further analytical processes.
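The cleaning and normalization operations described for the data preprocessing module 412 can be sketched as below. The record schema, the duplicate key, and the use of min-max scaling are all illustrative assumptions; the specification does not prescribe a particular normalization scheme.

```python
# Sketch of data cleaning (drop duplicate, incomplete, or corrupted entries)
# and normalization (min-max scaling), as described for the data
# preprocessing module 412. Field names and thresholds are assumptions.

def clean(records):
    """Drop duplicated, incomplete, or malformed entries."""
    seen, cleaned = set(), []
    for r in records:
        key = (r.get("slice_id"), r.get("timestamp"))
        if None in key or key in seen:
            continue  # incomplete or duplicate entry
        try:
            r = dict(r, value=float(r["value"]))  # reject unparseable values
        except (KeyError, TypeError, ValueError):
            continue
        seen.add(key)
        cleaned.append(r)
    return cleaned

def normalize(values):
    """Min-max scale a list of numeric values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0
    return [(v - lo) / span for v in values]

raw = [
    {"slice_id": "s1", "timestamp": 1, "value": "10"},
    {"slice_id": "s1", "timestamp": 1, "value": "10"},   # duplicate
    {"slice_id": "s2", "timestamp": 2, "value": "oops"}, # corrupted
    {"slice_id": "s3", "timestamp": 3, "value": "30"},
]
cleaned = clean(raw)
scaled = normalize([r["value"] for r in cleaned])
```

Cleaning before normalization matters: a single corrupted outlier would otherwise distort the min-max range for every other sample.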
[00101] The processor 202 further comprises the algorithm execution module 414, which is integral to its operational capabilities. The algorithm execution module 414 employs advanced artificial intelligence and machine learning (AI/ML) techniques to analyze the integrated datasets related to network slicing. Through this analytical process, the algorithm execution module 414 can discern intricate patterns and trends within the data, facilitating the identification of optimal network slicing strategies and the allocation of resources. This identification process utilizes both historical and real-time data, thereby enabling informed decision-making regarding network resource management and customer engagement in the context of evolving service offerings.

[00102] Additionally, the processor 202 encompasses the data lake 416, which serves as a distributed repository for storing processed data and the outputs generated by the algorithms executed within the algorithm execution module 414. The data lake 416 is designed to accommodate vast volumes of data, ensuring scalability and flexibility in data management. The architecture of the data lake 416 allows for efficient storage and retrieval of diverse data types, enhancing the analytical capabilities of the system 108.

[00103] The processor 202 further includes the workflow manager 418, which orchestrates and manages workflows within the system 108. The workflow manager 418 oversees the systematic execution of the activities necessary to achieve specific objectives, ensuring that each step of the process is performed in a coordinated and efficient manner. In operation, the user interface 422 sends a request to the workflow manager 418, which further forwards it to the FMS interface 408. Additionally, the processor 202 includes the trend analysis unit 420, which employs advanced analytical techniques to scrutinize both current and historical data. The trend analysis unit 420 performs trend analysis, which is the process of collecting and analyzing data over time to identify patterns, trends, and changes. The trend analysis enables organizations to understand how certain factors evolve, which can inform decision-making, strategic network planning, and performance improvement. For example, in order to perform trend analysis, the trend analysis unit 420 collects historical data on network usage, including at least one of, but not limited to, traffic patterns, user behavior, device types, and performance metrics for existing network slices, such as throughput, latency, and resource utilization. Further, the trend analysis unit 420 analyses the collected data to identify trends related to the existing network slices. Based on the identified trends, the trend analysis unit 420 facilitates the system 108 in predicting the network slicing plans.
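The kind of usage trend analysis described above can be sketched as a simple hourly aggregation over historical samples. The sample data, the peak-detection threshold (`factor`), and the function names are all illustrative assumptions, not the claimed method.

```python
# Minimal sketch of a trend analysis step: average traffic per hour of day
# across historical samples, then flag recurring peak hours. Data values
# and the `factor` threshold are purely illustrative assumptions.
from collections import defaultdict

def hourly_trend(samples):
    """Average traffic per hour of day across many days of history."""
    totals, counts = defaultdict(float), defaultdict(int)
    for hour, mbps in samples:
        totals[hour] += mbps
        counts[hour] += 1
    return {h: totals[h] / counts[h] for h in totals}

def peak_hours(trend, factor=1.2):
    """Flag hours whose average load exceeds `factor` times the overall mean."""
    mean = sum(trend.values()) / len(trend)
    return sorted(h for h, v in trend.items() if v > factor * mean)

# Two days of (hour, Mbps) samples with an evening streaming spike.
samples = [(10, 40), (19, 200), (21, 220), (10, 60), (19, 240), (21, 180)]
trend = hourly_trend(samples)
peaks = peak_hours(trend)  # the evening hours stand out from the daily mean
```

A real deployment would operate over far richer features (device types, per-slice latency and utilization, as the paragraph notes), but the structure is the same: aggregate history, compare against a baseline, and pass the identified trend to the prediction stage.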

[00104] The trend analysis facilitates the examination and prediction of optimal network slicing strategies and identifies customer segments that can benefit from these advancements. Furthermore, the system 108 features the user interface 422, which acts as the visualization layer for presenting analytical results to service providers. Through this user interface 422, the identified strategies and relevant customer insights, as detected by the processor 202, are displayed graphically. This graphical representation enhances the accessibility and comprehensibility of the analysis results, empowering service providers to make informed decisions based on the insights provided.

[00105] In an embodiment, the AI/ML system-FMS interface comprises the model, which is integral to the functionality of the system 108. The processor 202 predicts and identifies optimal network slicing strategies by analyzing historical data and current operational metrics. This predictive capability enhances the system's 108 ability to recognize customer segments that can benefit from advancements in network slicing technology. Advantageously, the present invention streamlines the identification process, saving time and resources by automatically detecting eligible customers for migration to enhanced service offerings.

[00106] FIG. 5 is a flow diagram illustrating the method for managing network slicing in the network, according to one or more embodiments of the present invention. At step 502, the UE 102 transmits a request to the system 108 for predicting the new network slicing plan. Herein, the message pertains to at least one of, but not limited to, a connection request message, a data request message, a service request message, and an authentication message. In one embodiment, the UE 102 sends the request to the system 108 for accessing a specific service, such as network slicing planning. Based on the request provided by the user via the UE 102, the system 108 initiates the method for managing network slicing in the network 106. The message includes at least one of, but not limited to, a UE identifier and a type of the request, which indicates that the request is the service request.

[00107] At step 504, the system 108 retrieves multiple types of data from the storage unit 114. At step 506, the system 108 preprocesses the retrieved data to ensure data quality and compatibility. At step 508, the preprocessed data is fed to a model for training, establishing a bidirectional data flow between the system 108 and the storage unit 114. At step 510, the system 108 analyzes the preprocessed data utilizing the trained model 218 to evaluate current network configurations and usage patterns. At step 512, based on the analysis, the system 108 predicts new network slicing plans optimized for various user requirements and network conditions. At step 514, the storage unit 112 stores the predicted new network slicing plans. At step 516, the system 108 generates a visual representation of the predicted network slicing plans and transmits it to the UE 102, enabling informed decision-making regarding network resource allocation and management.
[00108] For example, in a scenario where a telecommunications provider seeks to enhance its network performance, the UE 102 initiates the process by transmitting a message to the system 108, indicating a request for network slicing management. At step 502, the system 108 receives this message and proceeds to step 504, where it retrieves multiple types of data from the storage unit 114, including historical network usage data, current device compatibility information, and customer service plans.
[00109] At step 506, the system 108 preprocesses the retrieved data to remove any inconsistencies and ensure compatibility with the analysis requirements. Following this, at step 508, the preprocessed data is fed into the model for training, establishing a bidirectional data flow between the system 108 and the storage unit 114 to enable continuous updates.
[00110] At step 510, the system 108 analyzes the preprocessed data utilizing the trained model 218 to evaluate current network configurations and user behavior patterns. For example, consider a telecommunications company managing a 5G network that serves millions of customers with diverse service needs, such as video streaming, IoT applications, and online gaming. The trained model 218 analyzes historical data from the network, including customer usage patterns, network load, and service performance metrics. During the trend analysis, the trained model 218 detects that in a certain metropolitan area, video streaming spikes between 7 PM and 10 PM, while IoT devices require low latency but minimal bandwidth throughout the day. Based on this analysis, the trained model 218 predicts new network slicing plans that allocate additional bandwidth and resources to video streaming services during the evening hours to handle the higher demand. Simultaneously, it ensures that IoT devices continue to operate with the low-latency requirements, even during peak periods, by dedicating a separate slice with lower bandwidth but guaranteed latency. Based on this analysis, at step 512, the system 108 predicts new network slicing plans that are optimized for diverse user requirements, such as prioritizing high-bandwidth applications during peak usage times. At step 514, the system 108 transmits the predicted network slicing plans to the storage unit 112 for storing. Finally, at step 516, the system 108 generates the visual representation of these predicted network slicing plans and transmits it back to the UE 102, allowing network administrators to make informed decisions regarding resource allocation and management strategies.
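The worked example above (an evening video-streaming spike alongside steady low-latency IoT traffic) can be turned into a slicing plan along these lines. The slice templates, bandwidth figures, and latency bounds below are illustrative assumptions only; the specification does not fix any concrete values.

```python
# Hypothetical sketch of mapping identified per-service trends to slice
# configurations, mirroring the streaming/IoT example in the text. Slice
# names, bandwidths, windows, and latency bounds are assumptions.

def predict_slicing_plan(trends):
    """Map per-service trend findings to slice configurations."""
    plan = []
    for service, t in trends.items():
        if t["traffic_profile"] == "evening_peak":
            # Extra bandwidth reserved for the evening demand window.
            plan.append({"service": service, "slice": "eMBB",
                         "bandwidth_mbps": 500, "window": "19:00-22:00"})
        elif t["traffic_profile"] == "low_rate_low_latency":
            # Dedicated low-bandwidth slice with a guaranteed latency bound.
            plan.append({"service": service, "slice": "low-latency",
                         "bandwidth_mbps": 5, "max_latency_ms": 10})
    return plan

trends = {
    "video_streaming": {"traffic_profile": "evening_peak"},
    "iot_telemetry": {"traffic_profile": "low_rate_low_latency"},
}
plan = predict_slicing_plan(trends)
```

The design point the paragraph makes is separation of concerns: the streaming slice scales bandwidth in a time window, while the IoT slice trades bandwidth for a guaranteed latency bound, so neither workload degrades the other during peak hours.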
[00111] FIG. 6 illustrates a flow chart of the method 600 for managing network slicing in the network, according to one or more embodiments of the present invention. The method 600 described below outlines the sequential steps involved in retrieving and processing data, training models, and predicting network slicing plans. It is purely exemplary in nature and should not be construed as limiting the scope of the present invention.
[00112] In another embodiment, the method 600 for managing network slicing in the network is disclosed. The method 600 involves multiple steps. At step 602, the method 600 includes the step of retrieving multiple types of data from one or more sources, which may include historical data, current network configurations, service usage data, and customer feedback. At step 604, the method 600 includes the step of preprocessing the multiple types of retrieved data to ensure its suitability for training the model, and the pre-processed data is stored in the storage unit 114. At step 606, the method 600 includes the step of feeding the pre-processed data into the model for training. At step 608, once the model is trained, the method 600 includes the step of analyzing the preprocessed data to predict new network slicing plans utilizing the trained model 218.
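The sequence of steps 602 through 610 can be sketched as a simple pipeline, with each stage stubbed out; the stub bodies and data values are placeholders standing in for the units described in the foregoing paragraphs, not a claimed implementation.

```python
# Pipeline skeleton mirroring steps 602-610 of method 600. Each function is
# an illustrative stub; real stages would invoke the retrieving unit 208,
# preprocessing unit 210, feeding unit 212, analysis unit 214, and
# generating unit 216 described in the specification.

def retrieve():            # step 602: retrieve data from one or more sources
    return [{"slice_id": "s1", "value": "42"}]

def preprocess(data):      # step 604: clean/normalize the retrieved data
    return [dict(d, value=float(d["value"])) for d in data]

def train(data):           # step 606: feed pre-processed data to the model
    return {"trained_on": len(data)}

def analyze(model, data):  # step 608: predict new network slicing plans
    return [{"plan": "slice-A", "based_on": model["trained_on"]}]

def visualize(plans):      # step 610: generate a representation of the plans
    return "\n".join(f"{p['plan']} (samples={p['based_on']})" for p in plans)

data = preprocess(retrieve())
plans = analyze(train(data), data)
report = visualize(plans)
```

Expressing the method as a linear pipeline also makes the stated bidirectional flow natural to add: each stage can write its intermediate output back to the storage unit before the next stage reads it.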

[00113] At step 610, the method 600 includes the step of generating a visual representation of the predicted new network slicing plans based on the analysis utilizing the trained model 218. The visual representation may take the form of graphs, heat maps, or dashboards that allow network operators to easily assess the proposed slicing configurations. Additionally, the network slicing analysis logs are stored for monitoring purposes. In one embodiment, these logs may be stored within the system 108 or in separate storage means, providing a record of slicing plan predictions and visualizations. The logs ensure that the system 108 can track slicing plans effectively over time and assist in future decision-making for network resource allocation and management.
[00114] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions that are executed by the one or more processors 202 to optimize network resource allocation. In the initial step, the one or more processors 202 gather real-time data from various sources, including network traffic logs, user demand metrics, and resource utilization statistics. The gathered data is then processed to configure the relevant parameters using a trained model 218. Once the parameters are established, the processors 202 input this data into the trained model 218 to commence the optimization process. By employing the trained model 218, the processors 202 analyze the data to determine the most efficient allocation of network resources based on factors such as peak usage times, service level agreements, and current network conditions. Finally, the one or more processors 202 produce a visual representation of the optimized resource allocation strategy, with the processed data stored in the storage unit 114. This methodology ensures that network resources are allocated efficiently, thereby enhancing overall network performance and user experience.
[00115] Finally, the one or more processors 202 automatically execute actions to analyze and generate the visual representation of the predicted network slicing plans based on the analysis conducted by the trained model 218. This automation is essential for enhancing operational efficiency and minimizing the necessity for manual oversight. By automating the analysis and visualization process, the system 108 ensures that optimized network slicing plans are swiftly identified and presented, allowing network operators to make informed decisions and implement changes promptly. This approach not only streamlines network management but also significantly improves overall network performance and resource utilization.
[00116] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00117] The present invention offers significant advantages by employing the trained model 218 to automatically predict optimized network slicing plans, greatly minimizing the necessity for manual intervention. The present invention automates the data retrieval, preprocessing, analysis, and visualization stages using comprehensive datasets, thereby streamlining the workflow and ensuring the timely generation of effective slicing configurations. Moreover, the present invention facilitates continuous and automated analysis processes that enhance overall system performance by consistently monitoring network parameters for optimal slicing strategies. Additionally, the present invention supports automated network management processes that are inherently more adaptable, enabling the system 108 to efficiently accommodate varying network demands and handle large volumes of data with precision and reliability.
[00118] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[00119] Environment - 100;
[00120] User Equipment (UE) - 102;
[00121] Server - 104;
[00122] Communication Network- 106;
[00123] System -108;
[00124] Data Sources- 110;
[00125] Storage unit – 112;
[00126] Processor - 202;
[00127] Memory - 204;
[00128] User Interface – 206;
[00129] Retrieving unit– 208;
[00130] Preprocessing unit- 210;
[00131] Feeding unit – 212;
[00132] Analysis unit – 214;
[00133] Generating unit - 216;
[00134] Trained Model –218;
[00135] Database –220;
[00136] Primary processor- 302;
[00137] Memory- 304;
[00138] User Interface – 306;
[00139] User – 404;
[00140] GUI Login unit– 406;
[00141] FMS Interface – 408;
[00142] Data Integration unit– 410;
[00143] Data Preprocessing module – 412;
[00144] Algorithm execution module – 414;
[00145] Data Lake– 416;
[00146] Workflow Manager – 418;
[00147] Trend Analysis unit– 420;
[00148] User Interface – 422.

CLAIMS:
We Claim:
1. A method (600) of managing network slicing in a network, the method (600) comprising the steps of:
retrieving (602), by one or more processors (202), multiple types of data from one or more sources (110);
preprocessing (604), by the one or more processors (202), the multiple types of retrieved data;
feeding (606), by the one or more processors (202), the pre-processed data to a model (218) for training;
analysing (608), by the one or more processors (202), utilizing the trained model (218), the pre-processed data to predict new network slicing plans; and
generating (610), by the one or more processors (202), a visual representation of the predicted new network slicing plans based on the analysis.

2. The method (600) as claimed in claim 1, wherein the multiple types of data include at least one of, customers onboarding data, customers deactivation data, customer historical data, historical network slice data, current network slice data and service-based data/logs.

3. The method (600) as claimed in claim 1, wherein the one or more processors (202) retrieves the multiple types of data from the one or more sources (110) in real time or non-real time, wherein the non-real time represents retrieving the multiple types of data which is stored in the one or more sources (110).

4. The method (600) as claimed in claim 1, wherein the multiple types of data are retrieved from the one or more sources (110) based on receiving a request from at least one of, a user or an entity, wherein the entity includes at least one of, a network component, application or a microservice.

5. The method (600) as claimed in claim 1, wherein the preprocessing of the retrieved data includes at least one of, normalizing the retrieved data and cleaning the retrieved data.

6. The method (600) as claimed in claim 5, wherein the pre-processed data is stored in a storage unit.

7. The method (600) as claimed in claim 1, wherein the step of analysing, by the one or more processors (202), utilizing the trained model (218), the retrieved data to predict new network slicing plans, includes the steps of:
performing, by the one or more processors (202), utilizing the trained model (218), a trend/pattern analysis related to one or more parameters of the plurality of customers; and
predicting, by the one or more processors (202), utilizing the trained model (218), the new network slicing plans based on the trend/pattern analysis.

8. The method (600) as claimed in claim 7, wherein the one or more parameters pertaining to the plurality of customers includes at least one of, historical data pertaining to the historical network slice data and the current network slice data.

9. The method (600) as claimed in claim 7, wherein the step of, predicting, utilizing the trained model (218), the new network slicing plans based on the trend/pattern analysis, further includes the steps of:
enabling, by the one or more processors (202), a user to interact with a Graphical User Interface (GUI) running on a User Equipment (UE) (102); and
allowing, by the one or more processors (202), the user to customize the predicted new network slicing plans based on the interaction of the user with the GUI using one or more tools.

10. The method (600) as claimed in claim 1, wherein the predicted new network slicing plans are stored in the storage unit.

11. The method (600) as claimed in claim 1, wherein the step of generating, by the one or more processors (202), a visual representation of the predicted new network slicing plans based on the analysis, further includes the step of:
displaying, by the one or more processors (202), the generated visual representation of the predicted new network slicing plans to the user on the GUI of the UE (102).

12. The method (600) as claimed in claim 11, wherein the generated new network slicing plans are displayed to the user in at least one of, a report, a graphical representation, and a pictorial representation.

13. The method (600) as claimed in claim 1, wherein a result of the predicted new network slicing plans is transmitted to at least one of, the user or the entity, wherein the entity includes at least one of, the network component, the application or the microservice.

14. A system (108) for managing network slicing in a network, the system (108) comprising:
a retrieving unit (208), configured to, retrieve, multiple types of data from one or more sources (110);
a preprocessing unit (210), configured to, preprocess, the multiple types of retrieved data;
a feeding unit (212), configured to, feed, the pre-processed data to a model (218) for training;
an analysis unit (214), configured to, analyse, utilizing the trained model (218), the pre-processed data to predict new network slicing plans; and
a generating unit (216), configured to, generate, a visual representation of the predicted new network slicing plans based on the analysis.

15. The system (108) as claimed in claim 14, wherein the multiple types of data include at least one of, customers onboarding data, customers deactivation data, customer historical data, historical network slice data, current network slice data and service-based data/logs.

16. The system (108) as claimed in claim 14, wherein the retrieving unit (208) retrieves the multiple types of data from the one or more sources (110) in real time or non-real time, wherein the non-real time represents retrieving the multiple types of data which is stored in the one or more sources (110).

17. The system (108) as claimed in claim 14, wherein the retrieving unit (208), retrieves the multiple types of data from the one or more sources (110) based on receiving a request from at least one of, a user or an entity, wherein the entity includes at least one of, a network component, application or a microservice.

18. The system (108) as claimed in claim 14, wherein the preprocessing of the retrieved data includes at least one of, normalizing the retrieved data and cleaning the retrieved data.

19. The system (108) as claimed in claim 18, wherein the pre-processed data is stored in a storage unit.

20. The system (108) as claimed in claim 14, wherein the analysis unit (214), analyses, utilizing the trained model (218), the retrieved data to predict new network slicing plans, by:
performing, utilizing the trained model (218), a trend/pattern analysis related to one or more parameters of the plurality of customers; and
predicting, utilizing the trained model (218), the new network slicing plans based on the trend/pattern analysis.

21. The system (108) as claimed in claim 20, wherein the one or more parameters pertaining to the plurality of customers includes at least one of, historical data pertaining to the historical network slice data and the current network slice data.

22. The system (108) as claimed in claim 14, wherein the analysis unit (214), is further configured to:
enable, a user to interact with a Graphical User Interface (GUI) running on a User Equipment (UE) (102); and
allow, the user to customize the predicted new network slicing plans based on the interaction of the user with the GUI using one or more tools.

23. The system (108) as claimed in claim 14, wherein the predicted new network slicing plans are stored in the storage unit.

24. The system (108) as claimed in claim 14, wherein the generating unit (216), generates, a visual representation of the predicted new network slicing plans based on the analysis, and further configured to:
display, the generated visual representation of the predicted new network slicing plans to the user on the GUI of the UE (102).

25. The system (108) as claimed in claim 24, wherein the generated new network slicing plans are displayed to the user in at least one of, a report, a graphical representation, and a pictorial representation.

26. The system (108) as claimed in claim 14, wherein a result of the predicted new network slicing plans is transmitted to at least one of, the user or the entity, wherein the entity includes at least one of, the network component, the application or the microservice.

27. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory, wherein said memory stores instructions which when executed by the one or more primary processors (302) causes the UE (102) to:
transmit, a request by the user to the one or more processors (202) for predicting the new network slicing plans; and
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.
