
System And Method For Managing Operations In A Network

Abstract: The present invention relates to a system (108) and a method (600) for managing operations in a network (106). The method (600) includes the step of retrieving data pertaining to the operation of the network (106) from an inventory unit (206). The method (600) further includes the step of training an Artificial Intelligence/Machine Learning (AI/ML) model (212) utilizing the retrieved data pertaining to the operation of the network (106). The method (600) further includes the step of predicting future trends of the network (106) utilizing the trained AI/ML model (212). Ref. Fig. 2


Patent Information

Filing Date
05 August 2023
Publication Number
06/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
3. Rizwan Ahmad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
4. Kapil Gill
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
5. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
6. Arpit Jain
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
7. Shashank Bhushan
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
8. Kamal Malik
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
9. Prakash Gaikwad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
10. Sameer Magu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
11. Rohtas Godara
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
12. Munir Bashir Sayyad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
13. Mayur Muralidhar Murkya
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
14. Kalidindi Vijaya Rama Raju
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
15. Anup Bhaskar Patil
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
16. Sunil Kumar Saraswat
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING OPERATIONS IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly to a method and a system for managing operations in a network.
BACKGROUND OF THE INVENTION
[0002] A Unified Inventory Management (UIM) system is responsible for managing user interactions, handling requests, and coordinating communication between different components.
[0003] The UIM system processes a wide range of operations and requests, such as call management, service provisioning, network configuration, and user authentication, among others. These operations collectively contribute to the overall performance, efficiency, and user experience.
[0004] Further, traditional approaches often rely on manual tracking and reporting of the above-mentioned operations, which are time-consuming, error-prone, and limited in their ability to handle large volumes of data. Consequently, telecom operators face challenges in gaining actionable insights into inventory usage patterns, anticipating resource requirements, and optimizing network operations.
[0005] Hence, there is a need for efficient methods and systems to identify and analyze trends associated with the operations performed by the UIM systems in telecom networks.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for managing operations in a network.
[0007] In one aspect of the present invention, the method for managing the operations in the network is disclosed. The method includes the step of retrieving, by one or more processors, data pertaining to the operation of the network from an inventory unit. The method further includes the step of training, by the one or more processors, an Artificial Intelligence/Machine Learning (AI/ML) model utilizing the retrieved data pertaining to the operation of the network. The method further includes the step of predicting, by the one or more processors, future trends of the network utilizing the trained AI/ML model.
[0008] In another embodiment, to retrieve the data pertaining to the operation of the network, the method comprises the steps of recording, by the one or more processors, the data pertaining to the operation of the network in an Application Data Record (ADR) file, wherein the operation corresponds to at least one of a customer provisioning order and a customer migration order, and wherein the data includes at least a number of customers available in the network at a given instant. Further, the method comprises extracting, by the one or more processors, the data pertaining to the operation of the network from the ADR file. Thereafter, the method comprises storing, by the one or more processors, the extracted data pertaining to the operation of the network in the inventory unit.
[0009] In yet another embodiment, the future trends correspond to at least one of an overall growth of the network and a circle wise growth of the network.
[0010] In yet another embodiment, to predict the future trends, the method comprises the steps of generating, by the one or more processors, historical and current trends associated with the operation of the network upon training of the AI/ML model. Thereafter, the method includes the step of analysing, by the one or more processors, utilizing the trained AI/ML model, the historical and the current trends associated with the operation of the network to detect a pattern between the historical trends and the current trends, wherein the future trends are predicted based on the detected pattern.
[0011] In yet another embodiment, the historical trends and the current trends correspond to a number of customers available in at least one of a circle of the network and the overall network.
[0012] In another aspect of the present invention, the system for managing the operations in the network is disclosed. The system includes a retrieving unit configured to retrieve data pertaining to the operation of the network from an inventory unit. The system includes a training unit configured to train an Artificial Intelligence/Machine Learning (AI/ML) model utilizing the retrieved data pertaining to the operation of the network. The system further includes a prediction unit configured to predict future trends of the network utilizing the trained AI/ML model.
[0013] In yet another aspect of the present invention, a non-transitory computer-readable medium is disclosed, having stored thereon computer-readable instructions that, when executed by a processor, configure the processor to retrieve data pertaining to the operation of the network from the inventory unit. The processor is further configured to train an Artificial Intelligence/Machine Learning (AI/ML) model utilizing the retrieved data pertaining to the operation of the network. The processor is further configured to predict future trends of the network utilizing the trained AI/ML model.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for managing operations in a network, according to one or more embodiments of the present invention;
[0017] FIG. 2 is an exemplary block diagram of a system for managing operations in the network, according to one or more embodiments of the present invention;
[0018] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0019] FIG. 4 is an exemplary architecture illustrating the flow for managing operations in the network, according to one or more embodiments of the present disclosure;
[0020] FIG. 5 is an exemplary signal flow diagram illustrating the flow for managing operations in the network, according to one or more embodiments of the present disclosure; and
[0021] FIG. 6 is a flow diagram of a method for managing operations in the network, according to one or more embodiments of the present invention.
[0022] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0024] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0025] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0026] The present invention discloses a system and a method for managing operations in a network. More particularly, the system described herein offers a comprehensive approach for predicting future requirements, and/or potential issues with an inventory unit such as a Unified Inventory Management (UIM) unit. The prediction or forecasting is based on identifying/analyzing trends of current operations performed by the inventory unit. The system uses an Artificial Intelligence/Machine Learning (AI/ML) model to capture current trends, perform analysis, and predict or forecast requirements and/or potential issues in the inventory unit.
[0027] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing operations in a network, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, and a system 108. A user interacts with the system 108 utilizing the UE 102.
[0028] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102, or to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0029] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0030] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0031] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0032] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity operating the server 104 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
[0033] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is adapted to be embedded within the server 104 or to be implemented as an individual entity.
[0034] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0035] FIG. 2 is an exemplary block diagram of the system 108 for managing the operations in the network 106, according to one or more embodiments of the present invention.
[0036] As per the illustrated and preferred embodiment, the system 108 for managing the operations in the network 106, includes one or more processors 202, a memory 204, and an inventory unit 206. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0037] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to manage operations in the network 106. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0038] As per the illustrated embodiment, the inventory unit 206 is configured to store data associated with the operations performed in the network 106. The inventory unit 206 is one of, but not limited to, the Unified Inventory Management (UIM) unit, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of inventory unit 206 types are non-limiting and may not be mutually exclusive (e.g., the database can be both commercial and cloud-based, or both relational and open-source).
[0039] In one embodiment, the inventory unit 206, such as a Unified Inventory Management (UIM) unit, is a standards-based telecommunications inventory management application that enables users to model and manage customers, services, and resources. The UIM unit serves as the backbone of the network 106. The inventory unit 206 stores the logical and physical inventory data of every asset, device, node, and application. In particular, the inventory unit 206 serves as a central repository for storing customer-related information. The customer-related information includes at least one of, but not limited to, a name, an address, a location, a mobile number, subscription plans, and a number of customers in the network 106.
[0040] As per the illustrated embodiment, the system 108 includes the processor 202 to manage operations in the network 106. The processor 202 includes a retrieving unit 208, a training unit 210, an Artificial Intelligence/Machine Learning (AI/ML) model 212, and a prediction unit 214. The processor 202 is communicably coupled to the one or more components of the system 108 such as the inventory unit 206, and the memory 204. In an embodiment, operations and functionalities of the retrieving unit 208, the training unit 210, the AI/ML model 212, the prediction unit 214, and the one or more components of the system 108 can be used in combination or interchangeably.
[0041] In an embodiment, the retrieving unit 208 of the processor 202 is configured to retrieve the data pertaining to the operation of the network 106 from the inventory unit 206. In one embodiment, the data is at least one of, but not limited to, a number of customers available in the network 106 at a given instant. In particular, the operation corresponds to at least one of, but not limited to, a customer provisioning order and a customer migration order. In one embodiment, the customer provisioning order is the process of adding customers to the network 106 and assigning telephony services to the customers. The customer provisioning order is also referred to as subscriber provisioning. In one embodiment, the customer migration order is the process of transferring customers from an existing subscriber management system to another system.
[0042] In one embodiment, initially the operations are performed by the processor 202 based on which the data pertaining to the operation of the network 106 is recorded in an Application Data Record (ADR) file. The ADR file is an organized collection of records pertaining to the operation of the network 106. For example, the ADR file may contain records pertaining to the customers. Further, the data pertaining to the operation of the network 106 is extracted and stored in the inventory unit 206. Thereafter, the retrieving unit 208 retrieves the stored data pertaining to the operation of the network 106 from the inventory unit 206. In an alternate embodiment, the data pertaining to the operation of the network 106 is extracted and provided directly to the retrieving unit 208 without requirement of storing the data in the inventory unit 206.
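For illustration only, the record/extract/store flow described above might be sketched as follows. The ADR schema, the field names, and the CSV-style layout are assumptions made for this sketch, not the actual ADR format; the inventory unit is modelled as a plain list.

```python
import csv
import io

# Hypothetical ADR schema: one row per operation performed on the network.
ADR_FIELDS = ["timestamp", "operation", "circle", "customer_count"]

def record_operation(adr_file, timestamp, operation, circle, customer_count):
    """Append one operation record to the ADR file (CSV-style for illustration)."""
    writer = csv.DictWriter(adr_file, fieldnames=ADR_FIELDS)
    writer.writerow({
        "timestamp": timestamp,
        "operation": operation,        # e.g. "provisioning" or "migration"
        "circle": circle,
        "customer_count": customer_count,
    })

def extract_records(adr_file):
    """Extract the recorded data from the ADR file for the inventory unit."""
    adr_file.seek(0)
    return list(csv.DictReader(adr_file, fieldnames=ADR_FIELDS))

# Record two operations, then extract and store them in the inventory unit.
adr = io.StringIO()
record_operation(adr, "2023-08-01T10:00", "provisioning", "Mumbai", 120)
record_operation(adr, "2023-08-01T10:05", "migration", "Delhi", 45)
inventory_unit = extract_records(adr)
print(inventory_unit[0]["operation"])  # provisioning
```

In the alternate embodiment, the list returned by `extract_records` would be handed directly to the retrieving unit instead of being stored first.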
[0043] In an alternate embodiment, subsequent to retrieving the stored data pertaining to the operation of the network 106 from the inventory unit 206, the processor 202 is configured to normalize the data retrieved by the retrieving unit 208. In particular, the processor 202 may include a normalizer to preprocess the retrieved data. The normalizer performs at least one of, but not limited to, data normalization. The data normalization is the process of at least one of, but not limited to, reorganizing the retrieved data, removing redundant data within the retrieved data, formatting the retrieved data, and removing null values from the retrieved data. The main goal of the normalizer is to achieve a standardized data format across the entire system 108. The normalizer ensures that the normalized data is stored appropriately in the inventory unit 206 for subsequent retrieval and analysis.
[0044] In an embodiment, the training unit 210 of the processor 202 is configured to train the AI/ML model 212 utilizing the retrieved data pertaining to the operation of the network 106. In an alternate embodiment, the training unit 210 of the processor 202 is configured to train the AI/ML model 212 utilizing the normalized data associated with the operation of the network 106. While training, the AI/ML model 212 tracks and monitors the retrieved data pertaining to the operation of the network 106. Further, the AI/ML model 212 learns at least one of, but not limited to, trends and patterns associated with the operation of the network 106. For example, the system 108 selects an appropriate AI/ML model 212, such as at least one of, but not limited to, a neural network or decision tree logic, from a set of available options. Thereafter, the selected AI/ML model 212 is trained using the normalized data. In one embodiment, the selected AI/ML model 212 is trained on historical data associated with the operation of the network 106.
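As an illustrative stand-in for the training step (the specification does not fix a model type), the sketch below fits an ordinary least-squares line to a historical monthly customer series; the data and time index are hypothetical:

```python
def fit_linear_trend(series):
    """Train a minimal model (least squares on a time index) on historical
    data -- a stand-in for training the AI/ML model 212 described above."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var              # learned monthly growth
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical monthly customer counts retrieved from the inventory unit.
history = [100, 110, 120, 130]
slope, intercept = fit_linear_trend(history)
print(slope, intercept)  # 10.0 100.0
```

A production deployment would substitute any of the models mentioned above (neural network, decision tree logic) for this closed-form fit.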
[0045] In an embodiment, upon training, the trained AI/ML model 212 is utilized by the prediction unit 214 of the processor 202 to predict future trends of the network 106. In one embodiment, a future trend is a general change in one variable compared to another over a period of time. For example, the trends pertain to changes in the number of the plurality of customers within the network 106. In particular, based on the training, the AI/ML model 212 enables the prediction unit 214 to generate at least one of historical trends and current trends associated with the operation of the network 106 by applying one or more logics. Herein, the historical trends and the current trends correspond to a number of customers available in at least one of a circle of the network 106 and the overall network 106.
[0046] In one embodiment, the one or more logics may include at least one of, but not limited to, k-means clustering, hierarchical clustering, Principal Component Analysis (PCA), Independent Component Analysis (ICA), deep learning logics such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), Generative Adversarial Networks (GANs), Q-Learning, Deep Q-Networks (DQN), reinforcement learning logics, etc.
[0047] Thereafter, the prediction unit 214 analyses the generated historical trends and the current trends associated with the operation of the network 106 utilizing the trained AI/ML model 212 to detect a pattern between the historical trends and the current trends. In one embodiment, the pattern is a series of data that repeats in a recognizable way. For example, the pattern pertains to the number of customers that are added every month in the network 106. Based on the detected pattern, the prediction unit 214 predicts the future trends of the network 106. Herein, the future trends correspond to at least one of, but not limited to, an overall growth of the network 106 and a circle-wise growth of the network 106.
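The pattern-then-prediction step above can be sketched as follows: the detected pattern is the month-over-month additions, and the future trend projects it forward. The circle name and counts are hypothetical examples, not data from the specification:

```python
def predict_future(history, months_ahead):
    """Detect the month-over-month pattern in the historical and current
    counts, then project it forward -- a sketch of the prediction unit 214."""
    # Pattern: the series of customers added each month.
    additions = [b - a for a, b in zip(history, history[1:])]
    avg_addition = sum(additions) / len(additions)
    current = history[-1]
    return [current + avg_addition * (i + 1) for i in range(months_ahead)]

# Hypothetical circle-wise monthly customer counts (e.g., one circle).
mumbai_history = [100, 112, 125, 138]
forecast = predict_future(mumbai_history, 3)
print(forecast)
```

Summing such per-circle forecasts would give the overall network growth; running the same function per circle gives the circle-wise growth.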
[0048] In one embodiment, the prediction unit 214 predicts at least one of but not limited to, the future requirements, and one or more potential issues associated with the inventory unit 206. For example, the prediction unit 214 predicts the trend pertaining to the number of the customers that will be added in the network 106 in future based on which network operators can take one or more actions such as at least one of, but not limited to, expanding the network 106 to maintain the quality of the services such as the telephony services provided to the customers.
[0049] The retrieving unit 208, the training unit 210, the AI/ML model 212, and the prediction unit 214, in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0050] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing operations in the network 106. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0051] FIG. 3 shows communication between the UE 102 and the system 108. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102 uses a network protocol connection to communicate with the system 108. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102 and the system 108 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS), and Terminal Network (TELNET).
[0052] In an embodiment, the UE 102 includes a primary processor 302, and a memory 304 and a User Interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0053] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to manage operations in the network 106. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0054] In an embodiment, the User Interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The User Interface (UI) 306 allows the user to transmit the request to the system 108 for performing the operation. In one embodiment, the user may include at least one of, but not limited to, a network operator.
[0055] In accordance with the exemplary embodiment, let us assume the inventory unit 206 is hosted on the server 104. The inventory unit 206 is configured to respond to all the requests received from the UE 102. Based on the requests, the inventory unit 206 performs the operation, such as storing the customer details. Further, the data pertaining to the operation performed by the inventory unit 206 is stored in the ADR file format, or any other defined format. The ADR file may be written and stored by the inventory unit 206. Further, the retrieving unit 208 is configured to retrieve the data stored in the ADR file. Upon retrieving the data, the AI/ML model 212 is trained utilizing the retrieved data, and the prediction unit 214 utilizes the trained AI/ML model 212 to predict future trends of the network 106 associated with the operation of the network 106.
[0056] As mentioned earlier in FIG. 2, the system 108 includes the processor 202, the memory 204, and the inventory unit 206 for managing operations in the network 106, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0057] Further, as mentioned earlier, the processor 202 includes the retrieving unit 208, the training unit 210, the AI/ML model 212, and the prediction unit 214, which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3 should be read with the description provided for the system 108 in FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0058] FIG. 4 is an architecture illustrating the flow for managing operations in the network 106, according to one or more embodiments of the present disclosure.
[0059] In one embodiment, the architecture 400 includes an external system 402, a UIM unit 404, a data lake 406, an ADR file 408, and the AI/ML model 212. Initially, the external system 402, such as the UE 102, transmits the request to the UIM unit 404 in order to perform a particular operation. Herein, the UIM unit 404 is the inventory unit 206 configured to perform the particular operation. In one embodiment, the UIM unit 404 stores the data pertaining to the particular operation in the data lake 406 for future analysis and retrieval. Further, the UIM unit 404 captures and stores the data pertaining to the operation in the ADR file 408. Furthermore, the ADR file 408 is fed to the AI/ML model 212 for training.
[0060] Thereafter, the AI/ML model 212 generates the historical trends and the current trends pertaining to the operation and detects the pattern between the generated historical trends and the current trends. Based on the detected pattern, the AI/ML model 212 predicts the future trends pertaining to the operation associated with the UIM unit 404. For example, the AI/ML model 212 predicts future trends pertaining to the overall customer growth in the network 106 or the circle-wise customer growth in the network 106 based on the operation performed by the UIM unit 404.
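The trend generation and pattern detection described above can be illustrated with a minimal sketch. The grouping key (`month`), the `provision` operation label, and the simple increasing/decreasing/mixed classification are assumptions made purely for illustration; the AI/ML model of the disclosure is not limited to such a rule.

```python
from collections import Counter

def monthly_additions(records: list) -> dict:
    """Group provisioning records into per-month customer-addition counts."""
    return dict(Counter(r["month"] for r in records if r["op"] == "provision"))

def detect_pattern(counts: dict) -> str:
    """Compare consecutive months to classify the overall trend."""
    values = [counts[m] for m in sorted(counts)]
    if all(b > a for a, b in zip(values, values[1:])):
        return "increasing"
    if all(b < a for a, b in zip(values, values[1:])):
        return "decreasing"
    return "mixed"

# Hypothetical ADR-derived records: 3, then 5, then 8 customers added.
records = ([{"op": "provision", "month": "2024-01"}] * 3
           + [{"op": "provision", "month": "2024-02"}] * 5
           + [{"op": "provision", "month": "2024-03"}] * 8)
counts = monthly_additions(records)
pattern = detect_pattern(counts)
```

Sorting the `YYYY-MM` keys lexicographically yields chronological order, so the pairwise comparison directly reflects month-over-month growth.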
[0061] FIG. 5 is a signal flow diagram illustrating the flow for managing operations in the network 106, according to one or more embodiments of the present disclosure.
[0062] At step 502, the UE 102 transmits the request to the inventory unit 206 in order to perform operations related to the plurality of customers. For example, the operation may be storing information of the plurality of customers while adding the plurality of customers in the network 106.
[0063] At step 504, the inventory unit 206 stores the data related to the operations in the ADR file format subsequent to performing the operations. In another embodiment, the inventory unit 206 stores the data related to the operations in any other predefined format.
[0064] At step 506, the inventory unit 206 transmits the data related to the operations performed to the AI/ML model 212 of the processor 202. In an alternate embodiment, the processor 202 retrieves the data related to the operations from the inventory unit 206. In yet another embodiment, the processor 202 retrieves the ADR file related to the operations from the inventory unit 206.
[0065] At step 508, the processor 202 trains the AI/ML model 212 utilizing the retrieved data. In an alternate embodiment, the processor 202 trains the AI/ML model 212 by feeding the ADR file to the AI/ML model 212.
[0066] At step 510, the processor 202 predicts the future trends pertaining to the operation utilizing the trained AI/ML model 212. In particular, the processor 202 generates the historical trends and the current trends, and analyzes the generated historical and current trends associated with the operation of the network 106 to detect the pattern between the historical trends and the current trends. Thereafter, the future trends are predicted based on the detected pattern.
[0067] At step 512, the processor 202 transmits the predicted future trends to the UE 102. In particular, the UE 102 displays the predicted future trends to the network operator via the UI 306. Based on the predicted future trends, the network operator may plan one or more actions in order to avoid degradation of the network 106.
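The signal flow of steps 502 through 512 can be approximated in a compact sketch, with an in-memory list standing in for the ADR file and a naive average standing in for the trained AI/ML model. All class and method names here are hypothetical and chosen only to mirror the step labels above.

```python
class InventoryUnit:
    """Steps 502-504: performs operations and records them in ADR form."""
    def __init__(self):
        self.adr = []  # in-memory stand-in for the ADR file

    def perform(self, operation: dict) -> None:
        self.adr.append(operation)

class Processor:
    """Steps 506-512: retrieves the ADR data, trains, and predicts."""
    def __init__(self, inventory: InventoryUnit):
        self.inventory = inventory
        self.monthly = []

    def train(self) -> None:
        # Step 508: "training" here is simply collecting the retrieved counts.
        self.monthly = [op["added"] for op in self.inventory.adr]

    def predict_next(self) -> float:
        # Step 510: a naive forecast -- the mean of past monthly additions.
        return sum(self.monthly) / len(self.monthly)

inv = InventoryUnit()
for added in (100, 110, 120):       # step 502: UE requests trigger operations
    inv.perform({"added": added})

proc = Processor(inv)
proc.train()                        # steps 506-508
forecast = proc.predict_next()      # step 510; step 512 would return this to the UE
```

The mean-based forecast is deliberately trivial; any trained model exposing the same `train`/`predict_next` surface could be dropped in.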
[0068] FIG. 6 is a flow diagram of a method 600 for managing operations in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0069] At step 602, the method 600 includes the step of retrieving the data pertaining to the operation of the network 106 from the inventory unit 206. In one embodiment, the retrieving unit 208 retrieves the data pertaining to the operation of the network 106 from the inventory unit 206. In particular, the operation corresponds to at least one of a customer provisioning order and a customer migration order. Further, the data corresponds to at least one of, but not limited to, the number of customers available in the network 106. For example, let us assume the operation pertains to adding customers in the network 106. At the time of the operation, the data pertaining to the operation, such as customer details, are added to the ADR file. The customer details are then extracted from the ADR file and stored in the inventory unit 206. Thereafter, the customer details are retrieved from the inventory unit 206 by the retrieving unit 208.
[0070] At step 604, the method 600 includes the step of training the Artificial Intelligence/Machine Learning (AI/ML) model 212 utilizing the retrieved data pertaining to the operation of the network 106. In particular, subsequent to retrieving the data from the inventory unit 206, the training unit 210 trains the AI/ML model 212 utilizing the retrieved data. Based on the training, the AI/ML model 212 identifies the trends and patterns pertaining to the operation of the network 106.
[0071] At step 606, the method 600 includes the step of predicting the future trends of the network 106 utilizing the trained AI/ML model 212. In one embodiment, the prediction unit 214 utilizes the trained AI/ML model 212 to generate the historical and the current trends associated with the operation of the network 106. For example, the prediction unit 214 generates trends pertaining to the number of customers added in the network 106 in the last three months and the number of customers added in the network 106 in a current month.
[0072] Thereafter, utilizing the trained AI/ML model 212, the prediction unit 214 analyses the historical and the current trends associated with the operation of the network 106 to detect the pattern between the historical trends and the current trends. For example, by comparing the number of customers added in the network 106 in the last three months with the number of customers added in the network 106 in the current month, the prediction unit 214 detects the pattern pertaining to the increasing number of customers added in the network 106 every month.
[0073] Based on the detected pattern, the prediction unit 214 predicts the future trends corresponding to at least one of, but not limited to, the overall growth of the network 106 and the circle-wise growth of the network 106. For example, let us assume 1 lakh customers are added to the network 106 every month of the current year. Based on the detected pattern pertaining to the customers added in the network 106, the prediction unit 214 predicts the future trends corresponding to the future customers that may be added in the network 106 in an upcoming year. In other words, the prediction unit 214 predicts the number of customers that may be added in the network 106 in the future.
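The extrapolation in the example above can be sketched with a simple least-squares trend over the monthly additions, one of many possible stand-ins for the trained AI/ML model; `predict_yearly_growth` is a hypothetical name, and the linear fit is an assumption, not the model of the disclosure.

```python
def predict_yearly_growth(monthly: list, months_ahead: int = 12) -> int:
    """Fit a least-squares line over past monthly additions and
    extrapolate the total additions over the next `months_ahead` months."""
    n = len(monthly)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(monthly) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly)) / denom
             if denom else 0.0)
    intercept = y_mean - slope * x_mean
    # Sum the fitted line over the future month indices n, n+1, ...
    return round(sum(intercept + slope * (n + k) for k in range(months_ahead)))

# Steady 1 lakh (100,000) additions per month in the current year...
history = [100_000] * 12
# ...yields a flat trend, so roughly 12 lakh additions are forecast next year.
forecast = predict_yearly_growth(history)
```

With a flat history the slope is zero and the forecast is simply twelve times the monthly rate; with growing history the slope term raises the projection accordingly.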
[0074] In one embodiment, the predicted future trends are displayed to the user on the UI 306. Based on the prediction, the network operator understands whether there is a need for expansion of the network 106 to accommodate the number of customers that may be added in the network 106 in the future. For example, based on the predicted future trends, the network operator may perform one or more actions such as at least one of, but not limited to, adding one or more resources to the network 106 and expanding the network 106 in order to manage the number of customers that may be added in the network 106 in the future.
[0075] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to retrieve the data pertaining to the operation of the network 106 from an inventory unit 206. The processor 202 is further configured to train an Artificial Intelligence/Machine Learning (AI/ML) model 212 utilizing the retrieved data pertaining to the operation of the network 106. The processor 202 is further configured to predict future trends of the network 106 utilizing the trained AI/ML model 212.
[0076] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0077] The present disclosure provides a technical advancement. By predicting the future trends, the network operators understand the upcoming requirements in the network. Based on the prediction of the future trends, the network operators manage the inventory. These advancements lead to increased customer satisfaction and improved network efficiency. The inventory units have operation backup data written in the files.
[0078] The present invention offers multiple advantages over the prior art, and the above are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0079] Environment - 100;
[0080] User Equipment (UE) - 102;
[0081] Server - 104;
[0082] Network- 106;
[0083] System -108;
[0084] Processor - 202;
[0085] Memory - 204;
[0086] Inventory unit – 206;
[0087] Retrieving unit – 208;
[0088] Training unit – 210;
[0089] AI/ML model– 212;
[0090] Prediction unit – 214;
[0091] Primary Processor – 302;
[0092] Memory – 304;
[0093] User Interface (UI) – 306;
[0094] External system – 402;
[0095] UIM unit – 404;
[0096] Data lake – 406;
[0097] ADR file – 408.

CLAIMS
We Claim:
1. A method (600) of managing operations in a network (106), the method (600) comprising the steps of:
retrieving, by one or more processors (202), data pertaining to the operation of the network (106) from an inventory unit (206);
training, by the one or more processors (202), an Artificial Intelligence/Machine Learning (AI/ML) model (212) utilizing the retrieved data pertaining to the operation of the network (106); and
predicting, by the one or more processors (202), future trends of the network (106) utilizing the trained AI/ML model (212).

2. The method (600) as claimed in claim 1, wherein to retrieve the data pertaining to the operation of the network (106) the method (600) comprises the steps of:
recording, by the one or more processors (202), the data pertaining to the operation of the network (106) in an Application Data Record (ADR) file, wherein the operation corresponds to at least one of customer provisioning order and customer migration order, wherein the data is at least one of a number of customers available in the network (106) at a given instant;
extracting, by the one or more processors (202), the data pertaining to the operation of the network (106) from the ADR file; and
storing, by the one or more processors (202), extracted data pertaining to the operation of the network (106) in the inventory unit (206).

3. The method (600) as claimed in claim 1, wherein the future trends correspond to at least one of an overall growth of the network (106) and a circle wise growth of the network (106).

4. The method (600) as claimed in claim 1, wherein to predict the future trends, the method (600) comprises the steps of:
generating, by the one or more processors (202), historical and current trends associated with the operation of the network (106) upon training of the AI/ML model (212); and
analysing, by the one or more processors (202), utilizing the trained AI/ML model (212), the historical and the current trends associated with the operation of the network (106) to detect a pattern between the historical trends and the current trends, wherein the future trends are predicted based on the detected pattern.

5. The method (600) as claimed in claim 4, wherein the historical trends and the current trends correspond to a number of customers available in at least one of the circle of the network (106) and the overall network (106).

6. A system (108) for managing operations in a network (106), the system (108) comprising:
a retrieving unit (208) configured to retrieve, data pertaining to the operation of the network (106) from an inventory unit;
a training unit (210) configured to train, an Artificial Intelligence/Machine Learning (AI/ML) model (212) utilizing the retrieved data pertaining to the operation of the network (106); and
a prediction unit (214) configured to predict, future trends of the network (106) utilizing the trained AI/ML model (212).

7. The system (108) as claimed in claim 6, wherein the retrieving unit (208) retrieves the data pertaining to the operation of the network (106) by:
recording, the data pertaining to the operation of the network (106) in an Application Data Record (ADR) file, wherein the operation corresponds to at least one of customer provisioning order and customer migration order, wherein the data is at least one of a number of customers available in the network (106) at a given instant;
extracting, the data pertaining to the operation of the network (106) from the ADR file; and
storing, extracted data pertaining to the operation of the network (106) in the inventory unit (206).

8. The system (108) as claimed in claim 6, wherein the future trends correspond to at least one of an overall growth of the network (106) and a circle wise growth of the network (106).

9. The system (108) as claimed in claim 6, wherein the prediction unit (214) is configured to:
generate, historical and current trends associated with the operation of the network (106) upon training of the AI/ML model (212); and
analyse, utilizing the trained AI/ML model (212), the historical and the current trends associated with the operation of the network (106) to detect a pattern between the historical trends and the current trends, wherein the future trends are predicted based on the detected pattern.

10. The system (108) as claimed in claim 9, wherein the historical trends and the current trends correspond to a number of customers available in at least one of the circle of the network (106) and the overall network (106).

Documents

Application Documents

# Name Date
1 202321052739-STATEMENT OF UNDERTAKING (FORM 3) [05-08-2023(online)].pdf 2023-08-05
2 202321052739-PROVISIONAL SPECIFICATION [05-08-2023(online)].pdf 2023-08-05
3 202321052739-FORM 1 [05-08-2023(online)].pdf 2023-08-05
4 202321052739-FIGURE OF ABSTRACT [05-08-2023(online)].pdf 2023-08-05
5 202321052739-DRAWINGS [05-08-2023(online)].pdf 2023-08-05
6 202321052739-DECLARATION OF INVENTORSHIP (FORM 5) [05-08-2023(online)].pdf 2023-08-05
7 202321052739-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321052739-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321052739-DRAWING [31-07-2024(online)].pdf 2024-07-31
10 202321052739-COMPLETE SPECIFICATION [31-07-2024(online)].pdf 2024-07-31
11 Abstract-1.jpg 2024-10-11
12 202321052739-Power of Attorney [25-10-2024(online)].pdf 2024-10-25
13 202321052739-Form 1 (Submitted on date of filing) [25-10-2024(online)].pdf 2024-10-25
14 202321052739-Covering Letter [25-10-2024(online)].pdf 2024-10-25
15 202321052739-CERTIFIED COPIES TRANSMISSION TO IB [25-10-2024(online)].pdf 2024-10-25
16 202321052739-FORM 3 [02-12-2024(online)].pdf 2024-12-02
17 202321052739-FORM 18 [20-03-2025(online)].pdf 2025-03-20