Abstract: The present disclosure relates to a system (108) and a method (500) for performing dynamic service and resource management in a network environment, comprising a receiving unit (208) configured to receive at least one request from a user (102); a processing engine (210) configured to determine a type of the received at least one request, the type being indicative of a type of action to be performed on a data record corresponding to/associated with a network function in responding to the request, perform at least one action based on the determined type of request, and generate processed data on completion of the at least one action; and a plurality of database(s) (220) configured to store the processed data. FIGURE 2
FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR DYNAMIC SERVICE AND RESOURCE MANAGEMENT
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad -
380006, Gujarat, India; Nationality : India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[001] A portion of the disclosure of this patent document contains
material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[002] The present disclosure relates to a field of communications network,
and specifically to a system and a method for dynamic service and resource management with respect to adaptive troubleshooting operations management.
BACKGROUND
[003] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[004] In general, in a communications network, in order to manage troubleshooting operations and on-board a new vendor system, code-level development and testing may need to be performed every time, because every new source system has its own way of generating, handling, and storing data. Further, each vendor may have a different operating system on which the data is stored.
[005] In general, network debugging involves overdependence on the deep
technical know-how of the support team, who may have to dig into massive rolling logs and data records to gather information, correlate it, and infer the root cause, often taking hours. There is a constant need to upgrade the technical know-how of the support team through technical training on vendor products, and a dependency on the vendor to impart complete knowledge transfer. There is also a need for constant upgradation of network probes and hardware.
[006] Further, root cause analysis is manual, i.e., root cause analysis is a
laborious process and requires deep technical know-how and project coordination. The support team may need to be empowered through machine learning to find the concerns in the network swiftly to drive action.
[007] However, relying solely on reactive monitoring, where issues are
identified only after customers report them, creates a poor customer experience. Additionally, conventional systems and methods have massive hardware requirements: a massive cluster of database resources is required to store and analyze data collected from different software and hardware probes. In fact, vendor products are highly use-case-centric and inhibit on-the-fly changes or requirements, increasing the time to market and time to production.
[008] There is, therefore, a need in the art to provide a system and a method
that can overcome the deficiencies of the prior art.
OBJECTS OF THE PRESENT DISCLOSURE
[009] It is an object of the present disclosure to provide a system and a
method for dynamic service and resource management with respect to adaptive troubleshooting, operations, and management.
[0010] It is an object of the present disclosure to provide the system and the
method for real-time resource management in a network, enabling dynamic service management and ensuring that network resources are allocated efficiently, and services are adapted to meet current demands and conditions.
[0011] It is an object of the present disclosure to save time and resources.
SUMMARY
[0012] The present disclosure discloses a system for performing dynamic
service and resource management in a network environment. The system comprises a receiving unit configured to receive at least one request from a user; a processing engine configured to determine a type of the received at least one request, the type being indicative of a type of action to be performed on a data record corresponding to/associated with a network function in responding to the request, perform at least one action based on the determined type of request, and generate processed data on completion of the at least one action; and a plurality of database(s) configured to store the processed data.
[0013] In one embodiment, the processing engine comprises an ingestion
layer engine configured to ingest at least one type of the data received from the receiving unit and generate ingested data, a normalization layer engine configured for normalizing the ingested data, a message broker platform configured to receive the normalized data and manage real-time data streams catering to a plurality of users, and a scheduling layer engine configured to enable the plurality of users to execute a plurality of tasks based on user-configured intervals corresponding to the real-time data streams managed by the message broker platform.
[0014] In one embodiment, the network function includes a Virtual
Network Function (VNF), a Physical Network Function (PNF), or a combination thereof.
[0015] In one embodiment, the at least one action is selected from search,
view, analyze, generate reports, monitor specific error codes, service allocation, and resource management.
[0016] In one embodiment, the system is further configured to employ configuration management, alarm management, and counter management.
[0017] In one embodiment, a microservice-based architecture is configured
to employ machine learning as a Service (MLaaS) to enable a plurality of operations within a distributed data lake.
[0018] In one embodiment, the normalization layer engine is configured to
provide the normalized data to an analysis engine, a correlation engine, a service quality manager, and a streaming engine.
[0019] In one embodiment, the data generated by the analysis engine, the
correlation engine, the service quality manager, and the streaming engine is geographic location-based data to be presented over a map on a Graphic User Interface (GUI) by a mapping layer engine.
[0020] In one embodiment, a forecasting engine is configured to generate
forecasts depicting future trends and outcomes by analyzing the processed data.
[0021] In one embodiment, the processing engine is configured as a parallel
computing framework that enables the execution of computing tasks in parallel.
[0022] In accordance with one embodiment of the present disclosure, a
method for performing dynamic service and resource management in a network is disclosed. The method comprises the steps of: receiving, by a receiving unit, at least one request from a user, wherein the request is for a data record of a network function; determining, by a processing engine, the type of request received; performing, by the processing engine, at least one action based on the determined type of request; and storing, by a plurality of databases, the processed data.
[0023] In accordance with one embodiment of the present disclosure, a user
equipment that is communicatively coupled with a network is disclosed. The coupling comprises receiving, by a receiving unit, at least one request from a user, determining, by a processing engine, a type of the at least one received request
that is indicative of a type of action to be performed on a data record corresponding to/associated with a network function in responding to the request, performing, by the processing engine, at least one action based on the determined type of request, generating, by the processing engine, processed data on completion of the at least one action, and storing, in a plurality of database(s), the processed data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In the figures, similar components and/or features may have the
same reference label. Further, various components of the same type may be
distinguished by following the reference label with a second label that distinguishes
among the similar components. If only the first reference label is used in the
specification, the description is applicable to any one of the similar components
having the same first reference label irrespective of the second reference label.
[0025] The diagrams are for illustration only, which thus is not a limitation
of the present disclosure, and wherein:
[0026] FIG. 1 illustrates an exemplary network architecture in which or with
which embodiments of the present disclosure may be implemented.
[0027] FIG. 2 illustrates a system for performing dynamic service and
resource management in a network, in which or with which embodiments of the
present disclosure may be implemented.
[0028] FIG. 3 illustrates an exemplary connection-level diagram
representing the interconnections of various system components, in accordance
with an embodiment of the present disclosure.
[0029] FIG. 4 (a, b) illustrates an exemplary detailed system architecture, in
accordance with an embodiment of the present disclosure.
[0030] FIG. 5 illustrates a flow chart depicting a method for performing
dynamic service and resource management in a network, in accordance with an
embodiment of the present disclosure.
[0031] FIG. 6 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.
[0032] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Network Architecture
102-1, 102-2…102-N - Users
104-1, 104-2… 104-N - User Equipments (UEs)
106 - Network
108 - System
202 - Processor(s)
204 - Memory
206 - Interface(s)
208 - Receiving unit
210 - Processing engine
220 - Database
313 - Element Management System (EMS)
314 - Graphic User Interface (GUI)
315 - Microservice registry manager
316 - Processor
318 - Controller
430 - Operations And Management Engine
432 - Ingestion Layer Engine
434 - Normalization layer
436 - 5G Probe
438 - Computation layer
440 - Mapping Layer Engine
444 - Correlation Engine
446 - Caching Layer Engine
448 - Load Balancer
450 - Integrated Performance Management Engine
452 - Analysis Engine
454 – API Gateway
456 – Streaming Engine
458 – 5G Security Operation Centre
460 – Reporting Engine
462 – Distributed Data Lake
466 – Anomaly Detection Module Engine
468 – Graph Layer Engine
470 – Forecasting Engine
474 – Parallel Computing Framework
476 – Distributed File System
478 – Scheduling Layer Engine
480 – Service Quality Manager
610 – External Storage Device
620 – Bus
630 – Main Memory
640 – Read Only Memory
650 – Mass Storage Device
660 – Communication Port
670 – Processor
DETAILED DESCRIPTION OF THE INVENTION
[0033] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter can each be used independently of one
another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of
the present disclosure are described below, as illustrated in various drawings in
which like reference numerals refer to the same parts throughout the different drawings.
[0034] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0035] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0036] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a
procedure, a subroutine, a subprogram, etc. When a process corresponds to a
function, its termination can correspond to a return of the function to the calling function or the main function.
[0037] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt,
the subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0038] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment.
Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0039] The terminology used herein is to describe particular embodiments
only and is not intended to be limiting the disclosure. As used herein, the singular
forms “a”, “an”, and “the” are intended to include the plural forms as well, unless
the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other
features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms
are not intended to limit the scope of the invention or imply any specific
functionality or limitations on the described embodiments. The use of these terms
is solely for convenience and clarity of description. The invention is not limited to
any particular type of device or equipment, and it should be understood that other
equivalent terms or variations thereof may be used interchangeably without
departing from the scope of the invention as defined herein.
[0040] As used herein, an “electronic device”, or “portable electronic
device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical, and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices, and
transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input-means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, Zig Bee,
Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi,
Wi-Fi direct, etc. For instance, the user equipment may include, but not limited to, a mobile phone, smartphone, Virtual reality (VR) devices, Augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other device as may be obvious to a
person skilled in the art for implementation of the features of the present disclosure.
[0041] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic
circuitry for processing instructions. The processor may be a general-purpose
processor, a special purpose processor, a conventional processor, a digital signal
processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0042] As portable electronic devices and wireless technologies continue to
improve and grow in popularity, the advancing wireless technologies for data
transfer are also expected to evolve and replace the older generations of
technologies. In the field of wireless data communications, the dynamic
advancement of various generations of cellular technology is also seen. The
development, in this respect, has been incremental in the order of second generation
(2G), third generation (3G), fourth generation (4G), and now fifth generation (5G),
and more such generations are expected to continue in the forthcoming time.
[0043] While considerable emphasis has been placed herein on the
components and component parts of the preferred embodiments, it will be
appreciated that many embodiments can be made and that many changes can be
made in the preferred embodiments without departing from the principles of the
disclosure. These and other changes in the preferred embodiment, as well as other
embodiments of the disclosure, will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0044] The disclosure presents a system and a method for performing
dynamic service and resource management within a network, aimed at refining the efficiency and intelligence of network operations. The system includes a data ingestion layer designed to assimilate a broad spectrum of data types, such as alarms, counters, configurations, data records, infrastructural metrics, logs, and
inventory data. The data ingestion layer collects the data and also validates and
channels it toward a normalization layer for standardization.
[0045] Post-normalization, the data is systematically deposited across a
plurality of databases that include a distributed data lake, caching, computation, and graph layers, effectively segregating the data for optimal utilization. A message
broker is employed to adeptly oversee the flow of real-time data streams, ensuring seamless communication between data producers and the users within this high-end, new-generation distributed application platform.
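By way of a non-limiting illustration, the collect-validate-normalize flow described above may be sketched as follows; the raw record fields, the key mapping, and the store names are illustrative assumptions only and do not form part of this specification.

```python
# Illustrative raw records of two ingested data types (an alarm and a counter),
# each with vendor-specific field names.
RAW_ALARM = {"ALARM_ID": "A1", "sev": "critical", "node": "gnb-01"}
RAW_COUNTER = {"counterName": "rrc_attempts", "val": 1042, "ne": "gnb-01"}

def ingest(record):
    """Validate a raw record before channelling it to normalization."""
    if not record:
        raise ValueError("empty record")
    return dict(record)

def normalize(record):
    """Map vendor-specific keys onto one standard schema (assumed here)."""
    key_map = {
        "ALARM_ID": "id", "sev": "severity", "node": "source",
        "counterName": "id", "val": "value", "ne": "source",
    }
    return {key_map.get(k, k): v for k, v in record.items()}

# Post-normalization, the data is deposited across a plurality of stores.
stores = {"data_lake": [], "cache": []}
for raw in (RAW_ALARM, RAW_COUNTER):
    normalized = normalize(ingest(raw))
    for store in stores.values():
        store.append(normalized)
```

A production deployment would replace the in-memory dictionaries with the distributed data lake, caching, computation, and graph layers named above.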
[0046] A scheduling layer is incorporated to manage and execute a
multitude of tasks at intervals predefined by the user, reflecting the adaptability of
the system to various operational requirements. The scheduling layer extends to tasks like service calls, API calls to microservices, or executions of complex queries, all of which may be dynamically configured.
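By way of a non-limiting illustration, a scheduling layer running tasks at user-configured intervals may be simulated as below; the task names and the logical-clock simulation are illustrative assumptions, and a real deployment would use a production scheduler or distributed task queue.

```python
import heapq

def run_scheduler(tasks, until):
    """Simulate periodic task execution on a logical clock.

    tasks: list of (interval, task_name) pairs, e.g. a complex query or an
    API call to a microservice; returns a log of (time, task_name) firings.
    """
    log = []
    # Priority queue of (next_due_time, interval, name).
    queue = [(interval, interval, name) for interval, name in tasks]
    heapq.heapify(queue)
    while queue and queue[0][0] <= until:
        due, interval, name = heapq.heappop(queue)
        log.append((due, name))                      # the task fires here
        heapq.heappush(queue, (due + interval, interval, name))
    return log

# A query every 5 time units and a health API call every 10, up to t=15.
log = run_scheduler([(5, "kpi_query"), (10, "health_api_call")], until=15)
```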
[0047] The system is built on a microservice-based architecture, specially
tailored for leveraging Machine Learning as a Service (MLaaS) within a
disaggregated, cloud-native data lake platform. This architecture facilitates smart
operations by analyzing vast data streams to generate insights, reports, alerts, and
notifications. Furthermore, the system is equipped with various engines including
an analysis engine, a correlation engine, a service quality manager, and a streaming
engine, all working in unison to process geographic location-based data which can
be visually represented on a user interface map via a mapping layer.
[0048] Additionally, the system features a forecasting engine capable of
generating precise and accurate predictions, underpinned by a parallel computing
framework that allows for scalable, fault-tolerant parallel task execution. The
framework supports the creation of task chains and their execution, ensuring that
the system can handle various computing tasks efficiently.
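By way of a non-limiting illustration, the notion of task chains executed on a parallel computing framework may be sketched as follows; the chain stages, the hard-coded samples, and the moving-average forecast are illustrative assumptions rather than the forecasting method of this disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

def collect(metric):
    # Stage 1 of the chain: gather recent samples for a metric
    # (hard-coded here for the sake of the sketch).
    samples = {"cpu": [40, 50, 60], "memory": [70, 72, 74]}
    return samples[metric]

def forecast(history):
    # Stage 2 of the chain: a naive forecast as the mean of the window.
    return sum(history) / len(history)

def chain(metric):
    # A task chain: each stage feeds the next.
    return metric, forecast(collect(metric))

# Execute independent chains in parallel across a worker pool.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = dict(pool.map(chain, ["cpu", "memory"]))
```

A fault-tolerant framework would additionally retry or reroute failed chain stages; that machinery is omitted here.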
[0049] The service quality manager component of the system is tasked with
orchestrating workflows for services provided to customers and is capable of responding to service outages reported by customers.
[0050] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIG. 1 to FIG. 5.
[0051] FIG. 1 illustrates an exemplary network architecture in which or with
which a system (108) for performing dynamic service and resource management in a network is implemented, in accordance with embodiments of the present disclosure.
[0052] Referring to FIG. 1, the network architecture (100) includes one or
more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as
the users (102). Similarly, a person of ordinary skill in the art will understand that
one or more user equipments (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipments (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the
disclosure. Although three user equipments (104) are depicted in FIG. 1, any number of the user equipments (104) may be included without departing from the scope of the ongoing description.
[0053] In an embodiment, the user equipment (104) includes smart devices
operating in a smart environment, for example, an Internet of Things (IoT) system.
In such an embodiment, the user equipment (104) may include, but is not limited
to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV),
computers, smart security system, smart home system, other devices for monitoring
or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server
or a cloud-computing system or any other device that is network-connected.
[0054] In an embodiment, the user equipment (104) includes, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart
phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch
computer device, and so on), a Global positioning system (GPS) device, a laptop
computer, a tablet computer, or another type of portable computer, a media playing
device, a portable gaming system, and/or any other type of computer device with
wireless communication capabilities, and the like. In an embodiment, the user
equipment (104) includes, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices
such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a
general-purpose computer, desktop, personal digital assistant, tablet computer,
mainframe computer, or any other computing device, wherein the user equipment
(104) may include one or more in-built or externally coupled accessories including,
but not limited to, a visual aid device such as a camera, an audio aid, a microphone,
a keyboard, and input devices for receiving input from the user (102), or the entity
(110) such as touch pad, touch-enabled screen, electronic pen, and the like. A
person of ordinary skill in the art will appreciate that the user equipment (104) may
not be restricted to the mentioned devices and various other devices may be used.
[0055] Referring to FIG. 1, the user equipment (104) communicates with a
system (108), for example, through a network (106). In an embodiment, the network (106) may include at least one of a Fifth Generation (5G) network, 6G network, or the like. The network (106) may enable the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108).
The network (106) may include a wireless card or some other transceiver
connection to facilitate this communication. In another embodiment, the network (106) is implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet,
the Public Switched Telephone Network (PSTN), or the like.
[0056] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may
include fewer components, different components, differently arranged components,
or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may
perform functions described as being performed by one or more other components of the network architecture (100).
[0057] FIG. 2 illustrates an exemplary block diagram of the system (108)
for performing dynamic service and resource management in a network. The system
(108) may include a receiving unit (208), one or more processors (202), a memory
(204), communicably coupled to the one or more processors (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that
process data based on operational instructions. Among other capabilities, one or
more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and
executed to create or share data packets over a network service. The memory (204)
may include any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0058] In some embodiments, the system (108) may include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the system (108). The interface(s) (206) may also provide a communication pathway
for one or more components of the system (108). Examples of such components include, but are not limited to, processing unit/engine(s) (210) and a database (220).
[0059] The processing unit/engine(s) (210) may be implemented as a
combination of hardware and programming (for example, programmable
instructions) to implement one or more functionalities of the processing engine(s)
(210). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (210) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the
hardware for the processing engine(s) (210) may comprise a processing resource
(for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (210). In such examples, the system (108) may include the machine-readable storage
medium storing the instructions and the processing resource to execute the
instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (210) may be implemented by an electronic circuitry.
[0060] The processing engine(s) (210) is configured to determine a type of the received at least one request that is indicative of a type of action that is to be performed on a data record corresponding to/associated with a network function in responding to the request. In one embodiment, the network function includes a Virtual Network Function (VNF), a Physical Network Function (PNF), or a combination thereof.
[0061] The processing engine(s) (210) is configured to perform at least one action based on the determined type of request. In one embodiment, the at least one action is selected from search, view, analyze, generate reports, monitor specific error codes, service allocation, and resource management. Upon completion of the selected action(s), the processing engine(s) (210) generates processed data from the data record associated with the network function. For instance, if the user (102) initiates a request to search for network traffic patterns, the processing engine(s) (210) may analyze data from the user equipment (UE) (104) to identify peak usage times or potential bottlenecks. These peak usage times or potential bottlenecks are examples of the processed data. The processed data is then stored in one or more databases (220) for future reference and use.
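By way of a non-limiting illustration, the request-type dispatch described above may be sketched as follows. All handler names, request types, and record fields below are hypothetical stand-ins and do not represent the actual implementation of the processing engine(s) (210):

```python
# Hypothetical sketch: the engine maps a determined request type to an action,
# performs it on the data records, and returns the processed data for storage.

def search_records(records, keyword):
    """Return records whose 'event' field mentions the keyword."""
    return [r for r in records if keyword in r.get("event", "")]

def generate_report(records):
    """Summarize records into a simple count-per-node report."""
    report = {}
    for r in records:
        report[r["node"]] = report.get(r["node"], 0) + 1
    return report

# Mapping from determined request type to the corresponding action.
HANDLERS = {
    "search": lambda records, params: search_records(records, params["keyword"]),
    "report": lambda records, params: generate_report(records),
}

def process_request(request_type, records, params=None):
    handler = HANDLERS.get(request_type)
    if handler is None:
        raise ValueError(f"unsupported request type: {request_type}")
    return handler(records, params or {})  # caller stores this processed data

records = [
    {"node": "vnf-1", "event": "traffic peak"},
    {"node": "vnf-2", "event": "link down"},
    {"node": "vnf-1", "event": "traffic normal"},
]
print(process_request("search", records, {"keyword": "traffic"}))
print(process_request("report", records))
```

The returned value corresponds to the "processed data" that is subsequently persisted in the database(s) (220).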
[0062] In one embodiment, the processing engine comprises an ingestion layer engine and a normalization layer engine. The ingestion layer engine is configured to ingest at least one type of the data received from the receiving unit and generate an ingested data. The normalization layer engine is configured to normalize the ingested data. In one embodiment, the normalization layer engine is configured to provide the normalized data to an analysis engine, a correlation engine, a service quality manager, and a streaming engine.
[0063] Further, a message broker platform is configured to receive the normalized data and manage real-time data streams catering to a plurality of users. The processing engine comprises a scheduling layer engine that is configured to enable the plurality of users to execute a plurality of tasks based on user-configured intervals corresponding to the real-time data streams managed by the message broker platform.
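As a non-limiting illustration, the ingestion-then-normalization flow described above may be sketched as follows. The field names (`ne_id`, `counter`, and so forth) and the unified schema are assumptions chosen only for illustration:

```python
# Hypothetical sketch of the ingestion layer engine feeding the normalization
# layer engine, which maps heterogeneous record formats onto one schema.

def ingest(raw_items):
    """Ingestion layer engine: consume incoming records, dropping empty payloads."""
    return [item for item in raw_items if item]

def normalize(ingested):
    """Normalization layer engine: map differing field names onto one schema."""
    normalized = []
    for item in ingested:
        normalized.append({
            "node": item.get("node") or item.get("ne_id"),
            "metric": item.get("metric") or item.get("counter"),
            "value": float(item.get("value", 0)),
        })
    return normalized

raw = [{"ne_id": "pnf-7", "counter": "drops", "value": "3"},
       {},
       {"node": "vnf-1", "metric": "latency_ms", "value": 12}]
print(normalize(ingest(raw)))
```

In the system (108), the normalized output would then be produced onto the message broker platform and consumed by the analysis, correlation, service quality, and streaming sub-systems.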
[0064] In one embodiment, the system is further configured to employ configuration management, alarm management, and counter management. In one embodiment, the system is further configured to employ a microservice-based architecture configured to employ machine learning as a Service (MLaaS) to enable a plurality of operations within a distributed data lake.
[0065] In one embodiment, the data generated by the analysis engine, the correlation engine, the service quality manager, and the streaming engine is geographic location-based data to be presented over a map on a Graphic User Interface (GUI) by a mapping layer engine.
[0066] In one embodiment, the system includes a forecasting engine that is
configured to generate forecasts depicting future trends and outcomes by analysing the processed data.
[0067] In one embodiment, the processing engine is configured as a parallel computing framework that enables the execution of computing tasks in parallel.
[0068] In an aspect, the processing unit(s) (210) may be configured to send
the processed data to the database (220).
[0069] In an embodiment, the database (220) may include any computer-readable medium known in the art including, for example, volatile memory, such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), and/or non-volatile memory, such as Read Only Memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0070] Although FIG. 2 shows an exemplary block diagram (200) of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
[0071] FIG. 3 illustrates a connection-level diagram representing interconnections of various components of the system (108) in accordance with an embodiment of the present disclosure. The system (108) relates to performing dynamic service and resource management in the network environment and encompasses various functionalities, ranging from proactive, reactive, and adaptive monitoring to internode correlation and log analysis.
[0072] The system (108) mainly includes, but may not be limited to, a Graphical User Interface (GUI) (314) communicatively coupled to an edge load balancer microservice (312), an Element Management System (EMS) (313) communicatively coupled to a microservice registry manager (315), a processor (316), and a controller (318). The microservice registry manager (315), the edge load balancer microservice (312), the controller (318), and the processor (316) may be communicatively coupled to each other.
[0073] The system (108) may integrate various components that work together to provide a comprehensive solution for network management. The GUI (314) may serve as an access point of the user (102) to the system (108), offering an interactive and visual interface to monitor and control the various elements of the network (106). The user (102) may observe system performance, receive notifications, and interact with the system (108) for tasks such as troubleshooting, configuration, and report generation using the GUI (314).
[0074] The EMS (313) may be configured to manage network elements at the granular level. The EMS (313) communicates with the microservice registry manager (315), which maintains a registry of the various microservices available in the system (108). The EMS (313) may be responsible for various tasks, such as monitoring the status of network elements, executing configuration changes, and handling alarms and performance data.
[0075] The processor (316) may be configured to execute the instructions and process the data necessary for the operation of the entire system. The processor (316) may perform computations, run analytics, and ensure that tasks are carried out efficiently.
[0076] The controller (318) may be configured to orchestrate the interactions between different parts of the system (108). The controller (318) may handle tasks like routing data, managing workflows, and ensuring that the system (108) responds correctly to inputs given by the user (102).
[0077] The edge load balancer microservice (312) may be configured to
distribute network traffic and requests efficiently across various servers or services.
[0078] Each component of the system (108), namely the microservice registry manager (315), the edge load balancer microservice (312), the controller (318), and the processor (316), is communicatively coupled to the others, indicating a high degree of interoperability and information exchange. This architecture allows for a seamless operation where changes in one part of the system (108) are immediately known and responded to by others, maintaining the integrity and performance of the system (108).
[0079] In one exemplary embodiment of the system (108), the EMS (313) may provide the necessary functions to manage Network Elements (NEs) on the Network Element-Management Layer (NEL) of the Telecommunications Management Network (TMN) model.
[0080] Further, the system (108) may include functionalities for proactive, reactive, and adaptive monitoring, configuration management, performance management, and various other operational tasks. The EMS (313) may be configured for the management of specific types of network elements, such as the Virtual Network Functions (VNFs) or Physical Network Functions (PNFs) mentioned above.
[0081] In an embodiment, the EMS (313) may be configured to interact with various components such as the data ingestion layer, normalization layer, processing units, and service quality manager, among others, to manage the network elements effectively. The EMS (313) may be configured to collect and analyze performance data, handle fault, configuration, accounting, performance, and security management, and respond to network events and alarms, as outlined by the correlation engine and anomaly detection functionalities within the system (108).
[0082] Further, the system (108) may be a disaggregated and cloud-native data lake platform tailored for operators to enable smarter operations through Machine Learning (ML) as a Service (MLaaS). In particular, the system (108) may encompass a range of functionalities ranging from proactive, reactive, and adaptive monitoring to inter-node correlation and log analysis.
[0083] Further, the processor (316) may be capable of pulling in data records of different virtualized network elements running as virtual machines in the cloud. These data records may be of different data formats which may be sanitized and normalized before being used for analysis. Dynamic workflows and tasks provisioned on the fly may adapt themselves to the data sources and churn out proactive and reactive notifications for different user-defined scenarios. Further, proactive notifications and alerts may be generated on the basis of pre-configured policies. Furthermore, the system may provide numerous ways for the user (102) to search, view, analyze, generate reports, monitor specific error codes, and access a wealth of other information that may be coming in the data records of the VNFs or PNFs.
[0084] FIGs. 4(a) and 4(b) illustrate an exemplary detailed system architecture (400) in accordance with an embodiment of the present disclosure.
[0085] The system (108) may include an ingestion layer engine (432), a normalization layer engine (434), and one or more sub-systems.
[0086] The ingestion layer engine (432) may be configured to define an environment that may be capable of consuming various types of incoming data, such as alarms, counters, configuration, data records, infra-metric data, logs, and inventory data. The ingestion layer engine (432) may gather data and forward it to the data processing systems. The ingestion layer engine (432) may process incoming data, validate data, and route it to the normalization layer engine (434), streaming engine, streaming analytics engine, and a message broker platform (464) based on the requirements for further analytics.
[0087] The normalization layer engine (434) may normalize, enrich, and store data received from the ingestion layer engine (432) in a database (220, ref. FIG. 2). The normalization layer engine (434) may insert normalized data into various databases such as, but not limited to, a distributed data lake (DDL) (462), a caching layer engine (446), and a graph layer module (468). Further, the normalization layer engine (434) may produce data for the message broker platform (464). The normalization layer engine (434) may also be responsible for providing the normalized data to other sub-systems. These sub-systems may include, but not be limited to, an analysis engine (452), a correlation engine (444), a service quality manager (480), and a streaming engine (456).
[0088] Further, a 5G probe(s) (436) plays a crucial role in network data collection. The 5G probe(s) (436) comprises a 5G machine learning (ML) probe, 5G real-time conductors, and 5G fulcrums. The 5G probe(s) (436) is designed as a software-based solution, meaning the probing logic (network taps and probes) may be embedded within the NF business logic, eliminating the need for physical probes. The probing solution requires only summarized data records from NFs to generate analytics. The vProbe solution incorporates a probing agent that collects probing data, specifically Streaming Data Records (SDR), from 4G/5G combo-core network nodes. These records are generated by NFs in case of any network failure conditions. The records are streamed in real-time into the 5G probe(s) (436) aggregation layer and then reach the analysis engine (452). The analysis engine (452) normalizes and enriches the data, creating reports using reporting tools, which further aids in overall network troubleshooting and root cause analysis.
[0089] In an embodiment, a computation layer (438) acts as a security checkpoint for requests originating from external systems. The computation layer (438) receives and manages requests submitted by external systems. The computation layer (438) strictly controls access to internal resources or data by verifying that each request is authorized. This ensures only authorized external systems can interact with the system (108).
[0090] In an embodiment, the message broker platform (464) is a publish-subscribe messaging system that manages and maintains the real-time stream of data from different applications. The message broker platform (464) may act as a central hub for message exchange between different components within the system (108). The message broker platform (464) may enable communication between producers and the users (102) using message-based topics. The message broker platform (464) is designed for high-end new-generation distributed applications and permits a large number of permanent or ad-hoc users. Further, the message broker platform (464) may rely on the file system for storage and caching purposes. Thus, the message broker platform (464) is fast, prevents data loss, and is fault-tolerant.
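As a non-limiting illustration, the topic-based publish-subscribe exchange described above may be sketched with an in-memory stand-in; the actual message broker platform (464) is a distributed system, and the topic name below is illustrative only:

```python
# Toy publish-subscribe hub: producers publish to message-based topics and
# every registered consumer of a topic receives each message.
from collections import defaultdict

class MessageBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic, in order."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = MessageBroker()
received = []
broker.subscribe("alarms", received.append)
broker.subscribe("alarms", lambda m: received.append(m.upper()))
broker.publish("alarms", "link down on vnf-1")
print(received)
```

Decoupling producers from consumers in this way is what allows a large number of permanent or ad-hoc users to attach to the same real-time stream.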
[0091] Further, the graph layer module (468) may include a relationship modeler, which may be capable of modelling the alarm, counter, configuration, data record, infra-metric, 5G probe, and inventory data as captured by the ingestion layer engine (432). The graph layer module (468) may build the relationship among the various types of data provisioned. For example, the modeler may model the alarm and the counter data, or the vProbe and alarm data, and their relationship with each other. Further, the modeler may process the steps provisioned in the model and provide the outcome to a requesting system(s), where the system(s) may be a parallel computing system, a workflow engine, a query engine, a correlation system, a 5G performance management engine, or a 5G Key Performance Indicator (KPI) engine.
[0092] In an embodiment, the scheduling layer module (478) may be configured to execute a task(s) at pre-defined intervals of time configured as per the choice of the user (102). A task may be an activity that performs a service call, an Application Programming Interface (API) call to another microservice, or an execution of an elastic search query and storing its output in the DDL (462) or a Distributed File System (DFS) (476) or sending it to another micro-service. The scheduling layer module (478) may also facilitate graph traversals via a mapping layer engine (440) to execute tasks.
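The interval-based task execution described above may be sketched, by way of a non-limiting illustration, with a tick counter standing in for wall-clock time; the task names and intervals below are assumptions chosen only for readability:

```python
# Hypothetical scheduling layer sketch: each task carries a user-configured
# interval and fires whenever that interval of "ticks" has elapsed.

class Scheduler:
    def __init__(self):
        self._tasks = []  # (interval, callback) pairs, in registration order

    def register(self, interval, callback):
        self._tasks.append((interval, callback))

    def run(self, ticks):
        """Advance the clock tick by tick, firing every task that is due."""
        for now in range(1, ticks + 1):
            for interval, callback in self._tasks:
                if now % interval == 0:
                    callback(now)

fired = []
scheduler = Scheduler()
scheduler.register(2, lambda t: fired.append(("query", t)))  # e.g. a search query
scheduler.register(3, lambda t: fired.append(("api", t)))    # e.g. an API call
scheduler.run(6)
print(fired)
```

A production scheduler would use real timers rather than a tick loop, but the mapping from user-configured interval to recurring task is the same.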
[0093] Further, the objective for designing the analysis engine (452) may be to define the environment where the workflow, i.e., a set of tasks for any use-case, may be configured and executed to debug or for a better understanding of the call flow. A user (102) may also query the data coming from different sub-systems or an external gateway for a better overview of the data or to identify the actual issue present in the data. The user (102) may also configure the set of policies through which the user (102) may identify the anomaly present in the data and receive a notification once a policy is breached or some abnormal behaviour occurs.
[0094] In an embodiment, a parallel computing framework (474) may provide a simple but sophisticated interface to execute computing tasks in parallel. The user (102) may either provide the DFS (476) locations or the DDL (462) indices for input data. The parallel computing framework (474) may support creating chains of tasks by connecting to a Sub-parallel computing framework System (SCM). Each of the tasks in a workflow may be executed sequentially, whereas multiple chains may be executed in parallel. The parallel computing framework (474) may support allocating specific lists of hosts for different computing tasks.
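As a non-limiting illustration, the execution model described above, in which tasks within a chain run sequentially while independent chains run in parallel, may be sketched as follows; the toy tasks are assumptions for illustration only:

```python
# Sketch of chained-task execution: each chain is a list of tasks applied in
# order, and independent chains are dispatched to parallel worker threads.
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain, value):
    """Execute a chain of tasks sequentially, feeding each result forward."""
    for task in chain:
        value = task(value)
    return value

def run_chains_in_parallel(chains, inputs):
    """Run each (chain, input) pair concurrently; preserve result order."""
    with ThreadPoolExecutor(max_workers=len(chains)) as pool:
        futures = [pool.submit(run_chain, chain, value)
                   for chain, value in zip(chains, inputs)]
        return [f.result() for f in futures]

double = lambda x: x * 2
increment = lambda x: x + 1
results = run_chains_in_parallel(
    [[double, increment], [increment, double]],  # two independent chains
    [3, 3],
)
print(results)  # chain 1: 3*2+1 = 7; chain 2: (3+1)*2 = 8
```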
[0095] Further, the distributed file system (476) may be a file system that allows the user (102) to have access to data and supports different operations. Each data file may be partitioned into several parts called chunks.
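The chunking of a data file described above may be sketched, as a non-limiting illustration, as follows; the 4-byte chunk size is arbitrary and chosen only to keep the example readable:

```python
# Illustrative sketch: a file's bytes are partitioned into fixed-size chunks,
# and concatenating the chunks in order recovers the original file.

def partition_into_chunks(data, chunk_size):
    """Split a byte string into chunks of at most chunk_size bytes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

chunks = partition_into_chunks(b"abcdefghij", chunk_size=4)
print(chunks)  # [b'abcd', b'efgh', b'ij']
assert b"".join(chunks) == b"abcdefghij"  # reassembly recovers the file
```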
[0096] Furthermore, load balancing may refer to efficiently distributing incoming network traffic across a group of backend servers, also known as a server farm or server pool. A load balancer (448) may route all the traffic on the servers and send the client requests across the microservices capable of fulfilling those requests. In some embodiments, the load balancer (448) may route the request based on, but not limited to, round robin scheduling, header-based request dispatch, and context-based request dispatch. In header-based request dispatch, the load balancer (448) may handle event and event acknowledgment and forward the request/response to the specific microservice which has requested the event.
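By way of a non-limiting illustration, round robin scheduling with a header-based override may be sketched as follows; the backend names and the `X-Target-Service` header are hypothetical and do not reflect the actual dispatch rules of the load balancer (448):

```python
# Hypothetical dispatch sketch: route by header when one is present,
# otherwise rotate through the backends round-robin.
import itertools

class LoadBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        """Header-based dispatch takes priority; round robin is the default."""
        target = request.get("headers", {}).get("X-Target-Service")
        return target if target else next(self._cycle)

lb = LoadBalancer(["svc-a", "svc-b", "svc-c"])
print(lb.route({}))                                          # svc-a
print(lb.route({}))                                          # svc-b
print(lb.route({"headers": {"X-Target-Service": "svc-x"}}))  # svc-x
print(lb.route({}))                                          # svc-c
```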
[0097] In an aspect, the objective of the streaming engine (456) may be the creation of fast-paced streaming pipelining to the GUI (314). The streaming engine (456) may receive the data from connected sub-systems and stream the received data to the GUI (314) in support of the DDL (462), the message broker platform (464), and the caching layer engine (446). The stream analytics engine or the streaming engine (456) may receive data from the sub-systems and perform the required computation on data in real-time, followed by sending it to the GUI (314).
[0098] Further, an integrated performance management engine (450) may be configured to provide the platform that serves all the requirements related to performance counters. The requirement may be related to visualizing the performance counters of a particular node, creating and analyzing the KPIs, creating counter/KPI reports consisting of single or multiple nodes with multiple levels of aggregation, and the like. In some embodiments, the integrated performance management engine (450) may maintain the performance counters and KPIs of the network elements. The 5G performance management engine may gather and process performance counter data from different data sources and, based on the aggregation required, store the network performance data, while the KPI engine is responsible for managing all the KPIs of all the network elements.
[0099] Furthermore, the reporting engine (460) may be configured to create the report view layout of the API according to the requirement of the user (102), and the objective of a notification engine may be to send the report to the user (102) by email according to the client requirement. In some embodiments, the reporting engine (460) may create the report according to a dashboard that the user (102) may create from the GUI (314) according to the user (102) requirement(s). The reporting engine (460) may process the data from different interfaces and create the report in, for example, Excel format.
[00100] In an aspect, an anomaly detection module (466) may be configured to notify the analysis engine (452) to create a policy for a selected algorithm or model to determine anomalies in the KPIs. Once this provision is set, the user (102) starts receiving machine learning reports on a scheduled basis. Additionally, the anomaly detection module (466) utilizes data normalized by the normalization layer engine (434) received from the DDL (462) for model creation and prediction purposes. In some embodiments, the anomaly detection module (466) supports model chaining, allowing the user (102) to link similar models to identify anomalies within the same data set. The models with the same time unit can be chained for comparative analysis of anomaly detection tasks. Moreover, the anomaly detection module (466) includes a model comparison feature, enabling the user (102) to compare outputs from different algorithms. This helps the user (102) to select the best-performing algorithm based on accuracy and performance. Furthermore, the anomaly detection module (466) may provide report management capabilities, allowing the users (102) to export data from selected models using various filters and granularity levels for further optimization. The anomaly detection module (466) also integrates a streaming engine, enabling users (102) to identify anomalies in real-time data streams. In some embodiments, the anomaly detection module (466) includes a statistics management function, which allows the user (102) to view model statistics categorized by algorithms. The anomaly detection module (466) may have a wide range of prediction algorithms and can automatically select the most suitable algorithm for the given data set. Additionally, the anomaly detection module (466) allows the user (102) to set thresholds for the predicted values, providing further customization and control over the anomaly detection process.
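As a non-limiting illustration of threshold-driven anomaly flagging on a KPI series, a simple z-score rule may be sketched as follows; the z-score criterion and the sample KPI values are assumptions standing in for the module's configurable models:

```python
# Illustrative sketch: flag KPI samples whose deviation from the mean exceeds
# a user-set threshold, measured in standard deviations (z-score).
import statistics

def detect_anomalies(values, z_threshold=3.0):
    """Return indices of samples whose z-score exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if stdev and abs(v - mean) / stdev > z_threshold]

kpi = [100, 102, 98, 101, 99, 100, 250, 97, 103, 100]
print(detect_anomalies(kpi, z_threshold=2.0))  # index of the 250 spike
```

In the module described above, such a rule would be one of many selectable algorithms, with the threshold exposed as the user-configurable parameter.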
[00101] In an aspect, a forecasting engine (470) may provide a simple and sophisticated interface to generate forecasts with high accuracy and precision. The architecture may be scalable and fault-tolerant. The user (102) may either upload data directly in a CSV format or upload it on a server and provide its path to the service. The forecasting engine (470) may support multiple data visualization techniques. The user (102) may analyse and clean the dataset. If the dataset has an inherent hierarchy, the user (102) may divide the dataset and create sub-models. The user (102) may select their preferred algorithm that suits their needs and tweak its parameters. In some embodiments, the user (102) may also add external variables to improve the accuracy of the model. The execution of the model may be done in the background, and the user (102) may be notified when it is ready. The user (102) may view the predictions and export them for offline use.
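By way of a non-limiting illustration, the simplest member of the family of algorithms such an engine might offer is a moving-average forecast; the window size, horizon, and traffic series below are assumptions for illustration only:

```python
# Deliberately simple moving-average forecast standing in for the engine's
# configurable models: each future point is the mean of the trailing window.

def moving_average_forecast(history, window=3, steps=2):
    """Forecast `steps` future points from the last `window` observations."""
    series = list(history)
    forecasts = []
    for _ in range(steps):
        prediction = sum(series[-window:]) / window
        forecasts.append(round(prediction, 2))
        series.append(prediction)  # feed the forecast back for the next step
    return forecasts

traffic = [120, 130, 125, 135, 140]
print(moving_average_forecast(traffic, window=3, steps=2))
```

The engine described above would additionally support hierarchy-aware sub-models, tunable parameters, and external variables; this sketch only illustrates the forecast-from-history contract.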
[00102] Further, the integrated performance management engine (450), the analysis engine (452), and the correlation engine (444) and their sub-systems may have geographic location-based data that need to be presented over a map on the GUI (314). The mapping layer engine (440) may provide map data to the integrated performance management engine (450), the analysis engine (452), and the correlation engine (444) sub-systems to show respective data over the map on the GUI (314). In some embodiments, the integrated performance management engine (450) may use map data to show performance counters, KPI information, etc., on the map. The analysis engine (452) may use the map data to show alarms specific to a location on the map. Further, the correlation engine (444) may use the map data to show location-wise alarms, counters, and metrics on the map. Furthermore, the map data may be stored in the distributed file system (476) to be used by the mapping layer engine (440).
[00103] In an aspect, the script engine (472) may provide a simple interface to manage and execute Python scripts. The script engine (472) may be scalable and fault-tolerant. The script engine (472) instances may fetch the input in CSV format from a common location on the distributed file system (476). The script engine (472) may support multiple data pre-processing and cleaning techniques. The user (102) may analyse and manipulate the dataset.
[00104] In some embodiments, an API Gateway (454) may take all API calls from clients and route them to the appropriate microservice with request routing, composition, and protocol translation. Typically, the API Gateway (454) may handle a request by invoking multiple microservices and aggregating the results to determine the best path. The API Gateway (454) may translate between web protocols and web-unfriendly protocols that may be used internally.
[00105] Further, for most microservices-based applications, the API Gateway (454) may be implemented because the API Gateway (454) acts as a single-entry point into the system (108). The API Gateway (454) may be responsible for request routing, composition, and protocol translation, and may streamline the system. The API Gateway (454) may handle some requests by simply routing them to the appropriate backend service, and handle others by invoking multiple backend services and aggregating the results. If there are failures in the backend services, the API Gateway (454) may mask them by returning cached or default data.
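The gateway behaviour described above, prefix-based routing with a cached fallback masking backend failures, may be sketched as a non-limiting illustration; the service names, paths, and response shapes are hypothetical:

```python
# Hypothetical gateway sketch: route by path prefix, cache successful
# responses, and serve the cached copy when a backend fails.

class ApiGateway:
    def __init__(self, services, cache):
        self._services = services  # path prefix -> callable backend
        self._cache = cache        # path prefix -> last known good response

    def handle(self, path):
        prefix = "/" + path.strip("/").split("/")[0]
        backend = self._services.get(prefix)
        if backend is None:
            return {"status": 404}
        try:
            response = backend(path)
            self._cache[prefix] = response
            return response
        except Exception:
            # Mask the backend failure with cached data, as described above.
            return self._cache.get(prefix, {"status": 503})

def alarms_service(path):
    return {"status": 200, "alarms": ["link-down"]}

def failing_kpi_service(path):
    raise RuntimeError("backend unavailable")

gateway = ApiGateway(
    {"/alarms": alarms_service, "/kpi": failing_kpi_service},
    cache={"/kpi": {"status": 200, "kpi": "cached"}},
)
print(gateway.handle("/alarms/all"))
print(gateway.handle("/kpi/latency"))  # served from cache despite the failure
```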
[00106] In an aspect, the DDL (462) may make organizational data from different sources accessible to a variety of end users, like business analysts, data engineers, data scientists, product managers, executives, etc., in order to enable these personas to leverage insights in a cost-effective manner for improved business performance. It may be noted that the DDL (462) is important in the context of many kinds of applications. Whenever an application needs to store data persistently and access this data regularly, the DDL (462) is required. The DDL (462) is all about storing large amounts of data, which may be structured, semi-structured, or unstructured, e.g., web server logs, NoSQL data, sensors, Internet of Things (IoT) data, and third-party data. The DDL (462) can either store the data in the same format as its source systems or transform it before storing.
[00107] In an aspect, the objective for designing the service quality manager (480) may be to define the environment where workflows may be configured for services provided to the user (102), such as Voice, Messaging, Wi-Fi, FTTx, etc., and also to execute these workflows whenever the user (102) complains of a service outage. In some embodiments, the service quality manager (480) may maintain the status of customer services. The monitoring tool in the service quality manager (480) may gather and process data from different data sources and, based on the fine-grained analysis of the network data collected and the KPIs, the service quality manager (480) may provide the service status per user (102) for a desired service.
[00108] Further, an operation and management engine (430) may manage all Fault, Configuration, Alarm, Performance, and Security (FCAPS) aspects of all the system nodes and service engines. Through FCAPS, microservices may monitor their faults, configurations, performance counters, etc. Alarms may be the events that arise on certain conditions, for example, when database server initialization fails. The operation and management engine (430) may signify the specific application parameters used by the system (108). Performance counters may be the values that get incremented when a particular event occurs in the system (108), representing the success/failure of that event.
[00109] In an aspect, a correlation engine (444) may be used in systems management tools to aggregate, normalize, and analyse event log data, using predictive analytics and fuzzy logic to alert the systems administrator when there is a problem. The correlation engine (444) may perform correlation and cross-correlation based on rules, policies, and machine learning such that rules may be pre-defined to detect patterns. The correlation engine (444) may be continuously enhanced and customized to the needs of the user (102). Policies may be used to verify if certain actions happen at the right time and place. Machine learning may include the abilities of the correlation engine (444) to learn and differentiate between normal and abnormal states as well as to detect changes in the behaviour of applications, servers, and other areas of a network.
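As a non-limiting illustration of the pre-defined pattern-detection rules mentioned above, events from the same node falling within a time window may be grouped as follows; the window length, field names, and sample events are assumptions for illustration only:

```python
# Toy rule-based correlation sketch: events on the same node within a time
# window are grouped into one correlated incident.

def correlate(events, window_seconds=60):
    """Group time-ordered events by node when they fall within the window."""
    groups = []
    for event in sorted(events, key=lambda e: e["ts"]):
        for group in groups:
            if (group[-1]["node"] == event["node"]
                    and event["ts"] - group[-1]["ts"] <= window_seconds):
                group.append(event)
                break
        else:
            groups.append([event])  # no open group matched: start a new one
    return groups

events = [
    {"node": "vnf-1", "ts": 0,   "alarm": "cpu-high"},
    {"node": "vnf-1", "ts": 30,  "alarm": "latency"},
    {"node": "vnf-2", "ts": 40,  "alarm": "link-down"},
    {"node": "vnf-1", "ts": 200, "alarm": "cpu-high"},
]
groups = correlate(events)
print([len(g) for g in groups])  # two vnf-1 events correlate; the late one does not
```

In the engine described above, such rules would sit alongside policies and learned models rather than being the sole correlation mechanism.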
[00110] Although FIG. 4 shows an exemplary block diagram (400) of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 4. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
[00111] FIG. 5 illustrates a method (500) for performing dynamic service and
resource management in a network.
[00112] At step 502, the method (500) is configured to receive, by a receiving unit (208), at least one request from a user (102), wherein the request is for a data record of a network function. This request pertains to a network function that involves processing or accessing network data records.
[00113] At step 504, the method (500) is configured to determine, by a processing engine (210), a type of request received. The processing engine (210) analyzes the received request (502) to identify the specific type of network data function requested by the user. This involves tasks like searching for specific data, analyzing network performance metrics, or generating reports.
[00114] At step 506, the method (500) is configured to perform at least one action based on the determined type of request. The at least one action may include retrieving the requested data from the database(s) (220), processing or transforming the retrieved data for user consumption (e.g., generating reports, filtering data based on specific criteria), or delivering the processed data to the interface (206).
[00115] At step 508, the method (500) is configured to generate a processed
data on completion of the at least one action.
[00116] At step 510, the method (500) is configured to store, in a plurality of databases (220) within the system (108), the processed data resulting from the actions performed in step 506. This allows for future access and analysis of the data.
[00117] In an aspect, the method includes ingesting, by an ingestion layer engine (432), at least one type of the data record received from the network function and generating an ingested data. The method includes normalizing, by a normalization layer engine (434), the ingested data. The method includes receiving, by a message broker platform (464), the normalized data and managing real-time data streams catering to a plurality of users (102). The method includes executing, by a scheduling layer engine (478), a plurality of tasks based on user-configured intervals corresponding to the real-time data streams managed by the message broker platform (464).
[00118] In an aspect, the network function includes a Virtual Network
Function (VNF), a Physical Network Function (PNF), or a combination thereof.
[00119] In an aspect, the at least one action is selected from search, view,
analyze, generate reports, monitor specific error codes, service allocation, and resource management.
[00120] In an aspect, the method includes employing configuration management, alarm management, and counter management.
[00121] In an aspect, the method includes employing a microservice-based architecture configured to employ machine learning as a Service (MLaaS) to enable a plurality of operations within a distributed data lake (462).
[00122] In an aspect, the method includes providing, by the normalization layer engine (434), the normalized data to an analysis engine (452), a correlation engine (444), a service quality manager (480), and a streaming engine (456).
[00123] In an aspect, the method includes generating, by a forecasting engine
(470), forecasts depicting future trends and outcomes by analysing the processed data.
[00124] In an exemplary aspect, the present disclosure discloses a user equipment communicatively coupled with the network (106). The coupling comprises receiving, by a receiving unit, at least one request from a user, determining, by a processing engine, a type of the at least one received request that is indicative of a type of action that is to be performed on a data record corresponding to/associated with a network function in responding to the request, performing, by the processing engine, at least one action based on the determined type of request, generating, by the processing engine, a processed data on completion of the at least one action, and storing, in a plurality of database(s), the processed data.
[00125] FIG. 6 is an illustration (600) of a non-limiting example of details of computing hardware used in the system (108), in accordance with an embodiment of the present disclosure. As shown in FIG. 6, the system (108) may include an
external storage device (610), a bus (620), a main memory (630), a read-only
memory (640), a mass storage device (650), a communication port (660), and a
processor (670). A person skilled in the art will appreciate that the system (108)
may include more than one processor (670) and communication ports (660).
The processor (670) may include various modules associated with embodiments of the
present disclosure.
[00126] In an embodiment, the communication port (660) is any of an RS-
232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a
Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or
other existing or future ports. The communication port (660) is chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the system (108) connects.
[00127] In an embodiment, the memory (630) is Random Access Memory
(RAM), or any other dynamic storage device commonly known in the art. Read-only memory (640) is any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (670).
[00128] In an embodiment, the mass storage (650) is any current or future
mass storage solution, which is used to store information and/or instructions.
Exemplary mass storage solutions include, but are not limited to, Parallel Advanced
Technology Attachment (PATA) or Serial Advanced Technology Attachment
(SATA) hard disk drives or solid-state drives (internal or external, e.g., having
Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs,
Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g.,
SATA arrays).
[00129] In an embodiment, the bus (620) communicatively couples the
processor(s) (670) with the other memory, storage, and communication blocks. The bus (620) is, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-
X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the system (108).
[00130] Optionally, operator and administrative interfaces, e.g., a display,
keyboard, joystick, and a cursor control device, may also be coupled to the bus
(620) to support direct operator interaction with the system (108). Other operators
and administrative interfaces are provided through network connections connected
through the communication port (660). The components described above are meant
only to exemplify various possibilities. In no way should the aforementioned
exemplary illustration (600) limit the scope of the present disclosure.
[00131] The method and system of the present disclosure may be
implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any
combination of software, hardware, and firmware. The above-described order for
the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs
including machine-readable instructions for implementing the methods according
to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
[00132] While the foregoing describes various embodiments of the present
disclosure, other and further embodiments of the present disclosure may be devised
without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the present disclosure when
combined with information and knowledge available to the person having ordinary skill in the art.
[00133] The present disclosure discloses a system and method for performing
dynamic service and resource management in a network. This advancement
addresses the limitations of existing static network management solutions by
enabling real-time allocation of resources based on current demands. The disclosure
involves a system with a processing engine that analyzes user requests and network
data. This offers significant improvements in efficiency and adaptability for
network operations. By implementing features like machine learning and parallel
processing, the disclosed invention enhances network troubleshooting, resource
allocation, and service delivery, resulting in improved network performance and reduced costs.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00134] The present disclosure leverages machine learning to detect anomalous network patterns and creates reports and alerts based on the detected events.
[00135] The present disclosure helps in proactive issue root cause analysis
and resolution before a network symptom affects operations.
[00136] The present disclosure helps in operational insights, data binding, as
well as correlation without writing a single line of code.
[00137] The present disclosure provides auto-triggering of workflows and
organizational assignment of tasks, bringing transparency and resolution.
[00138] The present disclosure helps in automating workflow steps through
artificial intelligence and machine learning.
WE CLAIM:
1. A system (108) for performing dynamic service and resource management
in a network, the system (108) comprising:
a receiving unit (208) configured to receive at least one request from a user (102);
a processing engine (210) configured to:
determine a type of the at least one received request that is indicative of a type of action that is to be performed on a data record corresponding to/associated with a network function in responding to the at least one request;
perform at least one action based on the determined type of request;
generate a processed data on completion of the at least one action; and
a plurality of database(s) (220) configured to store the processed data.
2. The system (108) as claimed in claim 1, wherein the processing engine (210)
comprises:
an ingestion layer engine (432) configured to ingest at least one type of the data record received from the network function and generate an ingested data;
a normalization layer engine (434) configured for normalizing the ingested data;
a message broker platform (464) configured to receive the normalized data and manage real-time data streams catering to a plurality of users (102); and
a scheduling layer engine (478) configured to enable the plurality of users (102) to execute a plurality of tasks based on user-
configured intervals corresponding to the real-time data streams managed by the message broker platform (464).
3. The system (108) as claimed in claim 1, wherein the network function includes a Virtual Network Function (VNF), a Physical Network Function (PNF), or a combination thereof.
4. The system (108) as claimed in claim 1, wherein the at least one action is selected from search, view, analyze, generate reports, monitor specific error codes, service allocation, and resource management.
5. The system (108) as claimed in claim 1, wherein the system (108) is further configured to employ a configuration management, an alarm management, and a counter management.
6. The system (108) as claimed in claim 1, further comprising a microservice-based architecture configured to employ Machine Learning as a Service (MLaaS) to enable a plurality of operations within a distributed data lake (462).
7. The system (108) as claimed in claim 2, wherein the normalization layer engine (434) is configured to provide the normalized data to an analysis engine (452), a correlation engine (444), a service quality manager (480), and a streaming engine (456).
8. The system (108) as claimed in claim 7, wherein the data generated by the analysis engine (452), the correlation engine (444), the service quality manager (480), and the streaming engine (456) is a geographic location-based data to be presented over a map on a Graphic User Interface (GUI) (314) by a mapping layer engine (440).
9. The system (108) as claimed in claim 1, further comprising a forecasting engine (470) configured to generate forecasts depicting future trends and outcomes by analysing the processed data.
10. The system (108) as claimed in claim 1, wherein the processing engine (210) is configured as a parallel computing framework (474) that provides parallel execution of computing tasks.
11. A method (500) for performing dynamic service and resource management in a network, the method (500) comprising:
receiving (502), by a receiving unit (208), at least one request from a user (102);
determining (504), by a processing engine (210), a type of the at least one received request that is indicative of a type of action that is to be performed on a data record corresponding to/associated with a network function in responding to the at least one request;
performing (506), by the processing engine (210), at least one action based on the determined type of request;
generating (508), by the processing engine (210), a processed data on completion of the at least one action; and
storing (510), in a plurality of database(s) (220), the processed data.
12. The method (500) as claimed in claim 11, further comprising steps of:
ingesting, by an ingestion layer engine (432), at least one type of the data record received from the network function and generating an ingested data;
normalizing, by a normalization layer engine (434), the ingested data;
receiving, by a message broker platform (464), the normalized data and managing real-time data streams catering to a plurality of users (102); and
executing, by a scheduling layer engine (478), a plurality of tasks based on user-configured intervals corresponding to the real-time data streams managed by the message broker platform (464).
13. The method (500) as claimed in claim 11, wherein the network function includes a Virtual Network Function (VNF), a Physical Network Function (PNF), or a combination thereof.
14. The method (500) as claimed in claim 11, wherein the at least one action is selected from search, view, analyze, generate reports, monitor specific error codes, service allocation, and resource management.
15. The method (500) as claimed in claim 11, further comprising employing a configuration management, an alarm management, and a counter management.
16. The method (500) as claimed in claim 11, further comprising employing a microservice-based architecture configured to employ Machine Learning as a Service (MLaaS) to enable a plurality of operations within a distributed data lake (462).
17. The method (500) as claimed in claim 12, further comprising providing, by the normalization layer engine (434), the normalized data to an analysis engine (452), a correlation engine (444), a service quality manager (480), and a streaming engine (456).
18. The method (500) as claimed in claim 11, further comprising generating, by a forecasting engine (470), forecasts depicting future trends and outcomes by analysing the processed data.
19. The method (500) as claimed in claim 11, further comprising configuring the processing engine (210) as a parallel computing framework (474) that provides parallel execution of computing tasks.
20. A user equipment (UE) (104) communicatively coupled with a network (106), the coupling comprises steps of:
receiving, by a receiving unit, at least one request from a user (102);
determining, by a processing engine (210), a type of the at least one received request that is indicative of a type of action that is to be performed on a data record corresponding to/associated with a network function in responding to the at least one request;
performing, by the processing engine (210), at least one action based on the determined type of request;
generating, by the processing engine (210), a processed data on completion of the at least one action; and
storing, in a plurality of database(s) (220), the processed data.
| # | Name | Date |
|---|---|---|
| 1 | 202321047450-STATEMENT OF UNDERTAKING (FORM 3) [14-07-2023(online)].pdf | 2023-07-14 |
| 2 | 202321047450-PROVISIONAL SPECIFICATION [14-07-2023(online)].pdf | 2023-07-14 |
| 3 | 202321047450-FORM 1 [14-07-2023(online)].pdf | 2023-07-14 |
| 4 | 202321047450-DRAWINGS [14-07-2023(online)].pdf | 2023-07-14 |
| 5 | 202321047450-DECLARATION OF INVENTORSHIP (FORM 5) [14-07-2023(online)].pdf | 2023-07-14 |
| 6 | 202321047450-FORM-26 [13-09-2023(online)].pdf | 2023-09-13 |
| 7 | 202321047450-POA [29-05-2024(online)].pdf | 2024-05-29 |
| 8 | 202321047450-FORM 13 [29-05-2024(online)].pdf | 2024-05-29 |
| 9 | 202321047450-AMENDED DOCUMENTS [29-05-2024(online)].pdf | 2024-05-29 |
| 10 | 202321047450-Power of Attorney [04-06-2024(online)].pdf | 2024-06-04 |
| 11 | 202321047450-Covering Letter [04-06-2024(online)].pdf | 2024-06-04 |
| 12 | 202321047450-ORIGINAL UR 6(1A) FORM 26-120624.pdf | 2024-06-20 |
| 13 | 202321047450-ENDORSEMENT BY INVENTORS [03-07-2024(online)].pdf | 2024-07-03 |
| 14 | 202321047450-DRAWING [03-07-2024(online)].pdf | 2024-07-03 |
| 15 | 202321047450-CORRESPONDENCE-OTHERS [03-07-2024(online)].pdf | 2024-07-03 |
| 16 | 202321047450-COMPLETE SPECIFICATION [03-07-2024(online)].pdf | 2024-07-03 |
| 17 | Abstract-1.jpg | 2024-08-06 |
| 18 | 202321047450-CORRESPONDENCE(IPO)-(WIPO DAS)-06-08-2024.pdf | 2024-08-06 |
| 19 | 202321047450-FORM 18 [26-09-2024(online)].pdf | 2024-09-26 |
| 20 | 202321047450-FORM 3 [04-11-2024(online)].pdf | 2024-11-04 |