Method And System For Performing Dynamic Resource Management In A Communication Network

Abstract: The present disclosure relates to a method and system for performing dynamic resource management in a communication network. The method includes receiving, by a transceiver unit [302] at a CMP microservice [304], a breached response output comprising a resource threshold limit exceed event. Next, the method includes transmitting, by the transceiver unit [302] from the CMP microservice [304], the resource threshold limit exceed event to an NPDA microservice [306]. Next, the method includes initiating, by an initiating unit [308] at the NPDA microservice [306], a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints. Next, the method includes triggering, by a trigger unit [310] at the NPDA microservice [306], one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources. [FIG. 4]

Patent Information

Application #:
Filing Date: 13 September 2023
Publication Number: 12/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR PERFORMING DYNAMIC RESOURCE MANAGEMENT IN A COMMUNICATION NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR PERFORMING DYNAMIC RESOURCE MANAGEMENT IN A COMMUNICATION NETWORK
FIELD OF INVENTION
[0001] The present disclosure generally relates to the field of wireless communication systems. More particularly, embodiments of the present disclosure relate to a method and a system for performing dynamic resource management in a communication network.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] Over the last few years, the demand for scalable and resilient software systems has increased significantly. Additionally, microservices architecture has emerged as a popular approach for designing and deploying complex applications due to its flexibility and modularity. However, managing resources and ensuring optimal system functionality in a microservices environment can be challenging. For example, in microservices-based systems, the accurate transmission of responses, especially those that breach predefined resource threshold limits, is critical. When resource constraints are not appropriately managed, this can lead to system overload, performance degradation, and service disruptions, ultimately affecting user experience.
[0005] Further, over time, various solutions have been developed to address resource management and system scalability in microservices architecture. However, there are certain challenges with the existing solutions. For example, the existing solutions often involve static resource allocation, manual intervention, or rudimentary scaling strategies.
[0006] Thus, there exists an imperative need in the art to provide a method and a system to address the challenges associated with resource management and system overload prevention in microservices architecture, which the present disclosure aims to address.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for performing dynamic resource management in a communication network. The method includes receiving, by a transceiver unit at a capacity management platform (CMP) microservice, a breached response output comprising a resource threshold limit exceed event. The method further comprises transmitting, by the transceiver unit from the CMP microservice, the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice. The method further includes initiating, by an initiating unit at the NPDA microservice, a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints. The method further includes triggering, by a trigger unit at the NPDA microservice, one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.
[0009] In an exemplary aspect of the present disclosure, the breached response output is generated during an instantiation phase.

[00010] In an exemplary aspect of the present disclosure, the resource threshold limit
exceed event is triggered when usage of resource constraints exceeds a predetermined limit.
[00011] In an exemplary aspect of the present disclosure, the method further comprises
validating, by a validation unit, the resource constraints at the NPDA microservice.
[00012] In an exemplary aspect of the present disclosure, the resource constraints are
validated before triggering the one of scale in or scale out actions.
[00013] In an exemplary aspect of the present disclosure, the hysteresis process involves
a dynamic adjustment of resource allocation parameters in real-time.
[00014] In an exemplary aspect of the present disclosure, the network resources
comprise at least one of a virtual network function (VNF), a virtual network function component (VNFC), a containerized network function (CNF), and a containerized network function component (CNFC).
[00015] In an exemplary aspect of the present disclosure, the resource constraints
comprise at least one from among a central processing unit (CPU), a Random Access Memory (RAM), and a storage level.
[00016] Another aspect of the present disclosure may relate to a system for performing
dynamic resource management in a communication network. The system includes a transceiver unit configured to receive, at a capacity management platform (CMP) microservice, a breached response output comprising a resource threshold limit exceed event. The transceiver unit is further configured to transmit, from the CMP microservice, the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice. The system further includes an initiating unit connected to at least the transceiver unit, wherein the initiating unit is configured to initiate, at the NPDA microservice, a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints. The system further comprises a trigger unit connected to at least the initiating unit, wherein the trigger unit is configured to trigger, at the NPDA microservice, one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.

[00017] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for performing dynamic resource management in a communication network, the instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit to receive, at a capacity management platform (CMP) microservice, a breached response output comprising a resource threshold limit exceed event. The executable code when executed further causes the transceiver unit to transmit, from the CMP microservice, the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice. The executable code when executed further causes an initiating unit to initiate, at the NPDA microservice, a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints. The executable code when executed further causes a trigger unit to trigger, at the NPDA microservice, one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.
OBJECTS OF THE INVENTION
[00018] Some of the objects of the present disclosure, which at least one embodiment
disclosed herein satisfies are listed herein below.
[00019] It is an object of the present disclosure to provide a system and a method to
enable microservices to apply hysteresis principles for management and optimization of critical system resources.
[00020] It is another object of the present disclosure to provide a solution to perform
accurate transmission and handling of resources such as a virtual network function (VNF), a virtual network function component (VNFC), a containerized network function (CNF), and a containerized network function component (CNFC), based on predefined threshold limits of system constraints (such as CPU, RAM).
DESCRIPTION OF THE DRAWINGS
[00021] The accompanying drawings, which are incorporated herein, and constitute a
part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon
clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[00022] FIG. 1 illustrates an exemplary block diagram representation of 5th generation
core (5GC) network architecture.
[00023] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[00024] FIG. 3 illustrates an exemplary block diagram of a system for performing
dynamic resource management in a communication network, in accordance with exemplary implementations of the present disclosure.
[00025] FIG. 4 illustrates a method flow diagram for performing dynamic resource
management in a communication network, in accordance with exemplary implementations of the present disclosure.
[00026] FIG. 5 illustrates an exemplary block diagram of a system architecture for
performing dynamic resource management in a communication network, in accordance with exemplary implementations of the present disclosure.
[00027] FIG. 6 illustrates a process flow diagram for performing dynamic resource
management in a communication network, in accordance with exemplary implementations of the present disclosure.
[00028] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION

[00029] In the following description, for the purposes of explanation, various specific
details are set forth in order to provide a thorough understanding of embodiments of the present
disclosure. It will be apparent, however, that embodiments of the present disclosure may be
practiced without these specific details. Several features described hereafter may each be used
independently of one another or with any combination of other features. An individual feature
may not address any of the problems discussed above or might address only some of the problems discussed above.
[00030] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing
description of the exemplary embodiments will provide those skilled in the art with an enabling
description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[00031] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of ordinary skill in
the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[00032] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or
a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[00033] The word “exemplary” and/or “demonstrative” is used herein to mean serving
as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary
structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the
detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[00034] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[00035] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a
smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless
communication device”, “a mobile communication device”, “a communication device” may
be any electrical, electronic and/or computing device or equipment, capable of implementing
the features of the present disclosure. The user equipment/device may include, but is not limited
to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital
assistant, tablet computer, wearable device or any other computing device which is capable of
implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[00036] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a form
readable by a computer or similar machine. For example, a computer-readable medium
includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk
storage media, optical storage media, flash memory devices or other types of machine-
accessible storage media. The storage unit stores at least the data that may be required by one
or more units of the system to perform their respective functions.
[00037] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[00038] All modules, units, components used herein, unless explicitly excluded herein,
may be software modules or hardware processors, the processors being a general-purpose
processor, a special purpose processor, a conventional processor, a digital signal processor
(DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP
core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field
Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[00039] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/components within the system and/or connected with the system.
[00040] As used herein, the virtual network function (VNF) refers to a network function
node that operates in virtualized environments such as virtual machines or containers. This virtualization allows for dynamic scaling and rapid adaptation to changing network conditions, while reducing hardware requirements.
[00041] As used herein, the virtual network function component (VNFC) refers to a sub-component within a virtual network function (VNF) that performs a specific task or set of tasks related to the overall network function. VNFCs decompose VNFs into smaller units, each responsible for unique functions, such as packet inspection, policy enforcement, etc.
[00042] As used herein, the containerized network function (CNF) refers to a network
function that acts as a portable container, which includes all necessary configurations. CNFs offer increased portability and scalability compared to traditional network functions.
[00043] As used herein, the Containerized Network Function Component (CNFC) refers
to a subcomponent of a Containerized Network Function (CNF) that performs a specific task
or set of tasks within the broader network function. CNFCs are deployed in containers, offering
the same advantages as CNFs, which include efficient resource management.
[00044] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for performing dynamic resource management in a communication network.
[00045] FIG. 1 illustrates an exemplary block diagram representation of 5th generation
core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy
(SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific
Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], a data
network (DN) [130], wherein all the components are assumed to be connected to each other in
a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[00046] Radio Access Network (RAN) [104] is the part of a mobile telecommunications
system that connects user equipment (UE) [102] to the core network (CN) and provides access
to different types of networks (e.g., 5G network). It consists of radio base stations and the radio
access technologies that enable wireless communication.
[00047] Access and Mobility Management Function (AMF) [106] is a 5G core network
function responsible for managing access and mobility aspects, such as UE registration,
connection, and reachability. It also handles mobility management procedures like handovers
and paging.
[00048] Session Management Function (SMF) [108] is a 5G core network function
responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[00049] Service Communication Proxy (SCP) [110] is a network function in the 5G core
network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[00050] Authentication Server Function (AUSF) [112] is a network function in the 5G
core responsible for authenticating UEs during registration and providing security services. It
generates and verifies authentication vectors and tokens.
[00051] Network Slice Specific Authentication and Authorization Function (NSSAAF)
[114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[00052] Network Slice Selection Function (NSSF) [116] is a network function
responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[00053] Network Exposure Function (NEF) [118] is a network function that exposes
capabilities and services of the 5G network to external applications, enabling integration with
third-party services and applications.
[00054] Network Repository Function (NRF) [120] is a network function that acts as a
central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[00055] Policy Control Function (PCF) [122] is a network function responsible for
policy control decisions, such as QoS, charging, and access control, based on subscriber
information and network policies.
[00056] Unified Data Management (UDM) [124] is a network function that centralizes
the management of subscriber data, including authentication, authorization, and subscription information.
[00057] Application Function (AF) [126] is a network function that represents external
applications interfacing with the 5G core network to access network capabilities and services.
[00058] User Plane Function (UPF) [128] is a network function responsible for handling
user data traffic, including packet routing, forwarding, and QoS enforcement.
[00059] Data Network (DN) [130] refers to a network that provides data services to user
equipment (UE) in a telecommunications system. The data services may include, but are not limited to, Internet services and private data network related services.
[00060] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also
referred to herein as computer system [200]) upon which the features of the present disclosure
may be implemented in accordance with exemplary implementation of the present disclosure.
In an implementation, the computing device [200] may also implement a method for
performing dynamic resource management in a communication network utilising the system.
In another implementation, the computing device [200] itself implements the method for
performing dynamic resource management in a communication network, using one or more
units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[00061] The computing device [200] may include a bus [202] or other communication
mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[00062] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive
is provided and coupled to the bus [202] for storing information and instructions. The
computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode
ray tube (CRT), Liquid crystal Display (LCD), Light Emitting Diode (LED) display, Organic
LED (OLED) display, etc. for displaying information to a computer user. An input device
[214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a mouse, a
trackball, or cursor direction keys, for communicating direction information and command
selections to the processor [204], and for controlling cursor movement on the display [212].
This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a
second axis (e.g., y), that allow the device to specify positions in a plane.
[00063] The computing device [200] may implement the techniques described herein
using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques
herein are performed by the computing device [200] in response to the processor [204]
executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described
herein. In alternative implementations of the present disclosure, hard-wired circuitry may be
used in place of or in combination with software instructions.
[00064] The computing device [200] also may include a communication interface [218]
coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For
example, the communication interface [218] may be an integrated services digital network
(ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing various types of information.
[00065] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the communication interface
[218]. In the Internet example, a server [230] might transmit a requested code for an application
program through the Internet [228], the ISP [226], the local network [222], a host [224] and the communication interface [218]. The received code may be executed by the processor [204]
as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[00066] The computing device [200] encompasses a wide range of electronic devices
capable of processing data and performing computations. Examples of computing device [200]
include, but are not limited to, personal computers, laptops, tablets, smartphones, servers,
and embedded systems. The devices may operate independently or as part of a network and
can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally,
computing device [200] may include peripheral devices, such as monitors, keyboards, and
printers, as well as integrated components within larger electronic systems, showcasing their
versatility in various technological applications.
[00067] Referring to FIG. 3, an exemplary block diagram of a system [300] for performing dynamic resource management in a communication network is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], at least one capacity management platform (CMP) microservice [304], at least one network function virtualization platform decision analytics (NPDA) microservice [306], at least one initiating unit [308], at least one trigger unit [310], and at least one validating unit [312]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity. Further, in another implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device, or may be independent of, but in communication with, the user device (also referred to herein as a UE). In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[00068] The system [300] is configured for performing dynamic resource management
in a communication network with the help of the interconnection between the components/units
of the system [300].
[00069] The system [300] comprises a transceiver unit [302] configured to receive, at a
capacity management platform (CMP) microservice [304], a breached response output comprising a resource threshold limit exceed event.
[00070] The transceiver unit [302] receives the breached response output at the capacity
management platform (CMP) microservice [304]. The breached response output comprises the
resource threshold limit exceed event. In an exemplary aspect, the breached response output
received at the transceiver unit [302] acts like a notification indicating that a resource threshold
limit of various components such as but not limited to a central processing unit (CPU), random
access memory (RAM), storage, etc. has exceeded the prescribed limit. For example, the
resource threshold limit exceed event is only fired if the system constraint (i.e., CPU/RAM/storage) usage exceeds the prescribed limit.
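By way of a non-limiting illustration only, the following Python sketch shows one way such a CMP-side check could be expressed; the names THRESHOLDS, ThresholdExceedEvent, and check_constraints are hypothetical and do not form part of the disclosed system.

    # Illustrative sketch only: a hypothetical CMP-side check that fires a
    # "resource threshold limit exceed" event when a monitored constraint
    # (CPU, RAM or storage) rises above its prescribed limit.
    from dataclasses import dataclass
    import time

    # Hypothetical prescribed limits, expressed as utilisation fractions.
    THRESHOLDS = {"cpu": 0.80, "ram": 0.75, "storage": 0.90}

    @dataclass
    class ThresholdExceedEvent:
        constraint: str          # e.g. "cpu", "ram", "storage"
        usage: float             # observed utilisation (0.0 - 1.0)
        limit: float             # prescribed limit that was breached
        timestamp: float         # time of observation
        phase: str = "instantiation"

    def check_constraints(usage):
        """Return one event per constraint whose usage exceeds its limit."""
        events = []
        for name, value in usage.items():
            limit = THRESHOLDS.get(name)
            if limit is not None and value > limit:
                events.append(ThresholdExceedEvent(name, value, limit, time.time()))
        return events

    # Example: CPU at 92% breaches the 80% limit, so one event is returned.
    breaches = check_constraints({"cpu": 0.92, "ram": 0.60, "storage": 0.40})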
[00071] In an exemplary aspect, the breached response output is derived from a create
task event, which means it was generated due to an ongoing task or process that has surpassed
acceptable resource usage limits.
[00072] In an exemplary aspect, the threshold limit exceed event indicates that the resources
are under strain and could lead to performance issues if not addressed.
[00073] In an exemplary aspect, the breached response output is generated during an
instantiation phase. In an exemplary aspect, the instantiation phase is a period when resources
are initially allocated or set up for a new task, service, or application within the network. During this phase, the system [300] monitors resource usage closely as the new resources are being put into operation.
[00074] The transceiver unit [302] within the CMP microservice [304] receives the
breached response output during the instantiation phase, suggesting that resource usage during
the setup phase has exceeded the threshold limits.
[00075] The transceiver unit [302] is configured to transmit, from the CMP microservice
[304], the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice [306].
[00076] In an exemplary aspect, upon receiving the breached response output, the transceiver unit [302] transmits the resource threshold limit exceed event to the network function virtualization platform decision analytics (NPDA) microservice [306] via the CMP microservice [304].
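As a purely illustrative sketch, the event could be forwarded from the CMP microservice [304] to the NPDA microservice [306] over a service-based interface such as HTTP; the endpoint URL and payload shape below are assumptions made for illustration and are not a defined API of the disclosed system.

    # Illustrative sketch only: forwarding the breach event to a hypothetical
    # NPDA endpoint over HTTP using the "requests" library.
    import requests

    NPDA_URL = "http://npda-service:8080/events/threshold-exceed"  # hypothetical

    def forward_to_npda(event):
        """POST the threshold-exceed event to the NPDA microservice."""
        try:
            response = requests.post(NPDA_URL, json=event, timeout=5)
            return response.status_code == 200
        except requests.RequestException:
            # In a real deployment the event would typically be retried or queued.
            return False

    forward_to_npda({"constraint": "cpu", "usage": 0.92,
                     "limit": 0.80, "phase": "instantiation"})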
[00077] The system [300] further comprises an initiating unit [308] connected to at least
the transceiver unit [302]. The initiating unit [308] is configured to initiate, at the NPDA
microservice [306], a hysteresis process in response to the resource threshold limit exceed
event, wherein the hysteresis process is performed on resource constraints.
[00078] The initiating unit [308] at the NPDA microservice [306] initiates the hysteresis process based on the received resource threshold limit exceed event. In an exemplary aspect, the hysteresis process involves analysing the current resource constraints to ascertain how to adjust the resources to maintain system stability and performance.
[00079] In an exemplary aspect, the resource constraints comprise at least one from
among a central processing unit (CPU), a Random Access Memory (RAM), and a storage level.
[00080] In an exemplary aspect, the hysteresis process, in response to the resource
threshold exceed event, is performed on various resource constraints which include at least one
from among a central processing unit (CPU), random access memory (RAM), storage, etc. The
hysteresis process analyses at least one of the current resource constraints to determine how to adjust the resources to maintain system stability and performance.
[00081] In an exemplary aspect, the hysteresis process involves a dynamic adjustment
of resource allocation parameters in real-time.
[00082] The hysteresis process involves dynamically adjusting resource allocation parameters in real-time to manage system resources effectively. In an exemplary aspect, resource allocation parameters are adjusted dynamically based on current usage patterns. For example, if CPU usage exceeds its threshold, additional virtual machines or containers might be provisioned. Similarly, if usage drops significantly, resources might be scaled back to avoid over-provisioning.
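By way of a non-limiting illustration, one common way to realise such a hysteresis is to use separate upper and lower thresholds, so that small fluctuations around a single limit do not cause repeated scale in/scale out cycles; the Python sketch below, with assumed band values, shows this idea and is not the only possible implementation of the hysteresis process.

    # Illustrative sketch only: a hysteresis band with distinct scale-out and
    # scale-in thresholds. The specific band values are assumptions.
    SCALE_OUT_THRESHOLD = 0.80   # scale out when usage rises above this
    SCALE_IN_THRESHOLD = 0.40    # scale in only when usage falls below this

    def hysteresis_decision(usage):
        """Map current usage of a constraint to a scaling decision."""
        if usage > SCALE_OUT_THRESHOLD:
            return "SCALE_OUT"   # add instances to absorb the load
        if usage < SCALE_IN_THRESHOLD:
            return "SCALE_IN"    # release surplus instances
        return "NO_ACTION"       # inside the band: leave the allocation unchanged

    # Usage inside the band produces no action, which is what prevents
    # oscillation between scale in and scale out on small fluctuations.
    print(hysteresis_decision(0.92))  # SCALE_OUT
    print(hysteresis_decision(0.60))  # NO_ACTION
    print(hysteresis_decision(0.25))  # SCALE_IN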
[00083] In an exemplary aspect, the resource threshold limit exceed event is triggered
when the usage of resource constraints exceeds a predetermined limit.
[00084] In an exemplary aspect, the resource threshold limit exceed event is triggered when the usage of critical resource constraints such as, but not limited to, CPU, RAM, storage, etc. surpasses a predefined threshold. This predetermined limit acts as an indicator to monitor resource consumption. When actual usage exceeds this predetermined limit, a signal indicating the potential risk of system overload is raised. In an exemplary aspect, the resource threshold limit exceed event is generated to alert the system about the excessive resource usage, prompting it to initiate corrective actions to manage and balance the load, thereby maintaining optimal system performance and preventing potential failures.
[00085] The system further comprises a validating unit [312] configured to validate the
resource constraints at the NPDA microservice [306].
[00086] The validating unit [312] validates the resource constraints using the NPDA
microservice [306]. In an exemplary aspect, the validating unit [312] validates by verifying the
accuracy and reliability of the resource constraint data at the NPDA microservice [306]. By
confirming the validity of this data, the validating unit [312] supports informed decision-
making regarding resource scaling actions, helping to prevent erroneous or inefficient
adjustments and enhancing the overall effectiveness of the system’s dynamic adjustment of
resource allocation parameters.
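As a purely illustrative sketch, such validation could be performed by re-reading the current metrics and confirming that the reported breach is still present and within plausible bounds before any scaling decision is taken; the functions below and the tolerance value are hypothetical.

    # Illustrative sketch only: a hypothetical NPDA-side validation step that
    # confirms the reported constraint data before it is acted upon.
    def read_current_usage(constraint):
        # Placeholder for a real metrics query (e.g. to a monitoring system).
        return {"cpu": 0.91, "ram": 0.58, "storage": 0.41}.get(constraint)

    def validate_event(event, tolerance=0.05):
        """Accept the event only if fresh metrics still show a breach."""
        usage = read_current_usage(event["constraint"])
        if usage is None or not 0.0 <= usage <= 1.0:
            return False                      # implausible or missing data
        # The breach must still hold (within a small tolerance) when re-checked.
        return usage + tolerance >= event["limit"]

    is_valid = validate_event({"constraint": "cpu", "usage": 0.92, "limit": 0.80})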
[00087] The system [300] further comprises a trigger unit [310] connected to at least the
initiating unit [308]. The trigger unit [310] is further configured to trigger, at the NPDA microservice [306], one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.
[00088] The trigger unit [310] triggers, based on the hysteresis process, one of scale in or scale out actions related to the network resources in order to efficiently and effectively manage the network resources. In an exemplary aspect, the trigger unit [310] triggers, at the NPDA microservice [306], a scale in action, which reduces the number of allocated resources by shutting down unnecessary network resources. Similarly, the trigger unit [310] triggers, at the NPDA microservice [306], a scale out action by adding additional resources to accommodate the increased load.
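By way of a non-limiting illustration, the sketch below shows how a scaling decision might be translated into a change of instance count for a network resource such as a VNF or CNF; the orchestrator call is represented by a placeholder function, since the actual scaling interface is deployment-specific, and the resource identifier is hypothetical.

    # Illustrative sketch only: mapping a hysteresis decision to a change in
    # the number of instances of a network resource (VNF/VNFC/CNF/CNFC).
    MIN_INSTANCES = 1
    MAX_INSTANCES = 10

    def set_desired_instances(resource_id, count):
        # Placeholder for the deployment-specific orchestration request.
        print(f"scaling {resource_id} to {count} instance(s)")

    def apply_scaling(resource_id, current_instances, decision):
        """Compute the new instance count and hand it to the orchestrator."""
        if decision == "SCALE_OUT":
            target = min(current_instances + 1, MAX_INSTANCES)
        elif decision == "SCALE_IN":
            target = max(current_instances - 1, MIN_INSTANCES)
        else:
            target = current_instances        # NO_ACTION: leave unchanged
        if target != current_instances:
            set_desired_instances(resource_id, target)
        return target

    apply_scaling("cnf-upf-01", 3, "SCALE_OUT")   # hypothetical CNF identifier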
[00089] In an exemplary aspect, the resource constraints are validated before triggering
the one of scale in or scale out actions.
[00090] Before triggering a scale in or scale out action by the trigger unit [310], the resource constraints are validated to ensure that the decision to adjust resources is based on accurate and reliable data. This validation process involves checking the reported usage of resource constraints such as CPU, RAM, storage, etc. against the threshold limit exceed event. By performing this validation, the system [300] avoids making dynamic adjustments based on erroneous or temporary spikes in resource usage, thereby preventing unnecessary scaling actions that could destabilize the system or lead to inefficiencies. This careful validation ensures that scale in or scale out actions are executed, when necessary, based on validated resource constraints, thus optimizing system performance and stability.
[00091] In an exemplary aspect, the network resources comprise at least one of a virtual
network function (VNF), a virtual network function component (VNFC), a containerized network function (CNF), and a containerized network function component (CNFC).
[00092] In an exemplary aspect, the virtual network function (VNF) refers to a network
function node that operates in virtualized environments such as virtual machines or containers. This virtualization allows for dynamic scaling and rapid adaptation to changing network conditions, while reducing hardware requirements.
[00093] In an exemplary aspect, the virtual network function component (VNFC) refers to a sub-component within a virtual network function (VNF) that performs a specific task or set of tasks related to the overall network function. VNFCs decompose VNFs into smaller units, each responsible for unique functions, such as packet inspection, etc.
[00094] In an exemplary aspect, the containerized network function (CNF) refers to a
network function that acts as a portable container, which includes all necessary configurations.
CNFs offer increased portability, and scalability compared to traditional network functions.
More particularly, the containerized network function (CNF) is a network function node that
is implemented and deployed within lightweight, portable containers, which include all
necessary dependencies and configurations. By using containerization technology, CNFs offer
increased portability, efficiency, and scalability compared to traditional network functions.
Containers enable CNFs to run consistently across various environments, from on-premises data centres to cloud platforms, while allowing for dynamic scaling and rapid deployment.
[00095] In an exemplary aspect, the Containerized Network Function Component
(CNFC) refers to a subcomponent of a Containerized Network Function (CNF) that performs
a specific task or set of tasks within the broader network function. CNFCs are deployed in
containers, offering the same advantages as CNFs, which include efficient resource management.
[00096] Referring to FIG. 4, an exemplary method flow diagram [400] for performing
dynamic resource management in a communication network in accordance with exemplary
implementations of the present disclosure is shown. In an implementation, the method [400] is
performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[00097] At step 404, the method [400] comprises receiving, by a transceiver unit [302]
at a capacity management platform (CMP) microservice [304], a breached response output
comprising a resource threshold limit exceed event.
[00098] The transceiver unit [302] receives the breached response output at the capacity management platform (CMP) microservice [304]. The breached response output comprises the resource threshold limit exceed event. In an exemplary aspect, the breached response output received at the transceiver unit [302] acts like a notification indicating that a resource threshold limit of various components such as, but not limited to, a central processing unit (CPU), random access memory (RAM), storage, etc. has been exceeded.
[00099] In an exemplary aspect, the breached response output is derived from a create
task event, which means it was generated due to an ongoing task or process that has surpassed
acceptable resource usage limits.
[000100] In an exemplary aspect, the threshold limit exceed event indicates that the resources
are under strain and could lead to performance issues if not addressed.
[000101] In an exemplary aspect, the breached response output is generated during an
instantiation phase. In an exemplary aspect, the instantiation phase is a period when resources
are initially allocated or set up for a new task, service, or application within the network. During
this phase, the system [300] monitors resource usage closely as the new resources are being put into operation.
[000102] The transceiver unit [302] within the CMP microservice [304] receives the
breached response output during the instantiation phase, suggesting that resource usage during the setup phase has exceeded the threshold limits.
[000103] At step 406, the method [400] further comprises transmitting, by the transceiver
unit [302] from the CMP microservice [304], the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice [306].
[000104] In an exemplary aspect, upon receiving the breached response output, the
transceiver unit [302] transmits the resource threshold limit exceed event to the network function virtualization platform decision analytics (NPDA) microservice [306] via the CMP microservice [304].
[000105] At step 408, the method [400] further comprises initiating, by an initiating unit
[308] at the NPDA microservice [306], a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints.
[000106] The initiating unit [308] at the NPDA microservice [306] initiates the hysteresis process based on the received resource threshold limit exceed event. In an exemplary aspect,
the hysteresis process involves analysing the current resource constraints to ascertain how to adjust the resources to maintain system stability and performance.
[000107] In an exemplary aspect, the resource constraints comprise at least one from
among a central processing unit (CPU), a Random Access Memory (RAM), and a storage level.
[000108] In an exemplary aspect, the hysteresis process, in response to the resource
threshold exceed event, is performed on various resource constraints which include at least one from among central processing unit (CPU), random access memory (RAM), storage etc. The hysteresis process analyses at least one of the current resource constraints to determine how to adjust the resources to maintain system stability and performance.
[000109] In an exemplary aspect, the hysteresis process involves a dynamic adjustment
of resource allocation parameters in real-time.
[000110] The hysteresis process involves dynamically adjusting resource allocation parameters in real-time to manage system resources effectively. In an exemplary aspect, resource allocation parameters are adjusted dynamically based on current usage patterns. For example, if CPU usage exceeds its threshold, additional virtual machines or containers might be provisioned. Similarly, if usage drops significantly, resources might be scaled back to avoid over-provisioning.
[000111] In an exemplary aspect, the resource threshold limit exceed event is triggered
when usage of resource constraints exceeds a predetermined limit.
[000112] In an exemplary aspect, the resource threshold limit exceed event is triggered when the usage of critical resource constraints such as, but not limited to, CPU, RAM, storage, etc. surpasses a predefined threshold. This predetermined limit acts as an indicator to monitor resource consumption. When actual usage exceeds this predetermined limit, a signal indicating the potential risk of system overload is raised. In an exemplary aspect, the resource threshold limit exceed event is generated to alert the system about the excessive resource usage, prompting it to initiate corrective actions to manage and balance the load, thereby maintaining optimal system performance and preventing potential failures.
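As a purely illustrative sketch, the predetermined limits could be held in a simple per-resource configuration that also records the lower bound used by the hysteresis process; the structure, the resource identifier, and the values below are assumptions for illustration only.

    # Illustrative sketch only: hypothetical per-resource configuration of the
    # predetermined limits (upper) and the corresponding scale-in bounds (lower).
    SCALING_POLICY = {
        "cnf-smf-01": {
            "cpu":     {"upper": 0.80, "lower": 0.40},
            "ram":     {"upper": 0.75, "lower": 0.35},
            "storage": {"upper": 0.90, "lower": 0.50},
        },
    }

    def is_breach(resource_id, constraint, usage):
        """True when usage of the constraint exceeds its predetermined limit."""
        policy = SCALING_POLICY[resource_id][constraint]
        return usage > policy["upper"]

    print(is_breach("cnf-smf-01", "ram", 0.82))   # True: 82% > 75% limit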
[000113] The method [400] further comprises validating, by a validating unit [312], the
resource constraints at the NPDA microservice [306].
[000114] The validating unit [312] validates the resource constraints using the NPDA
microservice [306]. In an exemplary aspect, the validating unit [312] validates by verifying the
accuracy and reliability of the resource constraint data at the NPDA microservice [306]. By
confirming the validity of this data, the validating unit [312] supports informed decision-
making regarding resource scaling actions, helping to prevent erroneous or inefficient
adjustments and enhancing the overall effectiveness of the system’s dynamic adjustment of
resource allocation parameters.
[000115] At step 410, the method [400] further comprises triggering, by a trigger unit
[310] at the NPDA microservice [306], one of scale in or scale out actions associated with
network resources, based on the hysteresis process, to manage the network resources.
[000116] The trigger unit [310] triggers, based on the hysteresis process, one of scale in or scale out actions related to the network resources in order to efficiently and effectively manage the network resources. In an exemplary aspect, the trigger unit [310] triggers, at the NPDA microservice [306], a scale in action, which reduces the number of allocated resources by shutting down unnecessary network resources. Similarly, the trigger unit [310] triggers, at the NPDA microservice [306], a scale out action by adding additional resources to accommodate the increased load.
[000117] In an exemplary aspect, the resource constraints are validated before triggering
the one of scale in or scale out actions.
[000118] Before triggering a scale in or scale out action by the trigger unit [310], the resource constraints are validated to ensure that the decision to adjust resources is based on accurate and reliable data. This validation process involves checking the reported usage of resource constraints such as CPU, RAM, storage, etc. against the threshold limit exceed event. By performing this validation, the system [300] avoids making dynamic adjustments based on erroneous or temporary spikes in resource usage, thereby preventing unnecessary scaling actions that could destabilize the system or lead to inefficiencies. This careful validation ensures that scale in or scale out actions are executed, when necessary, based on validated resource constraints, thus optimizing system performance and stability.
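By way of a non-limiting illustration, one way to avoid reacting to temporary spikes is to require that the breach persists over several consecutive samples before a scaling action is allowed; the window length and sampling approach below are assumptions, not requirements of the disclosed method.

    # Illustrative sketch only: a hypothetical check that a breach is sustained
    # over several consecutive samples before scaling is permitted.
    from collections import deque

    WINDOW = 3   # number of consecutive samples that must show a breach

    class SustainedBreachFilter:
        def __init__(self, limit, window=WINDOW):
            self.limit = limit
            self.samples = deque(maxlen=window)

        def observe(self, usage):
            """Record a sample; return True only when every sample in the
            window exceeds the limit, i.e. the breach is not a momentary spike."""
            self.samples.append(usage)
            return (len(self.samples) == self.samples.maxlen
                    and all(value > self.limit for value in self.samples))

    cpu_filter = SustainedBreachFilter(limit=0.80)
    for sample in (0.92, 0.95, 0.91):      # three consecutive breaches
        sustained = cpu_filter.observe(sample)
    print(sustained)                        # True: safe to trigger scale out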
[000119] In an exemplary aspect, the network resources comprise at least one of a virtual
network function (VNF), a virtual network function component (VNFC), a containerized network function (CNF), and a containerized network function component (CNFC).
[000120] In an exemplary aspect, the method is implemented at the application level
within a microservices architecture.
[000121] At step 412, the method [400] terminates.
[000122] Referring to FIG. 5, an exemplary block diagram of a system architecture [500]
for performing dynamic resource management in a communication network is shown, in accordance with the exemplary implementations of the present disclosure.
[000123] The system architecture [500] comprises the CMP microservice [304] for transmitting the resource threshold limit exceed event during the instantiation phase to the NPDA microservice [306]. In an exemplary aspect, the threshold limit exceed event indicates that the resources are under strain and could lead to performance issues if not addressed. In an exemplary aspect, the CMP microservice [304] transmits the resource threshold limit exceed event during the instantiation phase, suggesting that resource usage during the setup phase has exceeded the threshold limits.
[000124] The NPDA [306] then performs a hysteresis process on resource constraints and
further transmits the same to the scale in/scale out operations module [502]. In an exemplary
aspect, the hysteresis process involves analysing the current resource constraints to ascertain
how to adjust the resources to maintain system stability and performance. In an exemplary
aspect, the hysteresis process, in response to the resource threshold exceed event, is performed
on various resource constraints which include at least one from among central processing unit (CPU), random access memory (RAM), storage etc. The hysteresis process analyses at least one of the current resource constraints to determine how to adjust the resources to maintain system stability and performance.
[000125] The scale in/scale out operations module [502] triggers, based on the hysteresis process, one of scale in or scale out actions related to the network resources in order to efficiently and effectively manage the network resources. In an exemplary aspect, the trigger unit [310] triggers, at the NPDA microservice [306], a scale in action, which reduces the number of allocated resources by shutting down unnecessary network resources. Similarly, the trigger unit [310] triggers, at the NPDA microservice [306], a scale out action by adding additional resources to accommodate the increased load.
[000126] Referring to FIG. 6, an exemplary method process [600] diagram for
performing dynamic resource management in a communication network in accordance with
exemplary implementations of the present disclosure is shown. In an implementation, the process [600] is performed by the system [300]. Also, as shown in FIG. 6, the process [600]
starts at step [602].
[000127] At step 604, the process [600] comprises transmitting, from the CMP microservice [304], data related to the resource threshold limit exceed event during the instantiation phase to the NPDA microservice [306]. In an exemplary aspect, the threshold limit exceed event indicates that the resources are under strain and could lead to performance issues if not addressed. In an exemplary aspect, the instantiation phase is a period when resources are initially allocated or set up for a new task, service, or application within the network.
[000128] At step 606, the process [600] comprises performing, at the NPDA microservice
[306], the hysteresis process based on the received resource threshold limit exceed event. In an
exemplary aspect, the hysteresis process involves analysing the current resource constraints to
ascertain how to adjust the resources to maintain system stability and performance. The

hysteresis process involves dynamically adjusting resource allocation parameters in real-time to manage system resources effectively. In an exemplary aspect, resource allocation parameters are adjusted dynamically based on current usage patterns. For example, if CPU usage exceeds its threshold limit, additional virtual machines or containers might be provisioned. Similarly, if usage drops significantly, resources might be scaled back to avoid over-provisioning.
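The paragraph above describes provisioning additional virtual machines or containers when CPU usage exceeds its threshold limit and scaling back when usage drops; one simple, illustrative way to express that adjustment is a target-utilisation calculation such as the Python sketch below, in which the 60% target and the replica bounds are assumptions chosen for the example.

```python
import math

def desired_replicas(current_replicas: int,
                     usage_percent: float,
                     target_percent: float = 60.0,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Recompute the replica count so average usage approaches the target.

    Usage well above the target grows the count (scale out); usage well
    below it shrinks the count (scale in), avoiding over-provisioning.
    """
    if usage_percent <= 0:
        return min_replicas
    wanted = math.ceil(current_replicas * usage_percent / target_percent)
    return max(min_replicas, min(max_replicas, wanted))

# 4 replicas at 90% CPU against a 60% target -> 6 replicas (scale out)
print(desired_replicas(current_replicas=4, usage_percent=90.0))
# 4 replicas at 20% CPU -> 2 replicas (scale in)
print(desired_replicas(current_replicas=4, usage_percent=20.0))
```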
[000129] At step 608, the process [600] comprises triggering, at the scale in/scale out operations module [502], one
of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources. In an exemplary aspect, the scale in/scale out operations module [502] triggers, based on the hysteresis process, one of scale in or scale out actions related to the network resources in order to efficiently and effectively manage the network resources. In an exemplary aspect, the trigger unit [310] triggers, at the NPDA microservice [306], a scale in action which reduces the number of resources allocated by shutting down unnecessary network resources. Similarly, the trigger unit [310] triggers, at the NPDA microservice [306], a scale out action by adding additional resources to accommodate the increased load.
[000130] At step 610, the process [600] terminates.
[000131] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for performing dynamic resource management in a communication network, wherein the instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit to receive, at a capacity management platform (CMP) microservice, a breached response output comprising a resource threshold limit exceed event. The executable code when executed further causes the transceiver unit to transmit, from the CMP microservice, the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice. The executable code when executed further causes an initiating unit to initiate, at the NPDA microservice, a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints. The executable code when executed further causes a trigger unit to trigger, at the NPDA microservice, one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.
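For illustration, the stored instructions described above can be pictured as the following Python sketch of the end-to-end flow, in which the validating unit, the hysteresis function and the trigger unit are hypothetical callables standing in for the corresponding units of the disclosure, and the event object is assumed to carry a list of reported constraints.

```python
def handle_breached_response(event, validating_unit, hysteresis, trigger_unit):
    """End-to-end sketch: receive the breached response output, validate the
    resource constraints, run the hysteresis process, then trigger scale in
    or scale out for each constraint that needs it.

    All collaborators are hypothetical callables; `event.constraints` is an
    assumed attribute holding the reported resource constraints.
    """
    constraints = validating_unit(event.constraints)        # validate resource constraints
    decisions = {c.name: hysteresis(c.usage_percent) for c in constraints}
    for name, action in decisions.items():
        if action != "no_action":
            trigger_unit(name, action)                       # scale in / scale out
    return decisions
```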

[000132] As is evident from the above, the present disclosure provides a technically
advanced solution for performing dynamic resource management in a communication network. The present invention provides a solution for performing dynamic and automated resource management that adapts to changing workloads and resource demands in microservices architecture, ensuring optimal system performance while preventing overload.
[000133] Further, in accordance with the present disclosure, it is to be acknowledged that
the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[000134] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.

We Claim:
1. A method for performing dynamic resource management in a communication network,
the method comprising:
receiving, by a transceiver unit [302] at a capacity management platform (CMP) microservice [304], a breached response output comprising a resource threshold limit exceed event;
transmitting, by the transceiver unit [302] from the CMP microservice [304], the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice [306];
initiating, by an initiating unit [308] at the NPDA microservice [306], a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints; and
triggering, by a trigger unit [310] at the NPDA microservice [306], one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.
2. The method as claimed in claim 1, wherein the breached response output is generated during an instantiation phase.
3. The method as claimed in claim 1, wherein the resource threshold limit exceed event is triggered when usage of resource constraints exceeds a predetermined limit.
4. The method as claimed in claim 1, further comprising validating, by a validating unit [312], the resource constraints at the NPDA microservice [306].
5. The method as claimed in claim 4, wherein the resource constraints are validated before triggering the one of scale in or scale out actions.
6. The method as claimed in claim 1, wherein the hysteresis process involves a dynamic adjustment of resource allocation parameters in real-time.

7. The method as claimed in claim 1, wherein the network resources comprise at least one of a virtual network function (VNF), a virtual network function component (VNFC), a containerized network function (CNF), and a containerized network function component (CNFC).
8. The method as claimed in claim 1, wherein the resource constraints comprise at least one from among a central processing unit (CPU), a random access memory (RAM), and a storage level.
9. A system for performing dynamic resource management in a communication network, the system comprising:
a transceiver unit [302] configured to:
receive, at a capacity management platform (CMP) microservice [304], a breached response output comprising a resource threshold limit exceed event;
transmit, from the CMP microservice [304], the resource threshold limit exceed event to a network function virtualization platform decision analytics (NPDA) microservice [306];
an initiating unit [308] connected to at least the transceiver unit [302], wherein the initiating unit [308] is configured to:
initiate, at the NPDA microservice [306], a hysteresis process in response to the resource threshold limit exceed event, wherein the hysteresis process is performed on resource constraints; and
a trigger unit [310] connected to at least the initiating unit [308], wherein the trigger unit [310] is configured to:
trigger, at the NPDA microservice [306], one of scale in or scale out actions associated with network resources, based on the hysteresis process, to manage the network resources.
10. The system as claimed in claim 9, wherein the breached response output is generated
during an instantiation phase.

11. The system as claimed in claim 9, wherein the resource threshold limit exceed event is triggered when usage of resource constraints exceeds a predetermined limit.
12. The system as claimed in claim 9, further comprising a validating unit [312] configured to validate the resource constraints at the NPDA microservice [306].
13. The system as claimed in claim 12, wherein the resource constraints are validated before triggering the one of scale in or scale out actions.
14. The system as claimed in claim 9, wherein the hysteresis process involves a dynamic adjustment of resource allocation parameters in real-time.
15. The system as claimed in claim 9, wherein the network resources comprise at least one of a virtual network function (VNF), a virtual network function component (VNFC), a containerized network function (CNF), and a containerized network function component (CNFC).
16. The system as claimed in claim 9, wherein the resource constraints comprise at least one from among a central processing unit (CPU), a random access memory (RAM), and a storage level.

Documents

Application Documents

# Name Date
1 202321061575-STATEMENT OF UNDERTAKING (FORM 3) [13-09-2023(online)].pdf 2023-09-13
2 202321061575-PROVISIONAL SPECIFICATION [13-09-2023(online)].pdf 2023-09-13
3 202321061575-POWER OF AUTHORITY [13-09-2023(online)].pdf 2023-09-13
4 202321061575-FORM 1 [13-09-2023(online)].pdf 2023-09-13
5 202321061575-FIGURE OF ABSTRACT [13-09-2023(online)].pdf 2023-09-13
6 202321061575-DRAWINGS [13-09-2023(online)].pdf 2023-09-13
7 202321061575-Proof of Right [09-01-2024(online)].pdf 2024-01-09
8 202321061575-FORM-5 [11-09-2024(online)].pdf 2024-09-11
9 202321061575-ENDORSEMENT BY INVENTORS [11-09-2024(online)].pdf 2024-09-11
10 202321061575-DRAWING [11-09-2024(online)].pdf 2024-09-11
11 202321061575-CORRESPONDENCE-OTHERS [11-09-2024(online)].pdf 2024-09-11
12 202321061575-COMPLETE SPECIFICATION [11-09-2024(online)].pdf 2024-09-11
13 202321061575-Request Letter-Correspondence [18-09-2024(online)].pdf 2024-09-18
14 202321061575-Power of Attorney [18-09-2024(online)].pdf 2024-09-18
15 202321061575-Form 1 (Submitted on date of filing) [18-09-2024(online)].pdf 2024-09-18
16 202321061575-Covering Letter [18-09-2024(online)].pdf 2024-09-18
17 202321061575-CERTIFIED COPIES TRANSMISSION TO IB [18-09-2024(online)].pdf 2024-09-18
18 Abstract 1.jpg 2024-10-07
19 202321061575-FORM 3 [07-10-2024(online)].pdf 2024-10-07
20 202321061575-ORIGINAL UR 6(1A) FORM 1 & 26-070125.pdf 2025-01-14