Abstract: The present disclosure provides a system 200 and a method 400 for protecting a network function (NF) against overload. The system 200 ensures that the NF's operation persists even under high traffic conditions by systematically evaluating multiple criteria. The method involves receiving a request message 402, checking if the number of open requests exceeds a threshold 404, and if not, assessing if the message queue size exceeds its threshold 408. If the queue size is within limits, it checks ingress and egress rates against their thresholds 410. Subsequently, it evaluates if the number of sessions in a cache exceeds a threshold 412, and if not, it compares the severity of application congestion to message priority 414. If the congestion severity is lower, it assesses various internal criteria to determine potential overload 416. If all checks are satisfactory, the request message is processed 418 and a response message is sent 420. Fig. 2
FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT JIO PLATFORMS LIMITED
380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure generally relates to means for improving the
operation and performance of a communication network. In particular, the present
disclosure relates to systems and methods for protecting a network function against
overload.
BACKGROUND
[0003] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0004] In a communication network, such as a 5G or 6G network, Network Functions (NFs), such as a Policy Control Function (PCF), a Binding Support Function (BSF), a Charging Function (CHF), a Network Repository Function (NRF), etc., may be susceptible to overload due to internal and external causes. External causes may include signalling storms, network issues causing multiple retransmissions, problems with peer NFs, malicious attacks, etc., while internal causes may include over-utilization or improper utilization of processing and/or memory resources, a number of sessions exceeding capacity, etc. Once overloaded, the NF may experience latency in message exchange, or in severe cases, the NF may crash.
[0005] There is, therefore, a requirement in the art for a means to prevent overloading of a network function.
SUMMARY
[0006] In an exemplary embodiment, a method for protecting a network function (NF) against overload is described. The method comprises receiving, by a processing engine, a request message for open requests; detecting, by the processing engine, whether a number of open requests being processed is greater than a threshold number of requests processed; in response to detecting that the number of open requests being processed is not greater than the threshold number of requests processed, detecting, by the processing engine, whether a size of a message queue is greater than a threshold of the message queue; in response to detecting that the size of the message queue is not greater than the threshold of the message queue, detecting, by the processing engine, whether an ingress rate or an egress rate is greater than a threshold of the ingress rate or the egress rate; in response to detecting that the ingress rate or the egress rate is not greater than the threshold of the ingress rate or the egress rate, detecting, by the processing engine, whether a number of sessions currently in a cache is greater than a threshold number of sessions; in response to detecting that the number of sessions currently in the cache is not greater than the threshold number of sessions, detecting, by the processing engine, whether a severity of congestion at an application is higher than a message priority; in response to detecting that the severity of congestion at the application is not higher than the message priority, detecting, by the processing engine, whether a plurality of internal criteria to assess whether the application is going into an overload state are satisfied; in response to detecting that the plurality of internal criteria are not satisfied, processing, by the processing engine, the request message; and sending, by the processing engine, a response message.
[0007] In some embodiments, the method further comprises rejecting, by the processing engine, the request message in response to detecting any of the following: that the number of open requests being processed is greater than the threshold number of requests processed; that the size of the message queue is greater than the threshold of the message queue; that the ingress rate or the egress rate is greater than the threshold of the ingress rate or the egress rate; that the number of sessions currently in the cache is greater than the threshold number of sessions; that the severity of congestion at the application is higher than the message priority; or that the plurality of internal criteria to assess whether the application is going into an overload state are satisfied.
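The cascade of checks recited in paragraphs [0006] and [0007] can be sketched as follows. This is a minimal, non-limiting illustration in Python: the class names, field names, and threshold values are hypothetical placeholders introduced for the example and are not part of the disclosure.

```python
# Illustrative sketch (not the claimed implementation) of the sequential
# overload checks: each check either rejects the request or falls through
# to the next check; a request passing all checks is processed.

from dataclasses import dataclass

@dataclass
class NfState:
    open_requests: int
    queue_size: int
    ingress_rate: float
    egress_rate: float
    cached_sessions: int
    congestion_severity: int   # compared directly to the message priority, as described
    internal_overload: bool    # result of the internal-criteria assessment

@dataclass
class Thresholds:
    # All values are hypothetical placeholders for illustration.
    open_requests: int = 1000
    queue_size: int = 5000
    ingress_rate: float = 2000.0   # messages per second
    egress_rate: float = 2000.0
    cached_sessions: int = 100000

def handle_request(state: NfState, thresholds: Thresholds, message_priority: int) -> str:
    """Run the cascade of overload checks; return 'processed' or 'rejected'."""
    if state.open_requests > thresholds.open_requests:
        return "rejected"
    if state.queue_size > thresholds.queue_size:
        return "rejected"
    if state.ingress_rate > thresholds.ingress_rate or state.egress_rate > thresholds.egress_rate:
        return "rejected"
    if state.cached_sessions > thresholds.cached_sessions:
        return "rejected"
    if state.congestion_severity > message_priority:
        return "rejected"
    if state.internal_overload:
        return "rejected"
    # All checks passed: process the request and send a response message.
    return "processed"
```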
[0008] In some embodiments, the plurality of internal criteria includes a central processing unit (CPU) usage, a memory consumption, a cache consumption, a number of threads, a database (DB) size, a number of sessions, stale sessions, a plurality of channels, and a plurality of listening ports.
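As a purely illustrative sketch, the internal criteria listed above might be evaluated as shown below; the metric names, the limits, and the any-one-breach policy are assumptions introduced for the example and do not appear in the disclosure.

```python
# Hypothetical evaluation of the internal overload criteria (CPU usage,
# memory, cache, threads, DB size, sessions, stale sessions, etc.).

def internal_criteria_satisfied(metrics: dict, limits: dict) -> bool:
    """Return True when the internal criteria indicate an impending overload."""
    checks = [
        metrics["cpu_usage"] > limits["cpu_usage"],            # CPU usage
        metrics["memory_used"] > limits["memory_used"],        # memory consumption
        metrics["cache_used"] > limits["cache_used"],          # cache consumption
        metrics["thread_count"] > limits["thread_count"],      # number of threads
        metrics["db_size"] > limits["db_size"],                # database size
        metrics["session_count"] > limits["session_count"],    # number of sessions
        metrics["stale_sessions"] > limits["stale_sessions"],  # stale sessions
    ]
    # Here any single breached limit counts as a sign of overload; an
    # implementation could instead require several criteria at once.
    return any(checks)
```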
[0009] In some embodiments, the plurality of NFs is configured to define a plurality of defined message queues for a plurality of defined interfaces with a plurality of defined configurable code.
[0010] In some embodiments, the NF is configured to provide three defined thresholds for raising alarms and corresponding defined alarm thresholds.
[0011] In some embodiments, the method comprises rejecting the request message based on a priority defined by the network function or a user-defined priority, wherein the user-defined priority is based on an interface, a plurality of service operations, or a plurality of message types, wherein the plurality of message types includes the request message, a message answer, or both.
[0012] In another exemplary embodiment, a system for protecting a network function (NF) against overload is described. The system is configured to receive, by a processing engine, a request message for open requests; detect, by the processing engine, whether a number of open requests being processed is greater than a threshold number of requests processed; and store, by a database, the number of open requests.
[0013] In some embodiments, the plurality of internal criteria includes a central processing unit (CPU) usage, a memory consumption, a cache consumption, a number of threads, a database (DB) size, a number of sessions, stale sessions, a plurality of channels, and a plurality of listening ports.
[0014] In some embodiments, the processing engine is further configured to: in response to detecting that the number of open requests being processed is not greater than the threshold number of requests processed, detect whether a size of a message queue is greater than a threshold of the message queue; in response to detecting that the size of the message queue is greater than the threshold of the message queue, reject the request; in response to detecting that the size of the message queue is not greater than the threshold of the message queue, detect whether an ingress rate or an egress rate is greater than a threshold of the ingress rate or the egress rate; in response to detecting that the ingress rate or the egress rate is greater than the threshold of the ingress rate or the egress rate, reject the request; in response to detecting that the ingress rate or the egress rate is not greater than the threshold of the ingress rate or the egress rate, detect whether a number of sessions currently in a cache is greater than a threshold number of sessions; in response to detecting that the number of sessions currently in the cache is greater than the threshold number of sessions, reject the request; in response to detecting that the number of sessions currently in the cache is not greater than the threshold number of sessions, detect whether a severity of congestion at an application is higher than a message priority; in response to detecting that the severity of congestion at the application is higher than the message priority, reject the request; in response to detecting that the severity of congestion at the application is not higher than the message priority, detect whether a plurality of internal criteria to assess whether the application is going into an overload state are satisfied; in response to detecting that the plurality of internal criteria are satisfied, reject the request; and in response to detecting that the plurality of internal criteria are not satisfied, process the request message and send a response message.
[0015] In accordance with one embodiment of the present disclosure, a computer program product is described, comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to execute a method for protecting a network function (NF) against overload, the method comprising: receiving, by a processing engine, a request message for open requests; detecting, by the processing engine, whether a number of open requests being processed is greater than a threshold number of requests processed; in response to detecting that the number of open requests being processed is not greater than the threshold number of requests processed, detecting, by the processing engine, whether a size of a message queue is greater than a threshold of the message queue; in response to detecting that the size of the message queue is not greater than the threshold of the message queue, detecting, by the processing engine, whether an ingress rate or an egress rate is greater than a threshold of the ingress rate or the egress rate; in response to detecting that the ingress rate or the egress rate is not greater than the threshold of the ingress rate or the egress rate, detecting, by the processing engine, whether a number of sessions currently in a cache is greater than a threshold number of sessions; in response to detecting that the number of sessions currently in the cache is not greater than the threshold number of sessions, detecting, by the processing engine, whether a severity of congestion at an application is higher than a message priority; in response to detecting that the severity of congestion at the application is not higher than the message priority, detecting, by the processing engine, whether a plurality of internal criteria to assess whether the application is going into an overload state are satisfied; and in response to detecting that the plurality of internal criteria are not satisfied, processing, by the processing engine, the request message and sending, by the processing engine, a response message.
[0016] In accordance with one embodiment of the present disclosure, a user equipment that is communicatively coupled with a network is disclosed. The coupling comprises: receiving, by a processing engine, a request message for open requests; detecting, by the processing engine, whether a number of open requests being processed is greater than a threshold number of requests processed; in response to detecting that the number of open requests being processed is not greater than the threshold number of requests processed, detecting, by the processing engine, whether a size of a message queue is greater than a threshold of the message queue; in response to detecting that the size of the message queue is not greater than the threshold of the message queue, detecting, by the processing engine, whether an ingress rate or an egress rate is greater than a threshold of the ingress rate or the egress rate; in response to detecting that the ingress rate or the egress rate is not greater than the threshold of the ingress rate or the egress rate, detecting, by the processing engine, whether a number of sessions currently in a cache is greater than a threshold number of sessions; in response to detecting that the number of sessions currently in the cache is not greater than the threshold number of sessions, detecting, by the processing engine, whether a severity of congestion at an application is higher than a message priority; in response to detecting that the severity of congestion at the application is not higher than the message priority, detecting, by the processing engine, whether a plurality of internal criteria to assess whether the application is going into an overload state are satisfied; and in response to detecting that the plurality of internal criteria are not satisfied, processing, by the processing engine, the request message and sending, by the processing engine, a response message.
OBJECTS OF THE INVENTION
[0017] An object of the present invention is to provide a system and a method
for protecting a network function against overload.
[0018] Another object of the present invention is to provide a system and a method for facilitating a network function to selectively reject incoming traffic during periods of high traffic, based on predefined parameters.
[0019] Another object of the present invention is to provide a system and a method for facilitating a network function to revert to normal operations after detecting that a period of high traffic is over.
[0020] Another object of the present invention is to provide a system and a method for facilitating a network function to avoid imposing additional overload traffic on peer nodes.
[0021] Another object of the present invention is to provide a system and a method for facilitating a network function to protect key operating resources during periods of high traffic.
[0022] Another object of the present invention is to provide a system and a method for facilitating a network function to operate even during internal inconsistencies or problems arising within the network function.
BRIEF DESCRIPTION OF DRAWINGS
[0023] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0024] FIG. 1 illustrates an exemplary network architecture in which or with which embodiments of the present disclosure may be implemented.
[0025] FIG. 2 illustrates an exemplary block diagram of the system for protecting a network function against overload, in accordance with an embodiment of the present disclosure.
[0026] FIGs. 3A and 3B illustrate exemplary schematic diagrams of an architecture of the system for protecting a network function against overload, in accordance with an embodiment of the present disclosure.
[0027] FIG. 4 illustrates a schematic flow diagram of a method for protecting a network function against overload, in accordance with an embodiment of the present disclosure.
[0028] FIG. 5 illustrates an exemplary computer system in which or with which embodiments of the present disclosure may be implemented, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0029] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0030] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0031] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0032] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0033] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
[0034] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “in some embodiments” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or
characteristics may be combined in any suitable manner in one or more embodiments.
[0035] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0036] The various embodiments of the present disclosure will be explained in detail with reference to FIGS. 1 – 5.
[0037] FIG. 1 illustrates an exemplary network architecture 100 in which or with which embodiments of the present disclosure may be implemented. Referring to FIG. 1, the network architecture 100 may include one or more computing devices or user equipment (104-1, 104-2…104-N) associated with one or more subscribers (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more subscribers (102-1, 102-2…102-N) may be individually referred to as the subscriber 102 and collectively referred to as the subscribers 102. Similarly, a person of ordinary skill in the art will understand that the one or more user equipment (104-1, 104-2…104-N) may be individually referred to as the user equipment 104 and collectively referred to as the user equipment 104. A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipment 104 are depicted in FIG. 1, any number of user equipment 104 may be included without departing from the scope of the ongoing description.
[0038] In some embodiments, the user equipment 104 may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart
phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In some embodiments, the user equipment 104 may include, but is not limited to, any electrical, electronic, electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment 104 may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the subscriber 102 or the entity, such as a touch pad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment 104 may not be restricted to the mentioned devices and various other devices may be used.
[0039] Referring to FIG. 1, the user equipment 104 may communicate with a system 200, for example, a system for protecting a network function against overload. In some embodiments, the network 106 may include at least one of a Fifth Generation (5G) network, a 6G network, or the like. The network 106 may enable the user equipment 104 to communicate with other devices in the network architecture 100 and/or with the system 108. The network 106 may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network 106 may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0040] In another exemplary embodiment, the centralized server 112 may include or comprise, by way of example but not limitation, one or more of: a stand-
alone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
[0041] Although FIG. 1 shows exemplary components of the network architecture 100, in other embodiments, the network architecture 100 may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture 100 may perform functions described as being performed by one or more other components of the network architecture 100.
[0042] FIG. 2 illustrates an exemplary block diagram of the system 200 for protecting the network function against overload. The system 200 may include one
or more processors 202 and a memory 204 communicably coupled to the one or more processors 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 200. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0043] In some embodiments, the system 200 may include an interface(s) 206. The interface(s) 206 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the
like. The interface(s) 206 may facilitate communication of the system 200. The interface(s) 206 may also provide a communication pathway for one or more components of the system 200. Examples of such components may include, but are not limited to, processing unit/engine(s) 210 and a database 220.
[0044] The processing unit/engine(s) 210 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 210. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 210 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 210 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 210. In such examples, the system 200 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 200 and the processing resource. In other examples, the processing engine(s) 210 may be implemented by electronic circuitry.
[0045] In an embodiment, the system 200 may be implemented at the network function (NF) level of the communication network 106. In some embodiments, the NF may include any one or a combination of a policy control function (PCF), a binding support function (BSF), a charging function (CHF), and a network repository function (NRF).
[0046] The system 200 may be configured to facilitate or cause the NF to perform a set of functions to prevent overload. The NF may be configured to implement one or more solutions to address internal causes of overload, external causes of overload, or both.
[0047] In the event of external causes for overloading, the NF may define a priority for messages that the NF receives. The priority may be used to make
decisions based on which incoming messages may be rejected or accepted to prevent overloading of the NF.
[0048] In some embodiments, the priority may be defined based on any one or a combination of interface, service, service operation, message type, origin IP/port, internal message priority, header, etc.
[0049] In some embodiments, the service operation may include a command code for a Diameter error.
[0050] In some embodiments, the message type may include a request message, an answer message, or a combination of both.
[0051] In some embodiments, the origin of the message may be from multiple IP:port combinations, with wildcard support for IPv4 and IPv6 and port ranges.
[0052] In some embodiments, the internal message priority may be defined between values of 0 and 31, with 0 corresponding to the highest priority and 31 corresponding to the lowest priority. In one instance, if no internal priority is pre-defined and the message does not carry a specific header (e.g., “3gpp-sbi-message-priority”), the message may be assigned a default priority of 24. In another instance, if no internal priority is pre-defined but the message carries a specific header (e.g., “3gpp-sbi-message-priority”), the message may be assigned a priority according to the header. However, if an internal priority is assigned, that priority may be considered over the value assigned according to the header.
[0053] In some embodiments, answer messages may be assigned a higher default priority. However, in the presence of a pre-defined internal message priority or header value, the answer messages may be assigned a priority accordingly. Such assignment of priority may prevent the answer messages from being rejected first.
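The priority-resolution order described in paragraphs [0052] and [0053] can be sketched as follows. The function name and the default answer priority of 10 are assumptions introduced for illustration; the disclosure states only that answer messages receive a "higher" default priority.

```python
# Illustrative resolution of a message priority in the range 0 (highest)
# to 31 (lowest): a pre-defined internal priority wins over the
# "3gpp-sbi-message-priority" header value, which wins over the default.

from typing import Optional

DEFAULT_REQUEST_PRIORITY = 24   # default stated in the disclosure
DEFAULT_ANSWER_PRIORITY = 10    # assumed "higher default priority" for answers

def resolve_priority(internal_priority: Optional[int],
                     header_priority: Optional[int],
                     is_answer: bool) -> int:
    """Resolve a message priority (0 = highest, 31 = lowest)."""
    if internal_priority is not None:
        return internal_priority      # internal priority overrides the header
    if header_priority is not None:
        return header_priority        # fall back to the SBI header value
    return DEFAULT_ANSWER_PRIORITY if is_answer else DEFAULT_REQUEST_PRIORITY
```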
[0054] Further, in some embodiments, the congestion level may include a value. The value may be any one or a minor, major or critical value. The value may indicate how prone a message is to be rejected at levels above a certain congestion level at the NF. In the event of congestion, the message with a lower priority may 30 be rejected before a message with a higher priority level. For instance, a message with a congestion level of minor may be rejected once the congestion level at the
NF reaches a minor state. Furthermore, in some embodiments, all messages with undefined congestion levels may be provided the minor priority level.
[0055] The NF may be configured to define ingress and egress throttling rates. Specifically, the NF may be configured to define the ingress throttling rate by specifying a maximum capacity on the rate limit. For instance, the NF may calculate current transactions per second (TPS) based on a sliding window. In an embodiment, the sliding window may be about 10 seconds. To determine the TPS at a time X+10 s, the NF may count the number of messages received from time X to time X+10 s and divide the obtained value by 10.
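The sliding-window TPS computation described above can be sketched as follows; the class and method names are illustrative assumptions, not part of the disclosure.

```python
from collections import deque
import time

class SlidingWindowTps:
    """Estimate transactions per second over a sliding window (e.g. 10 s).

    TPS at time X+W is the number of messages received between X and
    X+W, divided by the window length W.
    """
    def __init__(self, window_seconds=10.0):
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now=None):
        # Record one message arrival.
        self.timestamps.append(time.monotonic() if now is None else now)

    def tps(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop arrivals that fell out of the window before counting.
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window
```

Passing explicit `now` values makes the calculation deterministic for testing; in live use the monotonic clock is read automatically.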
[0056] The NF may further provide three different user-defined percentage values of the rate limit per interface. The NF may raise alerts for minor, major, and critical levels using the user-defined percentage values. The NF may further define user-defined percentage values of the rate limit at which the minor, major, and/or critical alerts may be degraded or removed. The NF may raise or clear alerts if a predefined number of subsequent measurements are respectively above or below the threshold value. In the event of a minor alert level, all messages with a congestion level in the priority set to minor may be prone to rejection with user-configurable answers. In the event of a major level alert, all messages with a congestion level in the priority set to minor may be first to be rejected, followed by messages with a congestion level set to major. However, messages with congestion levels in the priority set to critical may not be rejected. In the event of a critical level alert, all messages with a congestion level in the priority set to minor may be rejected first, followed by messages with a congestion level set to major, and lastly, critical messages may be rejected. In the event the traffic rate goes beyond 100%, all messages may be discarded based on priority and congestion level sets. The NF may maintain short logs for messages being discarded/rejected due to throttling. The NF may further include counters for the current ingress TPS, the throttled number of messages at minor, major, and critical levels, messages discarded, etc.
[0057] The NF may be configured to define a rate limit for egress as per maximum capacities. The NF may determine a current TPS based on the sliding window over a duration of time (e.g., 10 s). The NF may further be configured to
provide three user-defined percentage values of the rate limit, using which the NF may raise alerts for minor, major, and critical levels. The NF may raise or clear alerts if a predefined number of subsequent measurements are respectively above or below the threshold value. The NF may start throttling egress messages only after the critical rate. The NF may maintain short logs for messages being discarded/rejected due to throttling. The NF may further include counters for the current egress TPS, the throttled number of messages at minor, major, and critical levels, messages discarded, etc.
[0058] The NF may be further configured to keep track of pending
transactions. Specifically, both ingress and egress pending transactions may be tracked. Ingress pending transactions may refer to requests received by the NF for which an answer message has not been transmitted. Egress pending transactions may refer to requests sent to peer nodes for which an answer has not been received. The NF may determine or calculate the pending transactions after a predetermined runtime. A user may configure the runtime separately for ingress and egress times.
[0059] The NF may be configured to track the number of pending transactions for both incoming and outgoing messages. If the pending transactions exceed or fall below the specified count, a breach of the threshold is determined. In such an instance, the NF may raise the corresponding minor, major, or critical alerts. In one example, the NF calculates/captures the pending transaction count after every user-configurable runtime configuration time (range 100 ms to 5 sec, default value 1 sec) for both the ingress and egress pending transactions. The NF can also define a count such that a threshold is assumed to be breached/cleared only if the number of consecutive pending-transaction measurements (separately for ingress and egress pending transactions) above/below the threshold reaches this count. The NF can define multiple thresholds (minimum 3). In case thresholds for ingress/egress pending transactions are breached, the NF may raise corresponding minor/major/critical alerts. Further, the NF may also provide the egress/ingress threshold at which the corresponding alert will be cleared, and the corresponding action will also be taken.
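The consecutive-measurement breach/clear behaviour described for the pending-transaction audit might be implemented along these lines; all names, levels, and the required sample count are illustrative assumptions.

```python
class PendingTxnThreshold:
    """Threshold with consecutive-sample hysteresis for pending transactions.

    A threshold is treated as breached only after `required_count`
    consecutive measurements above it, and cleared only after the same
    number of consecutive measurements at or below the clear level.
    """
    def __init__(self, breach_level, clear_level, required_count=3):
        self.breach_level = breach_level
        self.clear_level = clear_level
        self.required_count = required_count
        self.breached = False
        self._above = 0
        self._below = 0

    def sample(self, pending):
        # Count consecutive measurements on either side of the levels.
        if pending > self.breach_level:
            self._above += 1
            self._below = 0
        elif pending <= self.clear_level:
            self._below += 1
            self._above = 0
        else:
            self._above = self._below = 0
        if not self.breached and self._above >= self.required_count:
            self.breached = True   # raise the minor/major/critical alert here
        elif self.breached and self._below >= self.required_count:
            self.breached = False  # clear the alert and stop countermeasures
        return self.breached
```

Separate instances would be kept for ingress and egress pending transactions, one per configured threshold.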
[0060] The NF may further provide thresholds for ingress and egress pending transactions at which the respective alerts may be cleared, and corresponding
countermeasures may also be ceased. For each ingress threshold, the NF may define a percentage of traffic beyond which further incoming messages may be rejected/discarded. The reject/discard behaviour may be configurable for all thresholds separately. Reject/discard may be based on the priority. For each egress threshold, the NF may define a percentage of traffic reduction. The NF may be able to select an alternate destination NF for sending the traffic. For example, if the ingress pending transactions exceed 80% of the defined threshold, minor alerts may be triggered, initiating actions based on message priority and congestion levels. The reject/discard behaviour may be configurable for all thresholds separately, ensuring efficient traffic management and resource utilization. Similarly, for egress transactions, the NF may define thresholds for reducing traffic and may selectively redirect traffic to alternate destinations based on congestion levels and message priorities.
[0061] The NF may further be configured to track ongoing sessions and support alarm generation and message rejection based on the total number of sessions currently served by the NF. The NF may be able to define a maximum number of sessions supported, at least three different thresholds at which alarms may be generated, and corresponding abatement/clear thresholds. Further, at the defined thresholds, the NF may be able to reject/discard any new message based on priority. The control may be available for the overall and the different sessions supported by the NF. The NF may provide a way to check current session utilization from the command line and offer the corresponding counters. The counters may be provided for any message rejected or discarded. For instance, if the total session count reaches 80% of the maximum sessions supported, the NF may trigger major alarms and begin rejecting new messages with lower priority levels first to mitigate overload conditions. The NF provides command-line interfaces to check current session utilization and maintains counters for rejected or discarded messages, aiding in network management and optimization.
[0062] In some cases, the NF may have internal issues, which may cause the system 200 to process a message at a reduced rate, potentially resulting in cases where an incoming queue of messages may build up to a point where messages start getting timed out. To overcome such an issue, the NF may further define a message
queue, which may ensure that if the queue gets filled, new incoming messages to the queue may be rejected/discarded directly by the NF with a configurable code, thus potentially preventing timeout cases. Priority-based rejection may be applicable in this case. The NF may provide three thresholds for raising minor/major/critical alarms and corresponding alarm abatement/clearing thresholds. The NF may further provide a means to check a current message queue size and a counter that captures the maximum queue size during the counter duration. For example, if the message queue size exceeds 1000 messages, critical alarms may be raised, prompting the NF to discard lower-priority messages first.
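A minimal sketch of such a bounded ingress queue follows; the class name, reject code, and alarm fill fractions are illustrative assumptions rather than values from the disclosure.

```python
class BoundedMessageQueue:
    """Bounded ingress queue that rejects new messages when full.

    A full queue rejects immediately with a configurable code instead
    of letting the message sit until it times out.
    """
    def __init__(self, max_size=1000, reject_code=503):
        self.max_size = max_size
        self.reject_code = reject_code
        self.queue = []

    def offer(self, message):
        if len(self.queue) >= self.max_size:
            return ("rejected", self.reject_code)  # avoid a later timeout
        self.queue.append(message)
        return ("queued", None)

    def alarm_level(self, thresholds=(0.6, 0.8, 0.95)):
        # Three raise thresholds for minor/major/critical alarms.
        fill = len(self.queue) / self.max_size
        minor, major, critical = thresholds
        if fill >= critical:
            return "critical"
        if fill >= major:
            return "major"
        if fill >= minor:
            return "minor"
        return None
```

Priority-based rejection could be layered on `offer` by comparing the incoming message's priority against those already queued.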
[0063] The NF may further provide counters from the stack and transport layer, including a counter for the number of retransmissions. The NF may track this number although the NF may not directly control the stack and transport layer.
[0064] Further, the NF may support Load Condition Indicators (LCI), Overload Condition Indicators (OCI), and message priority headers. The headers may be defined as “3gpp-sbi-message-priority”. The message priority may be considered in combination with the internal message priority of the messages. Furthermore, the NF may provide redirect information as an optional feature in case of overload.
[0065] The NF may further provide HTTP2 and Diameter stack-based support. In some embodiments, the NF may provide an application program interface (API) for fetching the ingress and egress TPS. The NF may further provide an API for the open session count. The NF may provide an API for retrieving the pending ingress and egress buffer size. Further, the NF may provide an API for retrieving stats for the stack, including the number of messages received, sent, responses sent, responses received, failure count, success count, and timeout count. The NF may also provide an API to get latency slots for messages, i.e., processing time for messages received. The NF may also provide an API to configure the maximum ingress message queue. The message queue may be incremented on receiving a message and decremented when the stack transmits a call-back to the application. When the message queue is full, the stack may reject the incoming messages with
a configurable error code. When the queue size decreases from the maximum value, the stack may accept the incoming messages.
[0066] In the event of internal causes for overloading, the NF may be configured to monitor internal resource usage, such as processing and memory resource usage. The NF may also be configured to deploy countermeasures in an event of overutilization of such resources. Some of the critical internal resources may be the central processing unit (CPU), memory, cache, number of threads, size of database, number of sessions, stale sessions, channels, listening ports, etc. The NF may include an internal audit service configured to monitor such resources and deploy appropriate action to prevent the NF from crashing or becoming non-responsive.
[0067] For example, if the CPU level rises above a defined threshold, then an incoming message is rejected with a defined error message, with the creation of an alert. In case there is a further increase in CPU utilization above a specific threshold, then the messages are discarded silently. There is also the option to reject with a configured result code. The NF may maintain counters/logs for messages rejected/discarded as well as for created alerts. Similar approaches are taken in the case of high memory utilization.
[0068] Further, the NF may periodically monitor the utilization of the resources. In order for that to occur, there is a user-configurable timer with a default value. When the NF receives a user-configurable number of samples above a threshold, based on the defined threshold and corresponding action, the NF may either drop or reject the messages with a user-configurable error code/Diameter error. This condition may be reversed once the NF receives a configurable number of measurements of the resource value below the user-configured abatement/reversal thresholds.
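The tiered CPU countermeasures above can be sketched as a simple decision function; the numeric levels are illustrative assumptions, not values from the disclosure.

```python
def cpu_action(cpu_percent, reject_level=80.0, discard_level=95.0):
    """Choose a countermeasure from the current CPU utilization.

    Between the two levels, incoming messages are rejected with a
    configured error code (and an alert is raised); above the higher
    level they are silently discarded.
    """
    if cpu_percent >= discard_level:
        return "discard"   # drop silently, maintain a counter/log entry
    if cpu_percent >= reject_level:
        return "reject"    # answer with the configured error code, raise alert
    return "process"
```

In practice each sample would also feed the consecutive-measurement logic of [0068], so a single spike above a level does not trigger an action.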
[0069] In the case of stale sessions, in addition to the already implemented stale session audit, old sessions from a cache may be deleted if the number of sessions approaches a predefined limit. Similarly, the audit service may clear all unused threads. The audit service may also monitor non-responsive ports and, when needed, generate appropriate alerts and deploy necessary clearing action.
[0070] Thus, the system 200 facilitates an operation of the NF to persist even
in an event of high traffic of incoming messages, such as during a signalling storm,
or during other network fluctuations. Further, once the high traffic of incoming
messages abates, the NF may be configured to revert to normal operations as soon as possible. Specifically, the NF may be configured to determine that the high traffic
period is over, and that an instance may arrive where normal operation may begin.
[0071] Further, the NF may not cause a signalling storm towards its peers due
to reasons such as high stale session checks or a sudden traffic spike for real
subscribers, barring, etc.
[0072] The NF may be configured to protect its key resources. For example,
even when the NF is facing 5-6 times the maximum traffic supported by the NF, the
key operational features of the NF, such as the command line interface (CLI),
command execution, alarms, counters, and other essential logs may not be
adversely impacted.
[0073] Furthermore, internal problems in the NF, such as problems between
the network and stack, between the stack and application, etc., may be captured
properly using logs and/or counters.
[0074] FIGS. 3A and 3B illustrate exemplary schematic diagrams 300, 350 of
an architecture of the system 200 for protecting the network function against overload.
[0075] In an aspect, request ingress may refer to incoming data traffic entering
a transport layer.
[0076] In an aspect, request egress may refer to outgoing data traffic leaving
the transport layer.
[0077] Figure 3A illustrates the control flow of request processing across
different layers.
[0078] At step 302, the flow begins when an ingress request lands at the
listening port of the transport layer. The transport layer receives the ingress request
and forwards it to the stack.
[0079] At step 304, the stack directs request A to the application layer. The
request “A” might simply indicate the first in a sequence of requests.
[0080] At step 306, the application layer processes the request A.
[0081] At step 308, the application layer composes a response F.
[0082] At step 310, the response F is returned to the stack.
[0083] At step 312, the stack then forwards the response egress to the transport layer.
[0084] Finally, the transport layer sends the response egress over the network.
- Transport Layer: Receives the initial request and the final response.
- Stack: Forwards requests to and from the application layer.
- Application Layer: Processes the request and generates the response.
The latency at the application layer is defined as the time duration between when
the request is received from the stack and when the corresponding response is
handed back to the stack. The latency at the stack is the time duration between when
the request is received and when the response is sent out.
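These two latency definitions can be expressed directly; the four timestamps are assumed to be taken at the hand-off points of Figure 3A.

```python
def latencies(req_at_stack, req_to_app, resp_from_app, resp_from_stack):
    """Compute the two latencies defined for Figure 3A.

    Application latency covers only the application's processing time;
    stack latency covers the whole round trip at the stack. Queuing at
    the stack (Figure 3B) appears as the difference between the two.
    """
    app_latency = resp_from_app - req_to_app
    stack_latency = resp_from_stack - req_at_stack
    return app_latency, stack_latency
```

When the application is the bottleneck, the gap between stack latency and application latency grows with the queue depth at the stack.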
[0085] Figure 3B extends the scenario depicted in Figure 3A by illustrating what happens when there are multiple requests queuing up at the stack due to the
limited processing capacity of the application layer. This queuing can lead to an
overload condition, where the stack is unable to process all incoming requests
promptly due to the backlog created by the slower processing speed of the
application layer. This scenario highlights the importance of managing processing capacities and queues to prevent overload and ensure efficient request
handling.
[0086] FIG. 4 illustrates a method 400 flow diagram for protecting the network
function against overload.
[0087] At step 402, the method 400 is configured to receive, by the processing engine 210, a request message for open requests. This initial step involves the
system 200 receiving incoming messages that request network services. The request
message likely indicates a request for the NF to perform some action.
[0088] At step 404, the method 400 is configured to detect, by the processing engine 210, whether the number of open requests being processed is greater than a threshold of the number of requests processed.
The system 200 checks how many requests the NF is currently handling, referred
to as open requests. The system 200 compares this number to a predefined threshold value, known as the number of requests processed. If the number of open requests is not greater than the threshold, the method proceeds to the next step, step 408. However, if the number of open requests exceeds the threshold, it suggests a potential overload. In this case, the method 400 might lead to step 406, where the request is rejected.
[0089] At step 408, the method is configured to detect, by the processing engine 210, whether the size of a message queue is greater than a threshold of the message queue. In response to detecting that the number of open requests being processed is not greater than the threshold of the number of requests processed, the processing engine 210 checks whether the size of an internal message queue is greater than a threshold of the message queue. Only if the previous check 404 passed (open requests below threshold), the system 200 now checks the size of the internal message queue. This queue holds a number of incoming messages waiting to be processed. The system 200 compares the queue size to a predefined threshold (message queue). If the queue size is not greater than the threshold, the method proceeds to the next step 410. However, if the queue size exceeds the threshold, it suggests potential overload due to a backlog of messages. In this case, the method might jump to step 406, where the request is rejected.
[0090] At step 410, the method is configured to detect, by the processing engine 210, whether the ingress rate or the egress rate is greater than a threshold of ingress rate or egress rate. In response to detecting that the size of the message queue is not greater than the threshold of the message queue, the processing engine 210 checks whether an ingress rate or an egress rate is greater than a threshold of ingress rate or egress rate. Only if the previous check in step 408 passed (queue size below threshold), the system 200 now checks the data transfer rates. It examines both the ingress rate (incoming data) and the egress rate (outgoing data). The system 200 compares these rates to predefined thresholds associated with the ingress rate or egress rate. If both rates are not greater than their respective thresholds, the method 400 proceeds to the next step 412. However, if either rate exceeds its threshold, it
suggests potential overload due to excessive data traffic. In this case, the method might jump to step 406, where the request is rejected.
[0091] At step 412, the method is configured to detect, by the processing engine 210, whether the number of sessions currently in a cache is greater than a threshold of the number of sessions. In response to detecting that the ingress rate or the egress rate is not greater than the threshold of ingress rate or egress rate, the processing engine 210 checks whether the number of sessions currently in a cache is greater than a threshold of the number of sessions. Only if the previous check 410 passed (data rates below thresholds), the system 200 now checks the number of active sessions. The system 200 retrieves the number of sessions stored in its cache (ongoing connections) and compares this number to a predefined threshold (number of sessions). If the number of sessions is not greater than the threshold, the method proceeds to the next step 414. However, if the number of sessions exceeds the threshold, it suggests potential overload due to too many active connections. In this case, the method might jump to step 406, where the request is rejected.
[0092] At step 414, the method is configured to detect, by the processing engine 210, whether the severity of congestion at an application is higher than the message priority. The severity of congestion may be evaluated using predefined levels: minor, major, and critical, like the message priority levels (0 to 31, with 0 being the highest priority). The messages may be prioritized numerically, with 0 being the highest and 31 the lowest. Likewise, messages are prone to rejection based on a current congestion level of the system 200. For instance, messages with a minor level may be rejected when the congestion level of the system 200 is above minor. The default priority is set at 24 if not specified, and answers typically receive a priority of 15. When the processing engine 210 detects that the number of sessions currently in a cache is not greater than the threshold, it assesses whether the application’s congestion severity surpasses the message priority. The congestion severity levels and message priority levels are comparable because they both use a hierarchical classification to prioritize or reject messages. Messages are assigned priorities based on various criteria such as interface, service, message type, origin IP/port, and the presence of the “3gpp-sbi-message-priority” header. If the congestion
severity level (minor, major, critical) is higher than the message priority, this indicates the application may be struggling to process the message, leading to the rejection of the message (step 406). Conversely, if the congestion severity is not higher than the message priority, it suggests the application can handle the request, and the method proceeds to the next step (416).
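One possible reading of the step-414 comparison is sketched below. The mapping between the severity levels and the 0-31 priority scale, and the priority cutoff, are assumptions made for illustration, since the specification leaves the exact comparison open.

```python
# Ordered congestion severities; a higher rank means more severe.
SEVERITY = {"none": 0, "minor": 1, "major": 2, "critical": 3}

def should_reject(current_congestion, message_congestion_level, message_priority,
                  priority_cutoff=24):
    """Sketch of the step-414 check under stated assumptions.

    A message tagged with a congestion level becomes prone to rejection
    once the NF's congestion reaches that level ([0054]); among such
    messages, only those at or below the assumed priority cutoff
    (24 = the default request priority) are rejected first.
    """
    if SEVERITY[current_congestion] < SEVERITY[message_congestion_level]:
        return False  # congestion is still below the message's tagged level
    return message_priority >= priority_cutoff  # lowest priorities go first
```

Answers (default priority 15 per the paragraph above) survive longer than default-priority requests under this reading, which matches the intent of [0053].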
[0093] At step 416, the method is configured to detect, by the processing engine 210, whether a plurality of internal criteria to assess if the application is going into an overload state are satisfied. In response to detecting that the severity of congestion at an application is not higher than the message priority, the processing engine 210 then checks whether a plurality of internal criteria to assess if the application is going into an overload state are satisfied. This step involves a more comprehensive check for potential overload within the NF itself. The system 200 likely monitors various internal metrics like CPU usage, memory usage, or the number of pending transactions, which are referred to as a plurality of internal criteria. The system 200 may evaluate whether a predefined set of conditions (thresholds) for these internal criteria are met. If none of the internal criteria indicate overload, the method proceeds to process the request message at step 418. However, if any of the internal criteria exceed their thresholds, it suggests the NF itself is approaching overload, and the method might jump to step 406.
[0094] At step 418, the method is configured to process, by the processing engine 210, the request message. In response to detecting that the plurality of internal criteria to assess if the application is going into an overload state are not satisfied, the processing engine 210 proceeds to process the request message 418. This step only occurs if all previous checks passed, including those for open requests, queue size, data rates, session count, application congestion, and internal criteria. It signifies that the NF can handle the request message without risking overload. The system 200 may now process the request message as intended, likely performing the requested action.
[0095] At step 420, the method is configured to send, by the processing engine 210, a response message. This final step involves sending back a response to the requester, completing the transaction. This response message likely contains the
results of the requested action or any relevant information for the application. The system 200 may send the response message back to the application that sent the original request.
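The overall check cascade of method 400 (steps 402-420) may be summarized in a sketch like the following; every attribute name, default threshold, and the rank encoding are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class NfState:
    """Illustrative snapshot of the counters consulted by method 400."""
    open_requests: int = 0
    queue_size: int = 0
    ingress_rate: float = 0.0
    egress_rate: float = 0.0
    cached_sessions: int = 0
    congestion_severity: int = 0   # 0 none, 1 minor, 2 major, 3 critical
    internal_overload: bool = False
    open_requests_threshold: int = 100
    queue_threshold: int = 1000
    rate_threshold: float = 500.0
    session_threshold: int = 10000

def handle_request(nf, message_rank):
    """Walk the step 404-416 checks; any breach takes the reject path (406)."""
    if nf.open_requests > nf.open_requests_threshold:            # step 404
        return "reject"
    if nf.queue_size > nf.queue_threshold:                       # step 408
        return "reject"
    if nf.ingress_rate > nf.rate_threshold or \
       nf.egress_rate > nf.rate_threshold:                       # step 410
        return "reject"
    if nf.cached_sessions > nf.session_threshold:                # step 412
        return "reject"
    if nf.congestion_severity > message_rank:                    # step 414
        return "reject"
    if nf.internal_overload:                                     # step 416
        return "reject"
    return "process-and-respond"                                 # steps 418, 420
```

The ordering mirrors the flow diagram: cheap counter comparisons first, the broader internal-criteria audit last, so a request is rejected as early as possible when the NF is loaded.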
[0096] FIG. 5 illustrates an exemplary computer system 500 in which or with which embodiments of the present disclosure may be implemented. The computer system 500 may include an external storage device 510, a bus 520, a main memory 530, a read-only memory 540, a mass storage device 550, a communication port(s) 560, and a processor 570. A person skilled in the art will appreciate that the computer system 500 may include more than one processor and communication
ports. The processor 570 may include various modules associated with embodiments of the present disclosure. The communication port(s) 560 may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fibre, a serial port, a parallel port, or other existing or future ports. The communication port(s) 560 may be chosen
depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system 500 connects.
[0097] In some embodiments, the main memory 530 may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory 540 may be any static storage device(s) e.g., but not limited
to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor 570. The mass storage device 550 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment
(PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0098] In some embodiments, the bus 520 may communicatively couple the processor(s) 570 with the other memory, storage, and communication blocks. The
bus 520 may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB, or the like, for
connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor 570 to the computer system 500.
[0099] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device, may also be coupled to the bus 520 to support direct operator interaction with the computer system 500. Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) 560. The components described above are meant only to exemplify various possibilities. In no way should the
aforementioned exemplary computer system 500 limit the scope of the present disclosure.
[0100] In accordance with one embodiment of the present disclosure, a user equipment that is communicatively coupled with a network is disclosed. The coupling comprises receiving, by a processing engine, a request message for open requests; detecting, by the processing engine, whether the number of open requests being processed is greater than a threshold of the number of requests processed; in response to detecting that the number of open requests being processed is not greater than the threshold of the number of requests processed, detecting, by the processor, whether the size of a message queue is greater than a threshold of the message queue; in response to detecting that the size of the message queue is not greater than the threshold of the message queue, detecting, by the processor, whether an ingress rate or an egress rate is greater than a threshold of ingress rate or egress rate; in response to detecting that the ingress rate or the egress rate is not greater than the threshold of ingress rate or egress rate, detecting, by the processor, whether a number of sessions currently in a cache is greater than a threshold of the number of sessions; in response to detecting that the number of sessions currently in the cache is not greater than the threshold of the number of sessions, detecting, by the processor, whether the severity of congestion at an application is higher than a message priority; in response to detecting that the severity of congestion at the application is not higher than the message priority, detecting, by the processor, whether a plurality of internal criteria to assess if the application is
going into an overload state are satisfied; and in response to detecting that the plurality of internal criteria to assess if the application is going into an overload state are not satisfied, processing, by the processor, the request message and sending, by the processor, a response message.
[0101] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0102] The present disclosure provides technical advancement related to protecting network functions (NFs) from overload conditions. Positioned at the NF level within communication networks, it prioritizes and regulates message traffic based on criteria like interface and service. In the present disclosure, key advancements include ingress and egress throttling mechanisms, real-time transaction monitoring, and selective message rejection during congestion to maintain essential operations. The present disclosure also monitors pending transactions, tracks active sessions, and addresses internal issues such as CPU and memory overutilization. The present disclosure ensures robust NF performance under high-demand scenarios and quick recovery to normal operations post-overload.
ADVANTAGES OF INVENTION
[0103] The present invention provides a system and a method for protecting a
network function against overload.
[0104] The present invention provides a system and a method for facilitating a network function to selectively reject incoming traffic during high traffic periods, based on predefined parameters.
[0105] The present invention provides a system and a method for facilitating a network function to revert to normal operations after detecting that a period of high traffic is completed or over.
[0106] The present invention provides a system and a method for facilitating a network function to prevent additional overloading traffic to peer nodes.
[0107] The present invention provides a system and a method for facilitating a network function to protect key operating resources during periods of high traffic.
[0108] The present invention provides a system and a method for facilitating a network function to operate even during internal inconsistencies or problems arising within the network function.
We Claim:
1. A method (400) for protecting a network function (NF) against overload, the method (400) comprising:
receiving, by a processing engine (210), a request message for open requests (402);
detecting, by the processing engine (210), whether number of open requests being processed is greater than a threshold of number of requests processed (404);
in response to detecting that the number of open requests being processed are not greater than the threshold of number of requests processed, detecting, by the processing engine (210), whether size of a message queue is greater than a threshold of message queue (408);
in response to detecting that the size of the message queue is not greater than the threshold of message queue, detecting, by the processing engine (210), whether an ingress rate or an egress rate is greater than a threshold of ingress rate or egress rate (410);
in response to detecting that the ingress rate or the egress rate is not greater than the threshold of ingress rate or egress rate, detecting, by the processing engine (210), whether a number of sessions currently in a cache is greater than a threshold of number of sessions (412);
in response to detecting that the number of sessions currently in the cache is not greater than the threshold of number of sessions, detecting, by the processing engine (210), whether severity of congestion at an application is higher than a message priority (414); in response to detecting that the severity of congestion at the application is not higher than the message priority, detecting, by the processing engine (210), whether a plurality of internal criteria to assess if the application is going into overload state are satisfied (416);
in response to detecting that the plurality of internal criteria to assess if the application is going into overload state are not satisfied, processing, by the processing engine (210), the request message (418); and
sending, by the processing engine (210), a response message (420).
2. The method as claimed in claim 1, further comprising rejecting (406), by the processing engine (210), the request message:
in response to detecting that the number of open requests being processed is greater than the threshold of number of requests processed;
in response to detecting that the size of the message queue is greater than the threshold of message queue;
in response to detecting that the ingress rate or the egress rate is greater than the threshold of ingress rate or egress rate;
in response to detecting that the number of sessions currently in the cache is greater than the threshold of number of sessions;
in response to detecting that the severity of congestion at the application is higher than the message priority; or
in response to detecting that the plurality of internal criteria to assess if the application is going into overload state are satisfied.
3. The method as claimed in claim 1, wherein the plurality of internal criteria includes a central processing unit (CPU) usage, a memory consumption, a cache consumption, a number of threads, a database (DB) size, the number of sessions, stale sessions, a plurality of channels, and a plurality of listening ports.
4. The method as claimed in claim 1, wherein a plurality of NFs is configured to define a plurality of defined message queues for a plurality of defined interfaces with a plurality of defined configurable codes.
5. The method as claimed in claim 4, wherein the NF is configured to provide three defined thresholds for raising alarms and corresponding defined alarm thresholds.
6. The method as claimed in claim 1, further comprising rejecting the request message based on a priority defined by the NFs or a user defined priority, wherein the user defined priority is based on an interface, a plurality of service operations, and a plurality of message types, wherein the plurality of message types includes the request message, a message answer, or both.
7. A system (200) for protecting a network function (NF) against overload, the system (200) is configured to:
receive, by a processing engine (210), a request message for open requests;
detect, by the processing engine (210), whether a number of open requests being processed is greater than a threshold of number of requests processed; and
store, by a database (220), the number of open requests.
8. The system (200) as claimed in claim 7, wherein the processing engine (210)
is further configured to:
in response to detecting that the number of open requests being processed is not greater than the threshold of number of requests processed, detect whether a size of a message queue is greater than a threshold of message queue;
in response to detecting that the size of the message queue is greater than the threshold of message queue, reject the request;
in response to detecting that the size of the message queue is not greater than the threshold of message queue, detect whether an ingress rate or an egress rate is greater than a threshold of ingress rate or egress rate;
in response to detecting that the ingress rate or the egress rate is greater than the threshold of ingress rate or egress rate, reject the request;
in response to detecting that the ingress rate or the egress rate is not greater than the threshold of ingress rate or egress rate, detect whether a number of sessions currently in a cache is greater than a threshold of number of sessions;
in response to detecting that the number of sessions currently in the cache is greater than the threshold of number of sessions, reject the request;
in response to detecting that the number of sessions currently in the cache is not greater than the threshold of number of sessions, detect whether severity of congestion at an application is higher than a message priority;
in response to detecting that the severity of congestion at the application is higher than the message priority, reject the request;
in response to detecting that the severity of congestion at the application is not higher than the message priority, detect whether a plurality of internal criteria to assess if the application is going into overload state are satisfied;
in response to detecting that the plurality of internal criteria to assess if the application is going into overload state are satisfied, reject the request;
in response to detecting that the plurality of internal criteria to assess if the application is going into overload state are not satisfied, process the request; and
send a response.
9. The system (200) as claimed in claim 8, wherein the plurality of internal criteria includes a central processing unit (CPU) usage, a memory consumption, a cache consumption, a number of threads, a database (DB) size, a number of sessions, stale sessions, a plurality of channels, and a plurality of listening ports.
10. The system (200) as claimed in claim 7, wherein a plurality of NFs is configured to define a plurality of defined message queues for a plurality of defined interfaces with a plurality of defined configurable codes.
11. The system (200) as claimed in claim 10, wherein the NF is configured to provide three defined thresholds for raising alarms and corresponding defined alarm thresholds.
12. The system (200) as claimed in claim 7, further configured to: reject the request message based on a priority defined by the NFs or a user defined priority, wherein the user defined priority is based on an interface, a plurality of service operations, and a plurality of message types, and wherein the plurality of message types includes the request message, a message answer, or both.
13. A user equipment (UE) (104) communicatively coupled with a network (106), wherein the coupling comprises steps of:
receiving, by a processing engine (210), a request message for open requests (402);
detecting, by the processing engine (210), whether a number of open requests being processed is greater than a threshold of number of requests processed (404);
in response to detecting that the number of open requests being processed is not greater than the threshold of number of requests processed, detecting, by the processing engine (210),
whether size of a message queue is greater than a threshold of message queue (408);
in response to detecting that the size of the message queue is not greater than the threshold of message queue, detecting, by the processing engine (210), whether an ingress rate or an egress rate is greater than a threshold of ingress rate or egress rate (410);
in response to detecting that the ingress rate or the egress rate is not greater than the threshold of ingress rate or egress rate, detecting, by the processing engine (210), whether a number of sessions currently in a cache is greater than a threshold of number of sessions (412);
in response to detecting that the number of sessions currently in the cache is not greater than the threshold of number of sessions, detecting, by the processing engine (210), whether severity of congestion at an application is higher than a message priority (414);
in response to detecting that the severity of congestion at the application is not higher than the message priority, detecting, by the processing engine (210), whether a plurality of internal criteria to assess if the application is going into overload state are satisfied (416);
in response to detecting that the plurality of internal criteria to assess if the application is going into overload state are not satisfied, processing, by the processing engine (210), the request message (418); and
sending, by the processing engine (210), a response message (420).