
Method And System For Optimising Latency Associated With A Network

Abstract: The present disclosure relates to a method [200] and a system [100] for optimising latency associated with a network. The disclosure encompasses receiving, by a transceiver unit [102] from a first network function, a connection request; identifying, by an identification unit [104], a set of network functions based on the connection request; retrieving, by an analysis unit [106], a pending connection request counter data associated with the set of network functions; determining, by the analysis unit [106], a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data; identifying, by the identification unit [104], a target network function from the set of network functions based on the connection traffic flow priority; and routing, by a routing unit [108], the connection request based on the connection traffic flow priority associated with the target network function. [FIG. 2]


Patent Information

Application #:
Filing Date: 04 July 2023
Publication Number: 47/2024
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-06-09
Renewal Date:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Inventors

1. Sandeep Bisht
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
& THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR OPTIMISING LATENCY ASSOCIATED WITH
A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR OPTIMISING LATENCY ASSOCIATED WITH
A NETWORK
FIELD OF THE DISCLOSURE
[001] The present disclosure relates generally to the field of wireless communication systems. More particularly, the present disclosure relates to methods and systems for optimising latency associated with a network.
BACKGROUND
[002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[003] Wireless communication technology has undergone rapid evolution over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and primarily offered voice services. However, the advent of second-generation (2G) technology introduced digital communication and data services, along with the introduction of text messaging.
[004] The introduction of third generation (3G) technology marked a significant milestone, enabling high-speed internet access, mobile video calling, and location-based services. Subsequently, the fourth generation (4G) technology revolutionized wireless communication with faster data speeds, broader network coverage, and enhanced security features.
[005] In the field of telecommunication networks, one persistent challenge is the issue of average latency. Latency refers to the delay experienced in data transmission between network endpoints, directly impacting the quality and responsiveness of real-time applications. High latency can result in various issues, including delays in voice and video communication, lag in online gaming, and buffering in streaming services. These problems can lead to a frustrating user experience, hinder productivity, and limit the potential of telecommunication technologies.
[006] Over time, various solutions have been developed to improve the performance of communication devices and optimize the average latency associated with networks. However, existing solutions often face challenges such as ineffective prioritization and routing of data packets based on latency requirements. Additionally, prior solutions lack the ability to dynamically adapt to changing network conditions, resulting in suboptimal latency performance during peak traffic periods or in geographically dispersed networks. Furthermore, existing solutions do not adequately address the impact of network congestion, inefficient data compression techniques, or subpar signal processing algorithms, all of which contribute to latency issues. Moreover, prior art does not sufficiently consider the impact of latency on different types of applications, such as real-time voice and video communication, where even minimal delays can significantly degrade user experience.
[007] Thus, there exists an imperative need in the art to optimise latency associated with a network, which the present disclosure aims to address.
OBJECTS OF THE DISCLOSURE
[008] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[009] It is an object of the present disclosure to provide a system and a method for optimising latency associated with the network.
[010] It is another object of the present disclosure to provide a solution that identifies a target network function from one or more network functions based on a number of pending requests associated with each network function from the one or more network functions.
[011] It is yet another object of the present disclosure to provide a solution to determine a number of pending requests associated with each network function from the one or more network functions based on an active stream data associated with each connection point from the one or more connection points present in a network.
[012] It is yet another object of the disclosure to reduce overall latency in a 5G core network.
[013] It is yet another object of the disclosure to reduce the pending request queue at network functions (servers), which guards against anomalies at the network function (server) end and thereby increases the overall request success rate in the system.
[014] It is yet another object of the disclosure to provide a method for optimizing latency that reduces the overall request processing time.
SUMMARY OF THE DISCLOSURE
[015] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[016] An aspect of the present disclosure may relate to a method for optimising latency associated with a network. The method comprises receiving, by a transceiver unit at a Service Communication Proxy (SCP) from a first network function, a connection request. The method further comprises identifying, by an identification unit at the SCP, a set of network functions based on the connection request. Further, the method comprises retrieving, by an analysis unit at the SCP, a pending connection request counter data associated with the set of network functions. The method further comprises determining, by the analysis unit at the SCP, a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data. The method further comprises identifying, by the identification unit at the SCP, a target network function from the set of network functions based on the connection traffic flow priority. The method further comprises routing, by a routing unit from the SCP to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.
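The routing flow summarized above (receive a request, identify candidate network functions, read their pending-request counters, prioritize, select a target, and route) can be sketched as follows. This is an illustrative sketch only: the class and function names are invented for the example and do not appear in the specification, and a real SCP would operate over HTTP/2 service-based interfaces rather than on in-memory objects.

```python
from dataclasses import dataclass

@dataclass
class NetworkFunction:
    """A candidate producer NF known to the SCP (hypothetical model)."""
    name: str
    pending_requests: int = 0  # pending connection request counter

def route_connection_request(candidates):
    """Give the NF with the fewest pending requests the highest traffic
    flow priority, route the request to it, and bump its counter."""
    target = min(candidates, key=lambda nf: nf.pending_requests)
    target.pending_requests += 1  # the routed request is now in flight
    return target

# Three candidate NFs identified from the connection request.
nfs = [NetworkFunction("smf-1", 5),
       NetworkFunction("smf-2", 2),
       NetworkFunction("smf-3", 9)]
target = route_connection_request(nfs)  # picks "smf-2" (lowest counter)
```

The selection is a least-outstanding-requests policy: a fast-responding producer drains its counter quickly and therefore keeps attracting new requests, which is the load-sharing behaviour paragraph [050] describes.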

[017] In an aspect of the present disclosure, the pending connection request counter data associated with the set of network functions comprises at least a total number of pending connection requests associated with each network function from the set of network functions.
[018] In an aspect of the present disclosure, the connection traffic flow priority is one of a highest connection traffic flow priority and a lowest connection traffic flow priority.
[019] In an aspect of the present disclosure, the highest connection traffic flow priority is determined by the analysis unit for at least one network function from the set of network functions in an event the total number of pending connection requests associated with said network function is the lowest value among the set of network functions.
[020] In an aspect of the present disclosure, the lowest connection traffic flow priority is determined by the analysis unit for at least one network function from the set of network functions in an event the total number of pending connection requests associated with said network function is the highest value among the set of network functions.
[021] In an aspect of the present disclosure, the target network function from the set of network functions is identified by the identification unit based on at least the highest connection traffic flow priority.
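The priority rule described above (lowest pending count receives the highest connection traffic flow priority, highest pending count receives the lowest) can be illustrated with a short sketch. The function name, dictionary keys, and priority labels are hypothetical, chosen only to mirror the wording of the aspects above.

```python
def assign_priorities(pending_counts):
    """Label each NF 'highest', 'lowest', or 'intermediate' connection
    traffic flow priority from its pending-request count."""
    lo = min(pending_counts.values())
    hi = max(pending_counts.values())
    priorities = {}
    for nf, count in pending_counts.items():
        if count == lo:
            priorities[nf] = "highest"      # fewest pending requests
        elif count == hi:
            priorities[nf] = "lowest"       # most pending requests
        else:
            priorities[nf] = "intermediate"
    return priorities

# Pending connection request counter data for three candidate NFs.
p = assign_priorities({"nf-a": 4, "nf-b": 1, "nf-c": 7})
```

The target network function is then the one labelled "highest", matching paragraph [021].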
[022] Another aspect of the present disclosure may relate to a system for optimising a latency associated with a network. The system is configured at a Service Communication Proxy (SCP). The system comprises a transceiver unit, wherein the transceiver unit is configured to receive, from a first network function, a connection request. The system further comprises an identification unit, wherein the identification unit is configured to identify a set of network functions based on the connection request. The system further comprises an analysis unit, wherein the analysis unit is configured to retrieve a pending connection request counter data associated with the set of network functions. The analysis unit is further configured to determine a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data. Furthermore, the identification unit is further configured to identify a target network function from the set of network functions based on the connection traffic flow priority. The system further comprises a routing unit, wherein the routing unit is configured to route, to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.
[023] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for optimising latency associated with a network. The instructions include an executable code which, when executed by one or more units of a system, causes a transceiver unit [102] of the system to receive a connection request from a first network function; an identification unit [104] of the system to identify a set of network functions based on the connection request; an analysis unit [106] of the system to retrieve a pending connection request counter data associated with the set of network functions and to determine a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data; the identification unit [104] to further identify a target network function from the set of network functions based on the connection traffic flow priority; and a routing unit [108] of the system to route, to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.
BRIEF DESCRIPTION OF DRAWINGS
[024] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[025] FIG. 1A illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture [101].

[026] FIG. 1B illustrates an exemplary block diagram of a system [100] for optimising latency associated with a network, in accordance with exemplary embodiments of the present disclosure.
[027] FIG. 2 illustrates an exemplary method flow diagram indicating the process [200] for optimising latency associated with a network, in accordance with exemplary embodiments of the present disclosure.
[028] FIG. 3 illustrates an exemplary scenario block diagram [300] of a system for maintaining a number of connection requests for optimising latency associated with a network, in accordance with exemplary embodiments of the present disclosure.
[029] FIG. 4 illustrates an exemplary scenario method flow diagram indicating the process [400] for optimising latency associated with a network, in accordance with exemplary embodiments of the present disclosure.
[030] FIG. 5 illustrates an exemplary block diagram of a computing device [1000] upon which an embodiment of the present disclosure may be implemented.
[031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[032] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[033] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[034] It should be noted that the terms "mobile device", "user equipment", "user device", “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the disclosure. These terms are not intended to limit the scope of the disclosure or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The disclosure is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the disclosure as defined herein.
[035] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[036] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.

[037] In addition, each block may indicate modules, segments, or codes including one or more executable instructions for executing specific logical function(s). Further, functions mentioned in the blocks may occur out of the illustrated sequence in some alternative embodiments. For example, two blocks that are contiguously illustrated may in fact be performed simultaneously, or be performed in a reverse sequence depending on corresponding functions.
[038] One or more modules, units, components (including but not limited to analysis unit, identification unit, alert unit, determination unit and fetching unit) used herein may be software modules configured via hardware modules/processors, or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
[039] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
[040] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices and/or systems, and transmitting data to the other user devices and/or systems. The user equipment may have a processor, a display, a memory, a battery and an input-means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[041] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[042] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, dynamic advancement across various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[043] The 5G core is responsible for managing a wide variety of network functions within the mobile network that make it possible for users to communicate. The network functions include, but are not limited to, mobility management, authentication, authorization, data management, policy management, and quality of service (QoS) for end users.

[044] Radio Access Technology (RAT) refers to the technology used by mobile devices/user equipment (UE) to connect to a cellular network. It refers to the specific protocols and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to different types of networks and provide optimal performance based on the available network resources.
[045] It is to be noted that the term “latency” as used herein, associated with a telecommunication network, refers to the time delay experienced when data is transmitted from one point to another within the network. Further, it is a crucial measure of network performance and directly affects the user experience in real-time applications such as voice and video communication, online gaming, and streaming services.
[046] As used herein, “first network function” and “target network function” refer to specific network functions provided within a 5G network architecture, such as the Access and Mobility Management Function (AMF), Session Management Function (SMF), and User Plane Function (UPF).
[047] As discussed in the background section, the currently known solutions for optimising latency associated with the network have several shortcomings, such as the lack of adaptability to varying network conditions and traffic patterns. Existing solutions often employ static configurations and routing algorithms, which fail to dynamically respond to changing network loads and congestion levels. Consequently, this leads to suboptimal latency performance during periods of high demand. Moreover, some prior solutions focus primarily on optimizing latency for specific types of applications, such as data transfer, while neglecting the latency requirements of other critical applications like real-time communication or video streaming. This selective approach limits their overall effectiveness in providing a comprehensive solution. Additionally, the prior known solutions do not adequately address the issue of latency in geographically dispersed networks, where data packets may traverse long distances and encounter additional delays. As a result, latency issues persist in these scenarios.
[048] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a novel solution for efficiently allocating network function service instances in a network environment to optimise latency.
[049] Further, the present solution provides a significant technical advancement by optimizing latency within the network through intelligently distributing incoming connection requests based on the number of pending requests; the solution thereby significantly minimizes delays and enhances the overall responsiveness of the network. This optimization strategy enhances the user experience, reduces the processing time, and further improves the throughput within the network infrastructure. Moreover, the ability of the present solution to intelligently allocate resources based on real-time data contributes to the efficient utilization of network function service instances, resulting in a technically superior and high-performing network environment.
[050] In a 5G core network, the Service Communication Proxy (SCP) is used for the selection and routing of requests from a consumer NF (client) to a producer NF (server). The SCP decides the appropriate producer network function for each request so as to reduce overall latency. For the 5G core network, the system and method of the present disclosure may be implemented via the SCP; however, the implementation may also be used for any Hypertext Transfer Protocol 2 (HTTP/2) based client-server communication in direct mode, or for indirect communication between client and server involving proxies. Further, in the present disclosure, the one or more network functions whose responses are received by the SCP in less time get a greater share of the load in comparison to the one or more network functions whose responses are received later. Also, the SCP maintains a counter for the number of pending requests, derived from the one or more current active streams for each connection.
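The per-connection counter described above, derived from the currently active streams on each connection, can be sketched as simple bookkeeping. The class and method names are invented for illustration, assuming a production SCP would hook these updates into its HTTP/2 stream lifecycle events.

```python
class ConnectionCounter:
    """Tracks pending requests per producer-NF connection, derived from
    the set of currently active HTTP/2 streams (simplified sketch)."""

    def __init__(self):
        self.active_streams = {}  # connection id -> set of open stream ids

    def on_request_sent(self, conn_id, stream_id):
        # A new stream opens when a request is forwarded on a connection.
        self.active_streams.setdefault(conn_id, set()).add(stream_id)

    def on_response_received(self, conn_id, stream_id):
        # The stream closes once its response arrives.
        self.active_streams.get(conn_id, set()).discard(stream_id)

    def pending(self, conn_id):
        # Pending requests = number of still-open streams.
        return len(self.active_streams.get(conn_id, set()))

c = ConnectionCounter()
c.on_request_sent("conn-smf-1", 1)
c.on_request_sent("conn-smf-1", 3)
c.on_response_received("conn-smf-1", 1)  # one request still pending
```

Because a fast producer's streams close sooner, its pending count stays low, which in turn earns it a higher connection traffic flow priority at routing time.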
[051] The present disclosure provides a solution for optimising latency associated with a network that comprises receiving, by a transceiver unit at a Service Communication Proxy (SCP) from a first network function, a connection request. Thereafter, a set of network functions is identified by an identification unit based on the connection request. Further, an analysis unit retrieves a pending connection request counter data associated with the set of network functions. Thereafter, a connection traffic flow priority associated with each network function from the set of network functions is determined based on the pending connection request counter data via a determination unit. Further, upon determination of the connection traffic flow priority, the identification unit identifies a target network function from the set of network functions based on the connection traffic flow priority. Additionally, the connection request is routed, based on the connection traffic flow priority associated with the target network function, via a routing unit from the SCP to the target network function, thereby optimizing latency associated with the network. Overall, the present solution provides an efficient method for allocating network function service instances in a network, considering pending requests and optimizing latency, thereby improving the performance and responsiveness of the network.
[052] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[053] Referring to FIG. 1A, an exemplary block diagram representation of the 5th generation core (5GC) network architecture is shown. As shown in FIG. 1A, the 5GC network architecture [101] includes a user equipment (UE) [101a], a radio access network (RAN) [101b], a 5G Core Network, and a Data Network (DN) [101p]. The 5G Core Network includes an Access and Mobility Management Function (AMF) [101c], a Session Management Function (SMF) [101d], a Service Communication Proxy (SCP) [101e], an Authentication Server Function (AUSF) [101f], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [101g], a Network Slice Selection Function (NSSF) [101h], a Network Exposure Function (NEF) [101i], a Network Repository Function (NRF) [101j], a Policy Control Function (PCF) [101k], a Unified Data Management (UDM) [101l], an Application Function (AF) [101m], and a User Plane Function (UPF) [101n].
[054] The User Equipment (UE) [101a] interfaces with the network via the Radio Access Network (RAN) [101b]. The RAN [101b] in the 5G architecture is also called New Radio or nG-RAN, and these terms may be used interchangeably herein. The Radio Access Network (RAN) [101b] is the part of a mobile telecommunications system that connects user equipment (UE) [101a] to the core network (CN) and provides access to different types of networks (e.g., 5G, LTE). It consists of radio base stations and the radio access technologies that enable wireless communication.
[055] The Access and Mobility Management Function (AMF) [101c] manages connectivity and mobility. When a UE [101a] is active, i.e., it is interacting with the 5G network, e.g., by using data/call functionalities, the AMF [101c] knows and maintains the location of the UE [101a] within the network. The AMF [101c] is configured to maintain the tracking area or registration area of the UE [101a] in case the UE is inactive. The AMF [101c] is configured to communicate with other network functions/elements, such as the Session Management Function (SMF) [101d], to ensure that the UE [101a] is allowed and is able to avail the services offered by the network.
[056] Particularly, the Access and Mobility Management Function (AMF) [101c] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[057] The Session Management Function (SMF) [101d] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[058] The Service Communication Proxy (SCP) [101e] is a network function in the 5G core that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[059] The Authentication Server Function (AUSF) [101f] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[060] The Network Slice Specific Authentication and Authorization Function (NSSAAF) [101g] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[061] The Network Slice Selection Function (NSSF) [101h] is a network function
responsible for selecting the appropriate network slice for a UE based on factors such as
subscription, requested services, and network policies.
[062] The Network Exposure Function (NEF) [101i] is a network function that exposes
capabilities and services of the 5G network to external applications, enabling integration
with third-party services and applications.
[063] The Network Repository Function (NRF) [101j] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[064] The Policy Control Function (PCF) [101k] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[065] The Unified Data Management (UDM) [101l] is a network function that centralizes
the management of subscriber data, including authentication, authorization, and subscription information.
[066] The Application Function (AF) [101m] is a network function that represents
external applications interfacing with the 5G core network to access network capabilities
and services.
[067] The User Plane Function (UPF) [101n] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[068] The Data Network (DN) [101p] represents external networks or services that users connect to through the mobile network, such as the internet or enterprise networks.
[069] Referring to FIG.1B, an exemplary block diagram of a system [100] for optimising a latency associated with a network is shown, in accordance with the exemplary embodiments of the present disclosure. The system [100] comprises a transceiver unit [102], an identification unit [104], an analysis unit [106] and a routing unit [108]. Also, all of the components/units of the system [100] are assumed to be connected to each other unless otherwise indicated below. Also, although only a few units are shown in FIG. 1B, the system [100] may comprise any number of such units as required to implement the features of the present disclosure.
[070] Additionally, the identification unit [104], the analysis unit [106] and the routing unit [108] are processors. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP (digital signal processor) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc.
[071] Also, the transceiver unit [102] includes a transmitter having capabilities to transmit data/signals and optionally also a receiver unit having capabilities to receive data/signals.
[072] The system [100] is configured at a network node (such as at a Service Communication Proxy (SCP)) of a network for optimising latency associated with the network, with the help of the interconnection between the components/units of the system [100].
[073] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[074] Particularly, according to the present disclosure for optimising latency associated with a network, the system [100] is configured at the Service Communication Proxy (SCP) node of the network. Initially, the transceiver unit [102] of the system [100] is configured to receive at the Service Communication Proxy (SCP), from a first network function, a connection request. The first network function may refer to a specific network function that is provided within a 5G network architecture, such as the Access and Mobility Management Function (AMF), Session Management Function (SMF) or User Plane Function (UPF), etc. The connection request is received to initiate a connection between a consumer network function (i.e. client or user) and a producer network function (i.e. server).
[075] Upon receiving the connection request, the identification unit [104], which is connected at least with the transceiver unit [102], identifies a set of network functions based on the connection request. For example, to identify the set of network functions, the identification unit [104] is configured to analyse the connection request by using a set of parameters like protocol type, source and destination addresses, and/or payload content to extract the set of network functions.
[076] For example, a Hypertext Transfer Protocol (HTTP) request, i.e., the protocol type, includes one or more path headers (contexts), wherein each path represents a service belonging to an NF. For instance, an HTTP/2 header includes one or more Public Land Mobile Network (PLMN) identifiers that identify circle names. Also, the NF registrations and NF profiles are stored at the SCP controller and shared with the SCP instances; based on the received NF profile, the SCP identifies where to route the received request.
[077] The set of network functions refers to one or more individual network functions that are working independently or in connection with other network functions within a network architecture. For example, the set of network functions may include at least one of a User Plane Function (UPF), Session Management Function (SMF), Access and Mobility Management Function (AMF), Policy Control Function (PCF) and Network Exposure Function (NEF).
[078] Thereafter, upon identification of the set of network functions, the analysis unit [106] that is connected at least with the identification unit [104] is configured to retrieve a
pending connection request counter data associated with the set of network functions. For example, the analysis unit [106] processes one or more details associated with the set of network functions to extract the pending connection request counter data.
[079] Additionally, the present disclosure encompasses that the pending connection request counter data associated with the set of network functions comprises at least a total number of pending connection requests associated with each network function from the set of network functions.
[080] Furthermore, the analysis unit [106] is also configured to determine a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data. Additionally, the present disclosure encompasses that the connection traffic flow priority is one of a highest connection traffic flow priority and a lowest connection traffic flow priority.
[081] Further, the connection traffic flow priority refers to an urgency assigned to different network functions based on their respective pending connection request loads. This priority determines the order in which connection requests are processed or allocated network resources, with higher-priority functions receiving preferential treatment over lower-priority ones.
[082] Furthermore, the present disclosure encompasses that the analysis unit [106] is configured to determine the highest connection traffic flow priority associated with at least one network function from the set of network functions, in an event the total number of pending connection requests associated with said network function is determined as the lowest value of the total number of pending connection requests.
[083] Also, the present disclosure encompasses that the analysis unit [106] is configured to determine the lowest connection traffic flow priority associated with at least one network function from the set of network functions, in an event the total number of pending connection requests associated with said network function is determined as the highest value of the total number of pending connection requests.
[084] In addition to this, the identification unit [104] is further configured to identify a target network function from the set of network functions based on the connection traffic flow priority. Further, the present disclosure encompasses that the identification unit [104] is configured to identify the target network function from the set of network functions based on at least the highest connection traffic flow priority.
[085] Further, upon identification of the target network function from the set of network functions based on the connection traffic flow priority, the routing unit [108] routes, to the target network function, the connection request based on the connection traffic flow priority associated with the target network function. Additionally, the routing unit [108] is connected at least to the analysis unit [106] and the identification unit [104].
[086] For example, consider that the system [100] of the present disclosure is implemented in a 5G network with two network instances, i.e. network instance A and network instance B, which have the same function type. The analysis unit [106] determines the traffic flow priority based on the pending connection request counter data. If network instance A has a large number of pending requests and network instance B has a lesser number of pending requests, the analysis unit [106] shall assign the highest priority to network instance B, as network instance B has the fewest pending requests.
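The priority determination described above can be sketched as follows. This is a minimal, illustrative example, not the disclosure's implementation: the function name `assign_priorities`, the instance labels, and the intermediate priority level are all assumptions (the disclosure itself only names a highest and a lowest connection traffic flow priority).

```python
# Hypothetical sketch: map each NF instance to a connection traffic flow
# priority based on its total number of pending connection requests.
# The fewest pending requests receive the highest priority and vice versa.

def assign_priorities(pending_counts: dict) -> dict:
    """pending_counts: {instance_name: total pending connection requests}."""
    lowest = min(pending_counts.values())
    highest = max(pending_counts.values())
    priorities = {}
    for nf, pending in pending_counts.items():
        if pending == lowest:
            priorities[nf] = "highest"      # fewest pending -> highest priority
        elif pending == highest:
            priorities[nf] = "lowest"       # most pending -> lowest priority
        else:
            priorities[nf] = "intermediate" # assumption: a middle tier
    return priorities

counts = {"instance_A": 120, "instance_B": 7}
print(assign_priorities(counts))
```

With the counts above, instance B would be assigned the highest priority, matching the scenario in paragraph [086].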
[087] Referring to FIG.2, an exemplary method flow diagram [200] for optimising latency associated with a network, in accordance with exemplary embodiments of the present disclosure, is shown. The disclosure encompasses that the method [200] is performed by the system [100] at the Service Communication Proxy (SCP) node. Also, as shown in FIG.2, the method [200] starts at step [202].
[088] At step [204], the method [200] as disclosed by the present disclosure comprises receiving, by a transceiver unit [102] at the Service Communication Proxy (SCP) from a first network function, a connection request. The first network function may refer to a specific network function that is provided within a 5G network architecture, such as the Access and Mobility Management Function (AMF), Session Management Function (SMF) or User Plane Function (UPF), etc. The connection request is received to initiate a connection between a consumer network function (i.e. client or user) and a producer network function (i.e. server).
[089] At step [206], the method [200] as disclosed by the present disclosure comprises identifying, by the identification unit [104] at the SCP, a set of network functions associated with the network based on the connection request. The set of network functions refers to a collection or group of individual network functions within a network architecture. For instance, in a 5G network architecture, the set of network functions may be selected from a User Plane Function (UPF), Session Management Function (SMF), Access and Mobility Management Function (AMF), Policy Control Function (PCF) and Network Exposure Function (NEF).
[090] For example, the identification unit [104] may analyse the connection request by using a set of parameters like protocol type, source and destination addresses, and payload content to extract the set of network functions.
[091] At step [208], the method [200] as disclosed by the present disclosure comprises retrieving, by the analysis unit [106] at the SCP, a pending connection request counter data associated with the set of network functions.
[092] Further, the present disclosure encompasses that the pending connection request counter data associated with the set of network functions comprises at least a total number of pending connection requests associated with each network function from the set of network functions.
[093] For example, the analysis unit [106] processes the set of network functions to extract the pending connection request counter data.
[094] At step [210], the method [200] as disclosed by the present disclosure comprises determining, by the analysis unit [106] at the SCP in the network, a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data.
[095] Further, the connection traffic flow priority refers to an urgency assigned to different network functions based on their respective pending connection request loads. This priority determines the order in which connection requests are processed or allocated
network resources, with higher-priority functions receiving preferential treatment over lower-priority ones.
[096] The present disclosure encompasses that the connection traffic flow priority is one of a highest connection traffic flow priority and a lowest connection traffic flow priority.
[097] Further, the present disclosure encompasses that the highest connection traffic flow priority associated with at least one network function from the set of network functions is determined by the analysis unit [106], in an event the total number of pending connection requests associated with said network function is determined as the lowest value of the total number of pending connection requests.
[098] Also, the lowest connection traffic flow priority associated with at least one network function from the set of network functions is determined by the analysis unit [106], in an event the total number of pending connection requests associated with said network function is determined as the highest value of the total number of pending connection requests.
[099] At step [212], the method [200] as disclosed by the present disclosure comprises identifying, by the identification unit [104] at the SCP in the network, a target network function from the set of network functions based on the connection traffic flow priority. The present disclosure encompasses that the target network function from the set of network functions is identified by the identification unit [104] based on at least the highest connection traffic flow priority.
[100] Further, upon identification of the target network function from the set of network functions based on the connection traffic flow priority, the method proceeds to step [214], in which the method [200] as disclosed by the present disclosure comprises routing, by a routing unit [108] from the SCP in the network to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.
[101] Thereafter, the method [200] terminates at step [216].
[102] Referring to FIG.3, an exemplary scenario block diagram of a system for maintaining a number of connection requests for optimising latency associated with a network is shown, in accordance with exemplary embodiments of the present disclosure. Also, although only a few units are shown in FIG. 3, the system [300] may comprise any number of such units as required to implement the features of the present disclosure in a 5G architecture. The disclosure encompasses that the system [300] as depicted in FIG.3 works in conjunction with the system [100] as depicted in FIG.1B to perform the method [200] as depicted in FIG.2.
[103] FIG. 3 depicts an exemplary scenario where a 5G NF consumer [302] (i.e. client or consumer) sends a connection request to a 5G Service Communication Proxy (SCP) [304] for requesting a service from at least one 5G NF producer [306, 308]. Further, the 5G SCP [304] maintains a counter for the number of pending requests derived from current active streams for each connection. For every endpoint, the pending requests are calculated by summing the number of active streams of all the connections established to a particular endpoint. Further, based on the pending requests of each endpoint, the client decides to send the request to the endpoint having the lowest pending requests. When the pending request queue size is low, the traffic flow has the highest priority (as shown between the 5G SCP-Proxy [304] and 5G NF Producer Instance A [306]) and when the pending request queue size is high, the traffic flow has the lowest priority (as shown between the 5G SCP-Proxy [304] and 5G NF Producer Instance B [308]).
[104] Further, there are multiple endpoints at each producer instance and each endpoint is related to a specific service provided by a producer. For instance, an NF producer such as the 5G NF producer [306, 308] has a plurality of endpoints for each service. When a consumer requests the service of a producer using an endpoint, the SCP selects the endpoint based on the queue at the endpoint of each producer instance (A and B).
[105] Referring to FIG.4, an exemplary scenario method flow diagram [400] for optimising latency associated with a network, in accordance with exemplary embodiments of the present disclosure, is shown. The present disclosure encompasses that the method [400] is performed by the system [100]. As shown in FIG.4, the method [400] starts at step [402].
[106] At step [404], the method [400] as disclosed by the present disclosure comprises receiving, by a transceiver unit [102], a target network function service instance request (similar to the connection request) associated with a network. For example, the transceiver unit [102] receives a request from a 5G Network Function (NF) consumer for accessing a service of a 5G NF producer (i.e. service provider).
[107] Next, at step [406], the method [400] as disclosed by the present disclosure comprises identifying, via an identification unit [104], a network function data associated with one or more network functions, wherein the network function data comprises at least a network function ID associated with each network function from the one or more network functions. Further, a network function ID refers to a unique identifier that helps in uniquely identifying and distinguishing different network functions from each other.
[108] The present disclosure encompasses that each network function from the one or more network functions is associated with at least a network function service instance. Further, the network function service instance provides services in a network.
[109] For instance, a service named “Nudm_UECM” provides the NF consumer with information related to the User Equipment (UE)’s transactions, for example, the UE’s serving NF identifier, UE status and the like. Further, the Nudm_UECM (network function service instance) allows the NF consumer to register and deregister the information for the serving UE in the UDM.
[110] Next, at step [408], the method [400] as disclosed by the present disclosure comprises receiving, by an analysis unit [106], a data associated with a set of connection points, wherein each connection point from the set of connection points is associated with at least one network function from the one or more network functions.
[111] Next, at step [410], the method [400] as disclosed by the present disclosure comprises retrieving, via the analysis unit [106], a connection point data associated with each connection point from the one or more connection points. The connection point data comprises at least an active stream data associated with each connection point from the one or more connection points. Further, the active stream data comprises a number of pending
network function service instances associated with each connection point from the one or more connection points.
[112] Next, at step [412], the method [400] as disclosed by the present disclosure comprises determining, by the analysis unit [106], a number of pending requests associated with each network function from the one or more network functions based on the active stream data associated with each connection point from the one or more connection points.
[113] The present disclosure encompasses that the number of pending requests is associated with a number of pending network function service instances associated with each network function from the one or more network functions.
[114] Next, at step [414], the method [400] as disclosed by the present disclosure comprises identifying, by the identification unit [104], a target network function from the one or more network functions based on the number of pending requests associated with each network function from the one or more network functions. For example, the identification unit [104] may utilize the network function data that was identified at step [406] for identification of the target network function.
[115] Next, at step [416], the method [400] as disclosed by the present disclosure comprises allocating, by a routing unit [108], the target network function service instance to the target network function from the one or more network functions.
[116] Thereafter, the method [400] terminates at step [418].
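Steps [408] to [416] of the method [400] can be condensed into one short sketch. All names here (`allocate_service_instance`, the NF and connection-point identifiers, the dictionary shapes) are illustrative assumptions, not the disclosure's data structures; the sketch only shows the derivation of per-NF pending counts from active stream data and the allocation to the least-loaded target NF.

```python
# Hedged sketch of steps [408]-[416] of method [400]; names are assumed.

def allocate_service_instance(request, nf_data, connection_points):
    """request: the target NF service instance request (step [404]).
    nf_data: {nf_id: [connection_point_id, ...]} (step [406]).
    connection_points: {connection_point_id: active_stream_count}
    (steps [408]-[410])."""
    # Step [412]: pending requests per NF = sum of active streams over
    # the connection points associated with that NF.
    pending = {
        nf_id: sum(connection_points[cp] for cp in cps)
        for nf_id, cps in nf_data.items()
    }
    # Step [414]: the target NF is the one with the fewest pending requests.
    target_nf = min(pending, key=pending.get)
    # Step [416]: allocate the requested service instance to the target NF.
    return {"request": request, "allocated_to": target_nf}

nf_data = {"nf-1": ["cp-1", "cp-2"], "nf-2": ["cp-3"]}
points = {"cp-1": 4, "cp-2": 3, "cp-3": 2}
print(allocate_service_instance("Nudm_UECM", nf_data, points))
```

With the sample data, nf-1 carries 7 pending requests against nf-2's 2, so the service instance would be allocated to nf-2.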
[117] FIG. 5 illustrates an exemplary block diagram of a computing device [1000] (also referred to herein as computing system [1000]) upon which an embodiment of the present disclosure may be implemented. In an implementation, the computing device [1000] implements the method [200] for optimising latency associated with a network using the system [100]. In another implementation, the computing device [1000] itself implements the method [200] for optimising latency associated with a network using one or more units configured within the computing device [1000], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[118] The computing device [1000] may include a bus [1002] or other communication mechanism for communicating information, and a hardware processor [1004] coupled with the bus [1002] for processing information. The hardware processor [1004] may be, for example, a general-purpose microprocessor. The computing device [1000] may also include a main memory [1006], such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus [1002] for storing information and instructions to be executed by the processor [1004]. The main memory [1006] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [1004]. Such instructions, when stored in non-transitory storage media accessible to the processor [1004], render the computing device [1000] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [1000] further includes a read only memory (ROM) [1008] or other static storage device coupled to the bus [1002] for storing static information and instructions for the processor [1004].
[119] A storage device [1010], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [1002] for storing information and instructions. The computing device [1000] may be coupled via the bus [1002] to a display [1012], such as a cathode ray tube (CRT), for displaying information to a computer user. An input device [1014], including alphanumeric and other keys, may be coupled to the bus [1002] for communicating information and command selections to the processor [1004]. Another type of user input device may be a cursor controller [1016], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [1004], and for controlling cursor movement on the display [1012]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[120] The computing device [1000] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [1000], causes or programs the computing device [1000] to be a special-purpose machine. According to one embodiment, the techniques herein are performed by the computing device [1000] in response to the processor [1004] executing one or more sequences of one or more instructions contained in the main memory [1006]. Such instructions may be read into the
main memory [1006] from another storage medium, such as the storage device [1010].
Execution of the sequences of instructions contained in the main memory [1006] causes
the processor [1004] to perform the process steps described herein. In alternative
embodiments, hard-wired circuitry may be used in place of or in combination with software
instructions.
[121] The computing device [1000] also may include a communication interface [1018] coupled to the bus [1002]. The communication interface [1018] provides a two-way data communication coupling to a network link [1020] that is connected to a local network [1022]. For example, the communication interface [1018] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [1018] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [1018] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[122] The computing device [1000] can send messages and receive data, including program code, through the network(s), the network link [1020] and the communication interface [1018]. In the Internet example, a server [1030] might transmit a requested code for an application program through the Internet [1028], the ISP [1026], the host [1024], the local network [1022] and the communication interface [1018]. The received code may be executed by the processor [1004] as it is received, and/or stored in the storage device [1010], or other non-volatile storage for later execution.
[123] Also, the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for optimising latency associated with a network. The instructions include an executable code which, when executed by one or more units of a system, causes a transceiver unit of the system to receive a connection request from a first network function; an identification unit of the system to identify a set of network functions based on the connection request; an analysis unit of the system to retrieve a pending connection request counter data associated with the set of network functions and to determine a connection traffic flow priority associated with each network function from the set of
network functions based on the pending connection request counter data; the identification unit to further identify a target network function from the set of network functions based on the connection traffic flow priority; and a routing unit of the system to route, to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.
[124] Moreover, it is pertinent to note that the method and system as encompassed by the present disclosure may be applicable in a telecommunication organization that is responsible for handling a large volume of voice calls, video calls, and data transfers. The present disclosure may assist in optimizing latency and ensuring a smooth communication experience for users. For instance, in an exemplary scenario, initially, the SCP in the 5G network receives one or more connection requests via the transceiver unit [102] of the system [100]. As users initiate one or more calls or data transfers, the user devices send the connection requests to the network that is managed by the telecommunication organization. Further, the identification unit [104] of the system [100] analyses these connection requests and identifies various network functions required to establish and maintain communication sessions. These functions may include routing calls, managing data transfers, and handling signalling protocols. Further, the analysis unit [106] of the system [100] retrieves data of pending connection requests associated with each network function. The analysis unit [106] may also monitor the number of active calls, ongoing data transfers, and queued requests waiting to be processed. Based on the processing, the analysis unit [106] determines the traffic flow priority for each network function. For example, if there is a surge in voice call requests during peak hours, prioritizing voice call routing becomes essential to minimize call setup time and ensure clear communication. Using this prioritization, the identification unit [104] selects the target network function with the highest priority. For instance, if there is congestion in the voice call routing function, prioritizing this function ensures that incoming voice calls are promptly routed without delay. Finally, the routing unit [108] of the system [100] directs incoming connection requests to the target network function based on its priority. If voice call routing has the highest priority, the routing unit [108] ensures that new voice call requests are expedited to the routing function, optimizing latency and maintaining call quality. Hence, by implementing the method and system of the present disclosure, the telecommunication networks may assist in efficiently managing network traffic, prioritizing critical communication functions, and optimizing latency to deliver a seamless communication experience for users.
[125] As is evident from the above, the present disclosure provides a technically advanced solution in the field of network function service instance allocation. By introducing a novel approach that considers the number of pending requests associated with each network function, the solution enables more efficient resource utilization and improved network performance. The utilization of active stream data and network function IDs enhances the accuracy and effectiveness of the allocation process. This technical advancement addresses the challenges of latency optimization in network environments, providing a more streamlined and responsive system. Furthermore, by considering the handshake limit for total pending requests in each client-server communication for HTTP/2 connections, the present disclosure aims to mitigate scenarios where the server becomes overloaded by pending requests. The present disclosure helps prevent instances where the server reaches its maximum capacity for pending requests, thereby avoiding the need for the client to discard additional request messages.
[126] Further, the present solution provides a technical effect of optimizing latency within a network: by dynamically allocating the target network function service instance to the appropriate network function based on the number of pending requests, the solution effectively reduces delays and enhances the overall responsiveness of the network. This optimization leads to improved user experience,
reduced processing times, and increased throughput within the network infrastructure. The solution's ability to intelligently allocate resources based on real-time data contributes to the efficient utilization of network function service instances, resulting in a technically superior and high-performing network environment.
[127] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many other embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.

I/We claim:
1. A method [200] for optimising latency associated with a network, the method
comprising:
- receiving, by a transceiver unit [102] at a Service Communication Proxy (SCP)
from a first network function, a connection request;
- identifying, by an identification unit [104] at the SCP, a set of network functions based on the connection request;
- retrieving, by an analysis unit [106] at the SCP, a pending connection request counter data associated with the set of network functions;
- determining, by the analysis unit [106] at the SCP, a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data;
- identifying, by the identification unit [104] at the SCP, a target network function from the set of network functions based on the connection traffic flow priority; and
- routing, by a routing unit [108] from the SCP to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.

2. The method [200] as claimed in claim 1, wherein the pending connection request counter data associated with the set of network functions comprises at least a total number of pending connection requests associated with each network function from the set of network functions.
3. The method [200] as claimed in claim 1, wherein the connection traffic flow priority is one of a highest connection traffic flow priority and a lowest connection traffic flow priority.
4. The method [200] as claimed in claim 3, wherein the highest connection traffic flow priority associated with at least one network function from the set of network functions is determined by the analysis unit [106], in an event the total number of pending connection requests associated with said network function is determined as the lowest value associated with the total number of pending connection requests.

5. The method [200] as claimed in claim 3, wherein the lowest connection traffic flow priority associated with at least one network function from the set of network functions is determined by the analysis unit [106], in an event the total number of pending connection requests associated with said network function is determined as the highest value associated with the total number of pending connection requests.
6. The method [200] as claimed in claim 3, wherein the target network function from the set of network functions is identified by the identification unit [104] based on at least the highest connection traffic flow priority.
7. A system [100] for optimising latency associated with a network, wherein the system [100] is configured at a Service Communication Proxy (SCP), the system comprising:

- a transceiver unit [102], wherein the transceiver unit [102] is configured to receive from a first network function, a connection request;
- an identification unit [104] connected at least to the transceiver unit [102], wherein the identification unit [104] is configured to identify a set of network functions based on the connection request;
- an analysis unit [106], connected at least to the identification unit [104], wherein the analysis unit [106] is configured to:
o retrieve a pending connection request counter data associated with the set of network functions, and
o determine a connection traffic flow priority associated with each network function from the set of network functions based on the pending connection request counter data,
wherein the identification unit [104] is further configured to identify a target network function from the set of network functions based on the connection traffic flow priority; and
- a routing unit [108], connected at least to the analysis unit [106] and the identification unit [104], the routing unit [108] is configured to route to the target network function, the connection request based on the connection traffic flow priority associated with the target network function.
8. The system [100] as claimed in claim 7, wherein the pending connection request counter data associated with the set of network functions comprises at least a total number of pending connection requests associated with each network function from the set of network functions.
9. The system [100] as claimed in claim 7, wherein the connection traffic flow priority is one of a highest connection traffic flow priority and a lowest connection traffic flow priority.
10. The system [100] as claimed in claim 9, wherein the analysis unit [106] is configured to determine the highest connection traffic flow priority associated with at least one network function from the set of network functions, in an event the total number of pending connection requests associated with said network function is determined as the lowest value associated with the total number of pending connection requests.
11. The system [100] as claimed in claim 9, wherein the analysis unit [106] is configured to determine the lowest connection traffic flow priority associated with at least one network function from the set of network functions, in an event the total number of pending connection requests associated with said network function is determined as the highest value associated with the total number of pending connection requests.
12. The system [100] as claimed in claim 9, wherein the identification unit [104] is configured to identify the target network function from the set of network functions, based on at least the highest connection traffic flow priority.
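The six steps of claim 1 (receive, identify the set of network functions, retrieve counters, determine priority, identify the target, route) can be summarised, purely for illustration, as a single pipeline. Every identifier below is hypothetical; the sketch assumes an in-memory registry and counter table standing in for the SCP's discovery and counter data.

```python
# Purely illustrative end-to-end sketch of the method of claim 1:
# receive -> identify candidate NFs -> retrieve pending counters ->
# determine priority -> identify target -> route. All names are
# hypothetical and chosen for readability only.


def route_connection_request(request, nf_registry, counters, send):
    # Steps 1-2: receive the connection request and identify the set
    # of network functions able to serve it.
    candidates = nf_registry.get(request["service"], [])
    if not candidates:
        raise LookupError("no network function serves this request")
    # Step 3: retrieve pending connection request counter data.
    pending = {nf: counters.get(nf, 0) for nf in candidates}
    # Steps 4-5: the lowest pending count implies the highest
    # connection traffic flow priority; identify that target NF.
    target = min(pending, key=pending.get)
    # Step 6: route the connection request to the target NF.
    send(target, request)
    return target


# Example with two candidate instances and differing backlogs.
registry = {"voice": ["nf-1", "nf-2"]}
counters = {"nf-1": 5, "nf-2": 1}
routed = []
target = route_connection_request(
    {"service": "voice"}, registry, counters,
    send=lambda nf, req: routed.append(nf),
)
print(target)  # nf-2, the instance with the fewest pending requests
```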

Documents

Application Documents

# Name Date
1 202321044640-STATEMENT OF UNDERTAKING (FORM 3) [04-07-2023(online)].pdf 2023-07-04
2 202321044640-PROVISIONAL SPECIFICATION [04-07-2023(online)].pdf 2023-07-04
3 202321044640-FORM 1 [04-07-2023(online)].pdf 2023-07-04
4 202321044640-FIGURE OF ABSTRACT [04-07-2023(online)].pdf 2023-07-04
5 202321044640-DRAWINGS [04-07-2023(online)].pdf 2023-07-04
6 202321044640-FORM-26 [06-09-2023(online)].pdf 2023-09-06
7 202321044640-Proof of Right [17-10-2023(online)].pdf 2023-10-17
8 202321044640-ORIGINAL UR 6(1A) FORM 1 & 26)-301123.pdf 2023-12-07
9 202321044640-ENDORSEMENT BY INVENTORS [10-06-2024(online)].pdf 2024-06-10
10 202321044640-DRAWING [10-06-2024(online)].pdf 2024-06-10
11 202321044640-CORRESPONDENCE-OTHERS [10-06-2024(online)].pdf 2024-06-10
12 202321044640-COMPLETE SPECIFICATION [10-06-2024(online)].pdf 2024-06-10
13 Abstract1.jpg 2024-07-06
14 202321044640-FORM 3 [31-07-2024(online)].pdf 2024-07-31
15 202321044640-Request Letter-Correspondence [13-08-2024(online)].pdf 2024-08-13
16 202321044640-Power of Attorney [13-08-2024(online)].pdf 2024-08-13
17 202321044640-Form 1 (Submitted on date of filing) [13-08-2024(online)].pdf 2024-08-13
18 202321044640-Covering Letter [13-08-2024(online)].pdf 2024-08-13
19 202321044640-CERTIFIED COPIES TRANSMISSION TO IB [13-08-2024(online)].pdf 2024-08-13
20 202321044640-FORM-9 [19-11-2024(online)].pdf 2024-11-19
21 202321044640-FORM 18A [19-11-2024(online)].pdf 2024-11-19
22 202321044640-FER.pdf 2024-12-11
23 202321044640-FORM 3 [29-01-2025(online)].pdf 2025-01-29
24 202321044640-FER_SER_REPLY [05-02-2025(online)].pdf 2025-02-05
25 202321044640-PatentCertificate09-06-2025.pdf 2025-06-09
26 202321044640-IntimationOfGrant09-06-2025.pdf 2025-06-09

Search Strategy

1 SearchHistory4640E_10-12-2024.pdf

ERegister / Renewals

3rd: 08 Sep 2025

From 04/07/2025 - To 04/07/2026