Abstract: The present disclosure relates to a method and a system for automatically diverting a network traffic to a time efficient path. The disclosure encompasses fetching, at a Service Communication Proxy performance automated intelligence (SCP-pAI) engine [301], a set of statistics data associated with traffic route paths; identifying one or more time efficient paths based on the set of statistics data; determining a target time efficient path from the one or more time efficient paths based on the set of statistics data; and automatically facilitating routing of the network traffic via the target time efficient path. [Figure 4]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR AUTOMATICALLY DIVERTING A NETWORK TRAFFIC TO A TIME EFFICIENT PATH”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR AUTOMATICALLY DIVERTING A NETWORK TRAFFIC
TO A TIME EFFICIENT PATH
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to systems and methods for automatically diverting a network traffic to a time efficient path.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third-generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] In a 5G cellular communication, a re-routing of the traffic load is needed due to various factors such as the latency factor, load factor, bandwidth factor, etc., to make sure that the communication network is stable and running smoothly. A Service Communication Proxy (SCP) is a solution deployed alongside the 5G Network Functions (NF) for providing routing control, resiliency, and observability to the core network. The rerouting of the traffic load is done from one SCP to another SCP.
[0005] Further, over the period of time, various solutions have been developed to improve the performance of communication devices and to provide a trained model for intelligent route recommendation in a cellular communication. However, there are certain challenges with the existing solutions. In the existing art, the rerouting of the traffic load is done by manually checking and verifying the existing traffic load on one or more SCPs installed at various geographic locations to understand the latency factor, load factor, bandwidth factor, etc. at the one or more SCPs, so as to identify an ideal SCP to divert the traffic load. This manual checking and verification of the one or more SCPs installed at various geographic locations is a time-consuming, cumbersome, and error-prone process. Further, it is difficult and cumbersome to analyze all the parameters required to select an ideal route to divert the traffic load.
[0006] Thus, there exists an imperative need in the art for a trained model for intelligent route recommendation in a cellular communication, to make sure that the communication network is stable and running smoothly, and for a solution for automatically diverting a network traffic to a time efficient path, which the present disclosure aims to address.
SUMMARY OF THE DISCLOSURE
[0007] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for automatically diverting a network traffic to a time efficient path. The method comprises fetching, by a fetching unit, at a Service Communication Proxy performance automated intelligence (SCP-pAI) engine, a set of statistics data associated with each path from a set of traffic route paths between a first node and a target node. The method further comprises identifying, by an identification unit, at the SCP-pAI engine, one or more time efficient paths between the first node and the target node based on the set of statistics data. The method further comprises determining, by a determination unit, at the SCP-pAI engine, a target time efficient path from the one or more time efficient paths between the first node and the target node based on the set of statistics data; and automatically facilitating routing, at the SCP-pAI engine, by a routing unit, of the network traffic between the first node and the target node via the target time efficient path.
[0009] In an exemplary aspect of the present disclosure, the method further comprises identifying, by a Service Communication Proxy (SCP), the network traffic between the first node and the target node; identifying, by the SCP, a predetermined traffic route path based on the network traffic between the first node and the target node; identifying, by the SCP, the set of traffic route paths between the first node and the target node; and providing, by the SCP, to the SCP-pAI engine, the set of traffic route paths.
[0010] In an exemplary aspect of the present disclosure, in the method, the set of network statistics data comprises at least one of a network statistic associated with each path of the set of traffic route paths, a performance statistic associated with each path of the set of traffic route paths and a system statistic associated with each path of the set of traffic route paths.
[0011] In an exemplary aspect of the present disclosure, in the method, the SCP-pAI engine is an artificial-intelligence based engine trained based on historical statistical data.
[0012] In an exemplary aspect of the present disclosure, in the method, the network statistics is at least one of a Round Trip Time (RTT) statistics associated with each path from the set of traffic route paths, an available bandwidth statistics associated with the each path from the set of traffic route paths, wherein the performance statistics is at least a current load statistics associated with the each path from the set of traffic route paths, and wherein the system statistics is at least one of a Random-Access Memory (RAM) statistics associated with the each path from the set of traffic route paths, a Central Processing Unit (CPU) statistics associated with the each path from the set of traffic route paths, and a storage utilisation statistics associated with the each path from the set of traffic route paths.
[0013] In an exemplary aspect of the present disclosure, in the method, identifying by the identification unit at the SCP-pAI engine the one or more time efficient paths between the first node and the target node further comprises generating, by the identification unit, a sorted set of traffic route paths based on sorting the set of traffic route paths between the first node and the target node in a predefined order, wherein the predefined order is based on the network statistics associated with the each path from the set of traffic route paths; determining, by the identification unit, at least one of the current load statistics associated with the each path from the sorted set of traffic route paths, a maximum supported traffic load associated with the each path from the sorted set of traffic route paths and a traffic requirement of at least one of the first node and the target node, and identifying, by the identification unit, the one or more time efficient paths between the first node and the target node based on at least the maximum supported traffic load associated with the each path from the sorted set of traffic route paths.
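By way of a non-limiting illustration only, the identification step described above may be sketched in Python as follows. The record name PathStats, its field names and the choice of RTT as the sorting key are assumptions made for readability and are not part of the claimed method.

from dataclasses import dataclass

@dataclass
class PathStats:
    path_id: str
    rtt_ms: float            # network statistics: round trip time
    bandwidth_mbps: float    # network statistics: available bandwidth
    current_load_tps: float  # performance statistics: current load
    max_load_tps: float      # maximum supported traffic load of the path

def identify_time_efficient_paths(paths, required_tps):
    """Sort the set of traffic route paths in a predefined order (here,
    ascending RTT) and keep only the paths whose spare capacity can absorb
    the traffic requirement of the first and target nodes."""
    ordered = sorted(paths, key=lambda p: p.rtt_ms)
    return [p for p in ordered
            if p.max_load_tps - p.current_load_tps >= required_tps]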
[0014] In an exemplary aspect of the present disclosure, the method further comprises computing a latency associated with each path from the set of traffic route paths, wherein the target time efficient path from the one or more time efficient paths between the first node and the target node is determined by the determination unit based on the latency associated with each path from the set of traffic route paths.
[0015] In an exemplary aspect of the present disclosure, in the method, the automatically facilitating routing of the network traffic between the first node and the target node via the target time efficient path
is further based on initiating, by the SCP-pAI engine, an update registration procedure to update one or more registration details associated with the network traffic and the predetermined traffic route path between the first node and the target node.
[0016] In an exemplary aspect of the present disclosure, the method further comprises identifying, by the identification unit, at the SCP-pAI engine, a latency fluctuation associated with one or more traffic route paths from the set of traffic route paths; and generating, by an alert unit, at the SCP-pAI engine, one or more alerts based on the identified latency fluctuation.
[0017] In an exemplary aspect of the present disclosure, in the method, the update registration procedure comprises transmitting, by a transceiver unit, to the SCP, a trigger to update the one or more registration details; and re-registering, to a controller, the one or more registration details based on the trigger.
[0018] In an exemplary aspect of the present disclosure, in the method, the update registration procedure further comprises sending, by the controller, to a Network Repository Function (NRF), an update registration request; and sending, by the controller, a broadcast message comprising the one or more registration details to all SCPs.
[0019] In an exemplary aspect of the present disclosure, in the method, the update registration procedure further comprises sending, by the NRF, to at least one of the first node and the target node, the one or more registration details.
[0020] Another aspect of the present disclosure may relate to a system for automatically diverting a network traffic to a time efficient path. The system comprises a Service Communication Proxy performance automated intelligence (SCP-pAI) engine. The SCP-pAI engine further comprises a fetching unit which is configured to fetch a set of network statistics data associated with each path from a set of traffic route paths between a first node and a target node. The SCP-pAI engine further comprises an identification unit connected to at least the fetching unit. The identification unit is configured to identify one or more time efficient paths between the first node and the target node based on the set of network statistics data. The SCP-pAI engine further comprises a determination unit connected to at least the identification unit. The determination unit is configured to determine a target time efficient path from the one or more time efficient paths between the first node and the target node based on the set of network statistics data. The SCP-pAI engine further comprises a routing unit connected to at least the determination unit, and the routing unit is configured to automatically facilitate routing of the network traffic between the first node and the target node via the target time efficient path.
[0021] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for automatically diverting a network traffic to a time efficient path, the instructions including executable code which, when executed by one or more units of a system having a Service Communication Proxy performance automated intelligence (SCP-pAI) engine, causes: a fetching unit to fetch a set of network statistics data associated with each path from a set of traffic route paths between a first node and a target node; an identification unit to identify one or more time efficient paths between the first node and the target node based on the set of network statistics data; a determination unit to determine a target time efficient path from the one or more time efficient paths between the first node and the target node based on the set of network statistics data; and a routing unit to automatically facilitate routing of the network traffic between the first node and the target node via the target time efficient path.
OBJECTS OF THE DISCLOSURE
[0022] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0023] It is an object of the present disclosure to provide a system and a method for automatically diverting a network traffic to a time efficient path.
[0024] It is an object of the present disclosure to provide a solution for fetching network statistics associated with each path of the set of traffic route paths, wherein the one or more network statistics comprise at least a network statistics associated with said each path of the set of traffic route paths, a performance statistics associated with said each path of the set of traffic route paths and a system statistics associated with said each path of the set of traffic route paths, and for identifying a time efficient path between the first node and the target node based on the one or more network statistics.
[0025] It is an object of the present disclosure to provide a solution for determining a target time efficient path from the one or more time efficient paths between the first node and the target node based on the one or more network statistics, and for automatically diverting the network traffic between the first node and the target node to the target time efficient path based on the one or more network statistics.
[0026] It is an object of the present disclosure to provide a system and a method for route recommendation using a trained model in a cellular communication to make sure that the communication network is stable and running smoothly.
[0027] It is an object of the present disclosure to provide a solution that provides a cost-effective route recommendation in a cellular communication.
[0028] It is an object of the present disclosure to provide a solution to a time-saving route recommendation in a cellular communication.
[0029] It is an object of the present disclosure to provide, using AI, a real-time recommendation of a time efficient path from an NF consumer to an NF producer involving various SCP proxies, to improve the overall latency of the 5G core network.
[0030] It is an object of the present disclosure to provide a near real-time recommendation given by the SCP-pAI engine based on latency, capacity, bandwidth, system resources and current utilization.
[0031] It is an object of the present disclosure to provide re-mapping of edge sites to an SCP site, as may be required for time efficient routing.
DESCRIPTION OF THE DRAWINGS
[0032] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0033] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0034] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0035] Fig. 3 illustrates an exemplary block diagram of a system for automatically diverting a network traffic to a time efficient path, in accordance with exemplary implementations of the present disclosure.
[0036] Fig. 4 illustrates a method flow diagram for automatically diverting a network traffic to a time efficient path, in accordance with exemplary implementations of the present disclosure.
[0037] Fig. 5 illustrates a non-limiting exemplary scenario block diagram of a system [500] for a trained model for route recommendation in a cellular communication, in accordance with exemplary embodiments of the present disclosure.
[0038] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0039] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0040] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0041] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0042] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A
process is terminated when its operations are completed but could have additional steps not included in a figure.
[0043] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0044] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0045] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0046] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random
access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0047] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0048] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0049] As used herein the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0050] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing methods and systems for automatically diverting a network traffic to a time efficient path.
[0051] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner obvious to the person skilled in the art for implementing the features of the present disclosure.
[0052] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0053] Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0054] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0055] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates delegated discovery, message forwarding and routing to a destination NF/NF service, message forwarding and routing to a next SCP, communication security (such as authorization of the NF Service Consumer to access the NF Service Producer API), load balancing, monitoring, overload control, etc. between Network Function (NF) services.
[0056] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0057] Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0058] Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0059] Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0060] Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions. The NRF [120] receives discovery requests from the SCP [110] and provides the information of the discovered NF instances to the SCP [110]; maintains the NF profile of available NF instances and their supported services; maintains the SCP profile of available SCP [110] instances; supports SCP [110] discovery by the SCP [110] instances; notifies about newly registered/updated/deregistered SCP [110] instances along with their potential NF services to the subscribed SCPs [110]; and maintains the health status of the SCP [110]. Here, the SCP [110] profile may include, but is not limited to, an SCP ID; an indication of the profile of the SCP [110]; SCP [110] capacity information; SCP [110] load information; SCP [110] priority; the NF/sets of NFs served by the SCP [110], etc.
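By way of a non-limiting illustration only, the SCP profile maintained by the NRF [120] may be represented as a simple record such as the following Python sketch; the field names and example values are hypothetical and do not reproduce the normative 3GPP data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ScpProfile:
    scp_id: str                                           # SCP ID
    capacity: int                                         # SCP capacity information
    load_percent: float                                   # SCP load information
    priority: int                                         # SCP priority
    served_nfs: List[str] = field(default_factory=list)   # NF / sets of NFs served by the SCP

# The NRF may keep such profiles keyed by SCP ID and serve them in response
# to discovery requests from other SCP instances.
scp_registry = {
    "scp-01": ScpProfile("scp-01", capacity=10000, load_percent=42.0,
                         priority=1, served_nfs=["SMF", "AMF"]),
}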
[0061] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0062] Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0063] Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0064] User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0065] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include, but are not limited to, Internet services and private data network related services.
[0066] Fig. 2 illustrates an exemplary block diagram of a computing device [1000] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [1000] may also implement a method for automatically diverting a network traffic to a time efficient path utilising the system. In another implementation, the computing device [1000] itself implements the method for automatically diverting a network traffic to a time efficient path using one or more units configured within the computing device [1000], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0067] The computing device [1000] may include a bus [1002] or other communication mechanism for communicating information, and a hardware processor [1004] coupled with the bus [1002] for processing information. The hardware processor [1004] may be, for example, a general-purpose microprocessor. The computing device [1000] may also include a main memory [1006], such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus [1002] for storing information and instructions to be executed by the processor [1004]. The main memory [1006] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [1004]. Such instructions, when stored in non-transitory storage media accessible to the processor [1004], render the computing device [1000] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [1000] further includes a read only memory (ROM) [1008] or other static storage device coupled to the bus [1002] for storing static information and instructions for the processor [1004].
[0068] A storage device [1010], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [1002] for storing information and instructions. The computing device [1000] may be coupled via the bus [1002] to a display [1012], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [1014], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [1002] for communicating information and command selections to the processor [1004]. Another type of user input device may be a cursor controller [1016], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [1004], and for controlling cursor movement on the display [1012]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0069] The computing device [1000] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [1000] causes or programs the computing device [1000] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [1000] in response to the processor [1004] executing one or more sequences of one or more instructions contained in the main memory [1006]. Such instructions may be read into the main memory [1006] from another storage medium, such as the storage device [1010]. Execution of the sequences of instructions contained in the main memory [1006] causes the processor [1004] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0070] The computing device [1000] also may include a communication interface [1018] coupled to the bus [1002]. The communication interface [1018] provides a two-way data communication coupling to a network link [1020] that is connected to a local network [1022]. For example, the communication interface [1018] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [1018] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [1018] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0071] The computing device [1000] can send messages and receive data, including program code, through the network(s), the network link [1020] and the communication interface [1018]. In the Internet example, a server [1030] might transmit a requested code for an application program through the Internet [1028], the ISP [1026], the host [1024], the local network [1022] and the communication interface [1018]. The received code may be executed by the processor [1004] as it is received, and/or stored in the storage device [1010], or other non-volatile storage for later execution.
[0072] Referring to Figure 3, an exemplary block diagram of a system [300] for automatically diverting a network traffic to a time efficient path is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one Service Communication Proxy performance automated intelligence (SCP-pAI) engine [301] comprising at least one fetching unit [302], at least one identification unit [303], at least one determination unit [304], and at least one routing unit [305]. The system [300] also comprises at least one Service Communication Proxy (SCP) [306], at least one alert unit [307], at least one transceiver unit [308], at least one Service Communication Proxy (SCP) controller [309], and a Network Repository Function (NRF) [310]. The system [300] is responsible for automatically diverting a network traffic to a time efficient path in order to establish the connection between a first node [300f] and a target node [300t]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in Fig. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation of the system [300], the at least one SCP [306] may be connected, singly or in multiples, to at least one SCP-pAI engine [301] to implement the features of the present disclosure. The system [300] may be a part of the user device, or may be independent of, but in communication with, the user device (also referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the SCP-pAI engine [301], the NRF (as shown in Fig. 1), the SCP [306], and the SCP controller [309] are separate entities, at different sites, connected with each other via one or more networks. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0073] The system [300] is configured for automatically diverting a network traffic to a time efficient path, with the help of the interconnection between the components/units of the system [300].
[0074] The SCP-pAI engine [301] is an artificial-intelligence based engine trained based on historical statistical data. The SCP-pAI engine [301] is in communication with the SCP [306], and the SCP [306] is configured to: identify the network traffic between the first node [300f] and the target node [300t]; identify a predetermined traffic route path based on the network traffic between the first node [300f] and the target node [300t]; identify the set of traffic route paths between the first node [300f] and the target node [300t]; and provide, to the SCP-pAI engine [301], the set of traffic route paths.
[0075] The fetching unit [302] is configured to fetch a set of network statistics data associated with each path from a set of traffic route paths between the first node [300f] and the target node [300t]. It is to be noted that the set of network statistics data comprises at least one of a network statistics associated with each path of the set of traffic route paths, a performance statistics associated with each path of the set of traffic route paths, and a system statistics associated with each path of the set of traffic route paths. It is to be noted that the network statistics is at least one of a round trip time (RTT) statistics associated with each path from the set of traffic route paths and an available bandwidth statistics associated with the each path from the set of traffic route paths; the performance statistics is at least a current load statistics associated with the each path from the set of traffic route paths; and the system statistics is at least one of a Random-Access Memory (RAM) statistics associated with the each path from the set of traffic route paths, a Central Processing Unit (CPU) statistics associated with the each path from the set of traffic route paths, and a storage utilisation statistics associated with the each path from the set of traffic route paths.
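By way of a non-limiting illustration only, the fetching unit [302] may collect such statistics by periodically polling the SCP proxies, as in the following Python sketch. The proxy addresses, the "/stats" endpoint, the polling interval and the field names are assumptions; in practice the reporting interval is governed by operator policy.

import time
import requests  # assumption: the third-party "requests" package is available

SCP_PROXIES = ["http://scp-egress-1:8080", "http://scp-ingress-1:8080"]  # hypothetical addresses
POLL_INTERVAL_S = 30  # assumed interval; the real value follows operator policy

def fetch_statistics(stats_store):
    """Fetch network, performance and system statistics from each SCP proxy
    and append them to an in-memory store (a database in practice)."""
    for proxy in SCP_PROXIES:
        resp = requests.get(f"{proxy}/stats", timeout=5)  # hypothetical endpoint
        resp.raise_for_status()
        # assumed keys: rtt_ms, bandwidth_mbps, current_load_tps,
        # ram_percent, cpu_percent, storage_percent
        stats_store.setdefault(proxy, []).append(resp.json())

def run_fetching_unit(stats_store):
    """Poll the proxies at a regular interval, as the fetching unit [302] does."""
    while True:
        fetch_statistics(stats_store)
        time.sleep(POLL_INTERVAL_S)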
[0076] The identification unit [303] is connected to the fetching unit [302]. The identification unit [303] is configured to identify one or more time efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data. In order to identify the one or more time efficient paths between the first node [300f] and the target node [300t], the identification unit [303] is further configured to generate a sorted set of traffic route paths based on sorting the set of traffic route paths between the first node [300f] and the target node [300t] in a predefined order. It is further noted that the predefined order is based on the one or more network statistics associated with the each path from the set of traffic route paths. The identification unit [303] is further configured to determine at least one of the current load statistics associated with the each path from the sorted set of traffic route paths, a maximum supported traffic load associated with the each path from the sorted set of traffic route paths, and a traffic requirement of at least one of the first node [300f] and the target node [300t]. And, the identification unit [303] is further configured to identify the one or more time efficient paths between the first node [300f] and the target node [300t] based on at least the maximum supported traffic load associated with the each path from the sorted set of traffic route paths. The identification unit [303] is further configured to identify a latency fluctuation associated with one or more traffic route paths from the set of traffic route paths. The alert unit [307] is configured to generate one or more alerts based on the identified latency fluctuation.
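By way of a non-limiting illustration only, the latency fluctuation check feeding the alert unit [307] may be sketched in Python as follows; the standard-deviation test and the threshold value are assumptions rather than a prescribed detection rule.

import statistics

def has_latency_fluctuation(rtt_samples_ms, threshold_ms=20.0):
    """Flag a path whose recent RTT samples deviate strongly from their mean;
    the threshold is an assumed, operator-tunable value."""
    if len(rtt_samples_ms) < 2:
        return False
    return statistics.pstdev(rtt_samples_ms) > threshold_ms

def raise_alerts(rtt_history):
    """rtt_history: dict mapping a path ID to its RTT samples in milliseconds."""
    return [f"latency fluctuation on path {path_id}"
            for path_id, samples in rtt_history.items()
            if has_latency_fluctuation(samples)]

print(raise_alerts({"P1": [4.1, 4.3, 4.0], "P2": [7.0, 55.0, 9.0, 80.0]}))  # alerts on P2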
[0077] The determination unit [304] is connected to the identification unit [303], and the determination unit [304] is configured to determine a target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data. The determination unit [304] is further configured to compute a latency associated with each path from the set of traffic route paths. The target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] is determined by the determination unit [304] based on the latency associated with each path from the set of traffic route paths.
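By way of a non-limiting illustration only, the determination unit [304] may compute a per-path latency from the collected RTT samples and pick the lowest-latency candidate as the target time efficient path, as sketched below; the averaging window is an assumption.

def compute_latency_ms(rtt_samples_ms, window=10):
    """Estimate the latency of a path as the average of its most recent RTT
    samples; the window size is an assumption."""
    recent = rtt_samples_ms[-window:]
    return sum(recent) / len(recent)

def determine_target_path(candidate_paths, rtt_history):
    """candidate_paths: path IDs already identified as time efficient;
    rtt_history: dict mapping a path ID to its RTT samples in milliseconds."""
    return min(candidate_paths, key=lambda p: compute_latency_ms(rtt_history[p]))

print(determine_target_path(["P1", "P2"], {"P1": [4.0, 4.2], "P2": [7.1, 6.9]}))  # -> P1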
[0078] The routing unit [305] is connected to the determination unit [304], and the routing unit [305] is configured to automatically facilitate routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path. It is important to note that the routing unit [305] is configured to automatically facilitate routing based on initiating, by the SCP-pAI engine [301], an update registration procedure to update one or more registration details associated with the network traffic and the predetermined traffic route path between the first node [300f] and the target node [300t]. In order to perform the update registration procedure, the transceiver unit [308] of the system [300] is configured to transmit, to the SCP [306], a trigger to update the one or more registration details, and to re-register, to the controller [309], the one or more registration details based on the trigger. Further, in order to perform the update registration procedure, the controller [309] is configured to: send, to the Network Repository Function (NRF) [310], an update registration request; and send a broadcast message comprising the one or more registration details to all SCPs [306]. Also, to perform the update registration procedure, the NRF [310] is configured to send, to at least one of the first node [300f] and the target node [300t], the one or more registration details.
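By way of a non-limiting illustration only, the update registration sequence described above (trigger to the SCP [306], re-registration with the controller [309], update request to the NRF [310], broadcast to all SCPs, and notification of the affected nodes) may be traced with the following Python sketch; the class and method names are hypothetical and only mirror the roles of the respective entities.

class Scp:
    """A simplified SCP proxy holding its registration details."""
    def __init__(self, scp_id):
        self.scp_id = scp_id
        self.details = {}
    def on_trigger(self, new_details):
        # trigger received from the SCP-pAI engine via the transceiver unit
        self.details.update(new_details)
        return dict(self.details)

class Nrf:
    """A simplified NRF that records registrations and notifies nodes."""
    def __init__(self):
        self.registry = {}
    def update_registration(self, scp_id, details):
        self.registry[scp_id] = details
    def notify(self, node, details):
        print(f"NRF -> {node}: updated registration {details}")

class Controller:
    """A simplified SCP controller that propagates updated registrations."""
    def __init__(self, nrf, scps):
        self.nrf = nrf
        self.scps = scps
    def re_register(self, scp_id, details):
        self.nrf.update_registration(scp_id, details)  # update request to the NRF
        for scp in self.scps:                          # broadcast to all SCPs
            scp.details.update(details)

# usage: divert traffic by pushing updated registration data end to end
scps = [Scp("scp-1"), Scp("scp-2")]
nrf = Nrf()
controller = Controller(nrf, scps)
details = scps[0].on_trigger({"locality": "edge-site-7", "nf_types": ["SMF"]})
controller.re_register("scp-1", details)
for node in ("first-node", "target-node"):
    nrf.notify(node, details)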
[0079] Referring to Figure 4, an exemplary method flow diagram [400] for automatically diverting a network traffic to a time efficient path, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in Figure 4, the method [400] starts at step [402].
[0080] At step [404], the method [400] comprises fetching, by a fetching unit [302], at a Service Communication Proxy performance automated intelligence (SCP-pAI) engine [301], a set of statistics data associated with each path from a set of traffic route paths between a first node [300f] and a target node [300t]. It is to be noted that the SCP-pAI engine [301] is an artificial-intelligence based engine trained based on historical statistical data.
[0081] At step [406], the method [400] comprises identifying, by an identification unit [303], at the SCP-pAI engine [301], one or more time efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data. It is to be noted that the set of network statistics data comprises at least one of a network statistic associated with each path of the set of traffic route paths, a performance statistic associated with each path of the set of traffic route paths and a system statistic associated with each path of the set of traffic route paths. It is to be noted that the network statistics is at least one of a round trip time (RTT) statistics associated with each path from the set of traffic route paths and an available bandwidth statistics associated with the each path from the set of traffic route paths, wherein the performance statistics is at least a current load statistics associated with the each path from the set of traffic route paths, and wherein the system statistics is at least one of a Random-Access Memory (RAM) statistics associated with the each path from the set of traffic route paths, a Central Processing Unit (CPU) statistics associated with the each path from the set of traffic route paths, and a storage utilisation statistics associated with the each path from the set of traffic route paths. It is further noted that, for identifying, by the identification unit [303] at the SCP-pAI engine [301], the one or more time efficient paths between the first node [300f] and the target node [300t], the method further comprises the step of generating, by the identification unit [303], a sorted set of traffic route paths based on sorting the set of traffic route paths between the first node [300f] and the target node [300t] in a predefined order, wherein the predefined order is based on the network statistics associated with the each path from the set of traffic route paths; the step of determining, by the identification unit [303], at least one of the current load statistics associated with the each path from the sorted set of traffic route paths, a maximum supported traffic load associated with the each path from the sorted set of traffic route paths and a traffic requirement of at least one of the first node [300f] and the target node [300t]; and the step of identifying, by the identification unit [303], the one or more time efficient paths between the first node [300f] and the target node [300t] based on at least the maximum supported traffic load associated with the each path from the sorted set of traffic route paths.
[0082] At step [408], the method [400] comprises determining, by a determination unit [304], at the SCP-pAI engine [301], a target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] based on the set of statistics data.
[0083] At step [410], the method [400] comprises automatically facilitating routing, at the SCP-pAI engine [301], by a routing unit [305], of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path. It is to be noted that the automatically facilitating routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path is further based on initiating, by the SCP-pAI engine [301], an update registration procedure to update one or more registration details associated with the network traffic and the predetermined traffic route path between the first node [300f] and the target node [300t]. It is to be noted that the update registration procedure comprises transmitting, by a transceiver unit [308], to the SCP [306], a trigger to update the one or more registration details; and re-registering, to a controller [309], the one or more registration details based on the trigger. It is further noted that the update registration procedure further comprises sending, by the controller [309], to a Network Repository Function (NRF) [310], an update registration request; and sending, by the controller [309], a broadcast message comprising the one or more registration details to all SCPs [306]. It is further important to note that the update registration procedure further comprises sending, by the NRF [310], to at least one of the first node [300f] and the target node [300t], the one or more registration details.
[0084] The method further comprises the step of identifying, by the Service Communication Proxy (SCP) [306], the network traffic between the first node [300f] and the target node [300t]. The method further comprises the step of identifying, by the SCP [306], a predetermined traffic route path based on the network traffic between the first node [300f] and the target node [300t]. The method further comprises the step of identifying, by the SCP [306], the set of traffic route paths between the first node [300f] and the target node [300t]. And, the method further comprises the step of providing, by the SCP [306], to the SCP-pAI engine [301], the set of traffic route paths.
[0085] The method further comprises computing a latency associated with each path from the set of traffic route paths, wherein the target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] is determined by the determination unit [304] based on the latency associated with each path from the set of traffic route paths.
[0086] The method further comprises the step of identifying, by the identification unit [303], at the SCP-pAI engine [301], a latency fluctuation associated with one or more traffic route paths from the set of traffic route paths; and generating, by an alert unit [307], at the SCP-pAI engine [301], one or more alerts based on the identified latency fluctuation.
[0087] Thereafter, the method terminates at step [412].
[0088] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for automatically diverting a network traffic to a time efficient path, the instructions including executable code which, when executed by one or more units of a system [300] having a Service Communication Proxy performance automated intelligence (SCP-pAI) engine [301], causes: a fetching unit [302] to fetch a set of network statistics data associated with each path from a set of traffic route paths between a first node [300f] and a target node [300t]; an identification unit [303] to identify one or more time efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data; a determination unit [304] to determine a target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data; and a routing unit [305] to automatically facilitate routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path.
[0089] Now, referring to Figure 5, a non-limiting exemplary scenario block diagram of a system [500] for a trained model for route recommendation in a cellular communication is shown, in accordance with the exemplary embodiments of the present disclosure. The system [500] comprises at least one consumer NF/NF consumer [510], a plurality of SCP egress/SCPEgress [520], a plurality of SCPIngress [530], and one or more producer NF/NF producers [540]. In an embodiment, the at least one consumer NF/NF consumer [510], the plurality of SCP egress/SCPEgress [520], the plurality of SCPIngress [530], and the one or more producer NF/NF producers [540] are located at one or more locations; each of the aforementioned units can be at one single location or spread across various locations. The plurality of SCPEgress [520] intercepts outgoing messages and encrypts the outgoing messages before sending them to the plurality of SCPIngress [530]. The plurality of SCPIngress [530] intercepts incoming messages and decrypts the incoming messages before sending them to the one or more producer NF/NF producers [540]. Also, all of the components/units of the system [500] are assumed to be connected to each other unless otherwise indicated below. Also, in Fig. 5 only a few units are shown; however, the system [500] may comprise multiple such units, or the system [500] may comprise any such number of said units, as required to implement the features of the present disclosure. In an implementation, the system [500] may reside in a server or a network entity.
[0090] In order to reroute the traffic data and alert a Network Management System (NMS), the at least one NF consumer [510] and the one or more NF producers [540] of the system [500] are configured to receive data from one or more sources. The one or more sources may include client-based network functions, local servers, cloud-based servers and the like. In an embodiment, traffic-based data, signal data, request data and user data may be received from the Network Functions at the client level. In an embodiment, the data associated with the pattern of traffic, historical data, the occurrence of events that impact the traffic at the network, and the like may be received from the network functions, local servers and the cloud-based servers.
[0091] Next, the SCP-pAI engine [301] (as shown in Fig. 3) in the system [500] is configured to analyse the received data, using Machine Learning based techniques, to identify and recommend a best possible route to reroute the traffic load at the network level.
[0092] The system [500] is configured to provide a trained model for intelligent route recommendation in a cellular communication, with the help of the interconnection between the components/units of the system [500]. During a cellular connection, the at least one NF consumer [510] tries to communicate with the one or more NF producers [540] via the plurality of SCPEgress [520] and the plurality of SCPIngress [530]. The system [500], via the SCP-pAI engine [301], analyses various factors such as a latency factor, a load factor, a bandwidth factor, etc. between the plurality of SCPEgress [520] and the plurality of SCPIngress [530] to determine the best possible route for the system [500] to distribute the traffic load intelligently. Once the best possible route is derived, the system [500] generates an alert to reroute the traffic load from the at least one NF consumer [510] to the one or more NF producers [540]. The decision to reroute the traffic load can be taken manually by a designated person, such as a network administrator, or automatically by the trained model. For example, the SCP proxies, i.e., the plurality of SCPEgress [520] and the plurality of SCPIngress [530], determine the current network, performance and system statistics at regular intervals. The network statistics may include Round Trip Time (RTT) and available bandwidth, the performance statistics may include the current load (transactions per second, TPS), and the system statistics may include RAM, CPU and storage utilisations. The SCP-pAI engine [301] fetches the data from these SCP proxies at regular intervals, where the interval depends upon operator policy, and stores the data in a storage/memory unit (not shown). Based on AI, the SCP-pAI engine [301] performs the steps of computing the possible network path(s) and the latency associated with each network path; sorting the paths in order of latency, for example [P1, P2, P3, P4], where each path has its maximum supported load, say [L1, L2, L3, L4] (not shown); and routing the network traffic based on the load capacity supported by each path and the traffic requirement of each NF type. The highest network traffic is assigned to the least-latency path P1. The second highest network traffic is then assigned to P1 if possible, otherwise to P2, and the remaining traffic is assigned similarly. The SCP-pAI engine [301] then compares the computed time efficient path with the current path and, in case of a mismatch and based on the operator policy, automatically diverts the traffic to the time efficient path by triggering the SCP proxies to update their registration details. The SCP-pAI engine [301], based on the traffic pattern, triggers the SCP proxies to update their registration data. The required inputs may include supported NF types, supported public land mobile network (PLMN), supported slice and locality. The SCP proxies, on receiving the trigger from the SCP-pAI engine [301], re-register with updated data to a controller [309] (as shown in Fig. 3). The controller [309], on receiving the updated registration from the SCP proxies, sends the updated registration data to the NRF [120] (as shown in Fig. 1) and broadcasts the updated registration data to all SCP proxies. The NRF [120] can then notify subscriber NFs of the changes.
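The path computation and traffic assignment steps described above may be illustrated by the following non-limiting Python sketch, in which paths are sorted by latency, traffic demands are assigned in decreasing order to the lowest-latency path with sufficient remaining capacity, and a hypothetical trigger_registration_update function stands in for the trigger sent to the SCP proxies; the identifiers Path and assign_traffic, and the numeric values, are illustrative assumptions and do not represent the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Path:
    """Hypothetical view of one traffic route path between SCP proxies."""
    name: str
    latency_ms: float      # latency computed from the fetched network statistics
    max_load_tps: float    # maximum supported load, e.g. L1..L4
    assigned_tps: float = 0.0

    def remaining(self) -> float:
        return self.max_load_tps - self.assigned_tps

def assign_traffic(paths, demands):
    """Assign each traffic demand (NF type -> required TPS) to a path.

    Paths are sorted by latency (P1 = least latency); demands are handled
    from highest to lowest, each going to the lowest-latency path that
    still has capacity, mirroring the ordering described in [0092].
    """
    ordered = sorted(paths, key=lambda p: p.latency_ms)
    plan = {}
    for nf_type, tps in sorted(demands.items(), key=lambda d: d[1], reverse=True):
        for path in ordered:
            if path.remaining() >= tps:
                path.assigned_tps += tps
                plan[nf_type] = path.name
                break
        else:
            plan[nf_type] = None  # no path has enough remaining capacity
    return plan

def trigger_registration_update(path_name):
    # Hypothetical stand-in for triggering the SCP proxies to re-register
    # (supported NF types, PLMN, slice, locality) with the controller.
    print(f"trigger re-registration for traffic moved to {path_name}")

# Example usage with illustrative numbers.
paths = [Path("P2", 25.0, 800.0), Path("P1", 10.0, 500.0),
         Path("P3", 40.0, 1000.0), Path("P4", 55.0, 300.0)]
current = {"AMF": "P3", "SMF": "P2"}            # hypothetical current routing
plan = assign_traffic(paths, {"AMF": 450.0, "SMF": 300.0})
for nf_type, target in plan.items():
    if target is not None and target != current.get(nf_type):
        trigger_registration_update(target)      # divert only on mismatch
```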
[0093] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0094] As is evident from the above, the present disclosure provides a technically advanced solution for automatically diverting a network traffic to a time efficient path and for providing the trained model for intelligent route recommendation in a cellular communication. Thus, the present solution provides: a stable and smoothly running network connection; an intelligent and trained model for selecting a stable and smoothly running route for establishing a connection for communication; improved connectivity; improved bandwidth allocation; a cost-effective solution for rerouting of the traffic load; and a time-efficient solution for rerouting of the traffic load.
[0095] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
We Claim:
1. A method [400] for automatically diverting a network traffic to a time efficient path, the method
[400] comprising:
- fetching, by a fetching unit [302], at a Service Communication Proxy performance automated intelligence (SCP-pAI) engine [301], a set of network statistics data associated with each path from a set of traffic route paths between a first node [300f] and a target node [300t];
- identifying, by an identification unit [303], at the SCP-pAI engine [301], one or more time-efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data;
- determining, by a determination unit [304], at the SCP-pAI engine [301], a target time efficient path from the one or more time-efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data; and
- automatically facilitating, by a routing unit [305] at the SCP-pAI engine [301], routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path.
2. The method [400] as claimed in claim 1, further comprising:
- identifying, by a Service Communication Proxy (SCP) [306], the network traffic between the first node [300f] and the target node [300t];
- identifying, by the SCP [306], a predetermined traffic route path based on the network traffic between the first node [300f] and the target node [300t];
- identifying, by the SCP [306], the set of traffic route paths between the first node [300f] and the target node [300t]; and
- providing, by the SCP [306], to the SCP-pAI engine [301], the set of traffic route paths.
3. The method [400] as claimed in claim 1, wherein the set of network statistics data comprises at least one of a network statistics associated with each path of the set of traffic route paths, a performance statistics associated with each path of the set of traffic route paths and a system statistics associated with each path of the set of traffic route paths.
4. The method [400] as claimed in claim 1, wherein the SCP-pAI engine [301] is an artificial-intelligence based engine trained based on a historical statistical data.
5. The method [400] as claimed in claim 3, wherein the network statistics is at least one of a Round Trip Time (RTT) statistics associated with each path from the set of traffic route paths, an
available bandwidth statistics associated with the each path from the set of traffic route paths, wherein the performance statistics is at least a current load statistics associated with the each path from the set of traffic route paths, and wherein the system statistics is at least one of a Random-Access Memory (RAM) statistics associated with the each path from the set of traffic route paths, a Central Processing Unit (CPU) statistics associated with the each path from the set of traffic route paths, and a storage utilisation statistics associated with the each path from the set of traffic route paths.
6. The method [400] as claimed in claim 5, wherein identifying by the identification unit [303] at
the SCP-pAI engine [301] the one or more time-efficient paths between the first node [300f]
and the target node [300t] further comprises:
- generating, by the identification unit [303], a sorted set of traffic route paths based on sorting the set of traffic route paths between the first node [300f] and the target node [300t] in a predefined order, wherein the predefined order is based on the network statistics associated with the each path from the set of traffic route paths;
- determining, by the identification unit [303], at least one of the current load statistics associated with the each path from the sorted set of traffic route paths, a maximum supported traffic load associated with the each path from the sorted set of traffic route paths and a traffic requirement of at least one of the first node [300f] and the target node [300t]; and
- identifying, by the identification unit [303], the one or more time-efficient paths between the first node [300f] and the target node [300t] based on at least the maximum supported traffic load associated with the each path from the sorted set of traffic route paths.
7. The method [400] as claimed in claim 1, wherein the method [400] further comprises computing a latency associated with each path from the set of traffic route paths, and wherein the target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] is determined by the determination unit [304] based on the latency associated with each path from the set of traffic route paths.
8. The method [400] as claimed in claim 1, wherein the automatically facilitating routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path is further based on initiating, by the SCP-pAI engine [301], an update registration procedure to update one or more registration details associated with the network traffic and a predetermined traffic route path between the first node [300f] and the target node [300t].
9. The method [400] as claimed in claim 1, further comprising:
- identifying, by the identification unit [303], at the SCP-pAI engine [301], a latency fluctuation associated with one or more traffic route paths from the set of traffic route paths; and
- generating, by an alert unit [307], at the SCP-pAI engine [301], one or more alerts based on the identified latency fluctuation.
10. The method [400] as claimed in claim 8, wherein the update registration procedure comprises:
- transmitting, by a transceiver unit [308], to the SCP [306], a trigger to update the one or more registration details; and
- re-registering, to a controller [309], the one or more registration details based on the trigger.
11. The method [400] as claimed in claim 10, wherein the update registration procedure further
comprises:
- sending, by the controller [309], to a Network Repository Function (NRF) [310], an update
registration request; and
- sending, by the controller [309], a broadcast message comprising the one or more registration details to all SCPs [306].
12. The method [400] as claimed in claim 11, wherein the update registration procedure further
comprises:
- sending, by the NRF [310], to at least one of the first node [300f] and the target node [300t], the one or more registration details.
13. A system [300] for automatically diverting a network traffic to a time efficient path, the system
[300] comprises:
- a Service Communication Proxy performance automated intelligence (SCP-pAI) engine
[301], the SCP-pAI engine [301] further comprising:
o a fetching unit [302] configured to:
• fetch a set of network statistics data associated with each path from a set of
traffic route paths between a first node [300f] and a target node [300t];
o an identification unit [303] connected to at least the fetching unit [302], the identification unit [303] configured to:
• identify one or more time-efficient paths between the first node [300f] and the
target node [300t] based on the set of network statistics data;
o a determination unit [304] connected to at least the identification unit [303], the
determination unit [304] configured to determine a target time efficient path from the one or more time-efficient paths between the first node [300f] and the target node [300t] based on the set of network statistics data; and
o a routing unit [305] connected to at least the determination unit [304], the routing unit [305] configured to automatically facilitate routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path.
14. The system [300] as claimed in claim 13, wherein the SCP-pAI engine [301] is in
communication with a Service Communication Proxy (SCP) [306], wherein the SCP [306] is
configured to:
- identify the network traffic between the first node [300f] and the target node [300t];
- identify a predetermined traffic route path based on the network traffic between the first node [300f] and the target node [300t];
- identify the set of traffic route paths between the first node [300f] and the target node [300t]; and
- provide to the SCP-pAI engine [301], the set of traffic route paths.
15. The system [300] as claimed in claim 13, wherein the set of network statistics data comprises at least one of a network statistics associated with each path of the set of traffic route paths, a performance statistics associated with each path of the set of traffic route paths and a system statistics associated with each path of the set of traffic route paths.
16. The system [300] as claimed in claim 13, wherein the SCP-pAI engine [301] is an artificial-intelligence based engine trained based on a historical statistical data.
17. The system [300] as claimed in claim 15, wherein the network statistics is at least one of a round trip time statistics associated with each path from the set of traffic route paths, an available bandwidth statistics associated with the each path from the set of traffic route paths, wherein the performance statistics is at least a current load statistics associated with the each path from the set of traffic route paths, and wherein the system statistics is at least one of a Random-Access Memory (RAM) statistics associated with the each path from the set of traffic route paths, a Central Processing Unit (CPU) statistics associated with the each path from the set of traffic route paths, and a storage utilisation statistics associated with the each path from the set of traffic route paths.
18. The system [300] as claimed in claim 17, wherein to identify the one or more time-efficient
paths between the first node [300f] and the target node [300t], the identification unit [303] is
further configured to:
- generate a sorted set of traffic route paths based on sorting the set of traffic route paths between the first node [300f] and the target node [300t] in a predefined order, wherein the predefined order is based on the network statistics associated with the each path from the set of traffic route paths;
- determine at least one of the current load statistics associated with the each path from the sorted set of traffic route paths, a maximum supported traffic load associated with the each path from the sorted set of traffic route paths and a traffic requirement of at least one of the first node [300f] and the target node [300t]; and
- identify, the one or more time-efficient paths between the first node [300f] and the target node [300t] based on at least the maximum supported traffic load associated with the each path from the sorted set of traffic route paths.
19. The system [300] as claimed in claim 13, wherein the determination unit [304] is further configured to compute a latency associated with each path from the set of traffic route paths, and wherein the target time efficient path from the one or more time efficient paths between the first node [300f] and the target node [300t] is determined by the determination unit [304] based on the latency associated with each path from the set of traffic route paths.
20. The system [300] as claimed in claim 14, wherein the routing unit [305] is configured to automatically facilitate routing of the network traffic between the first node [300f] and the target node [300t] via the target time efficient path, further based on initiating, by the SCP-pAI engine [301], an update registration procedure to update one or more registration details associated with the network traffic and the predetermined traffic route path between the first node [300f] and the target node [300t].
21. The system [300] as claimed in claim 13, wherein:
- the identification unit [303] is further configured to identify a latency fluctuation associated with one or more traffic route paths from the set of traffic route paths; and
- an alert unit [307] is configured to generate one or more alerts based on the identified latency fluctuation.
22. The system [300] as claimed in claim 20, wherein to perform the update registration procedure, the system [300] further comprises:
- a transceiver unit [308] configured to transmit, to the SCP [306], a trigger to update the one or more registration details; and
- re-register, to a controller [309], the one or more registration details based on the trigger.
23. The system [300] as claimed in claim 22, wherein to perform the update registration procedure,
the controller [309] is configured to:
- send to a Network Repository Function (NRF) [310], an update registration request, and
- send a broadcast message comprising the one or more registration details to all SCPs
[306].
24. The system [300] as claimed in claim 23, wherein to perform the update registration procedure:
- the NRF [310] is configured to send to at least one of the first node [300f] and the target
node [300t], the one or more registration details.