Abstract: SYSTEM AND METHOD FOR UNIFORM DISTRIBUTION OF ONE OR MORE DATA PACKETS The present disclosure relates to a system (108) and a method (400) for uniform distribution of one or more data packets. The system (108) includes a receiving unit (210) to receive one or more data packets from one or more User Equipment (UE) (102). The system (108) includes a retrieving unit (212) to retrieve an Internet Protocol (IP) address of the one or more data packets from one of a source address of a PDU of an IP header and a destination address of the IP header. The system (108) includes a hashing unit (214) to perform a hashing operation on each of the one or more retrieved IP addresses to obtain a hash value (406). The system (108) includes a mapping unit (218) to map the hash value (406) with at least one core from a plurality of cores (410), thereby distributing the one or more data packets across the plurality of cores (410). Ref. Fig. 2
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR UNIFORM DISTRIBUTION OF ONE OR MORE DATA PACKETS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to user plane packet processing and, more particularly, to a system and method for uniform distribution of data traffic using user plane packet processing on a multi-core Central Processing Unit (CPU).
BACKGROUND OF THE INVENTION
[0002] Exchange of information between end-user devices and network infrastructure is facilitated through the use of packets in current telecommunications networks. These packets contain data, such as voice, video, and other forms of digital information, which are transmitted over the network to reach their intended destinations.
[0003] Further, current network architectures are designed to handle different types of data traffic, such as control plane traffic and user plane traffic. The control plane is responsible for managing signaling and control functions, while the user plane is responsible for the transmission of user data packets.
[0004] The emergence of 5th Generation (5G) networks has increased the demand for high-speed, low-latency communication with respect to the user data packets. To achieve the high speed and the low latency, efficient handling of user data traffic is required. Thus, the need for significant changes to the network architecture arises.
[0005] The User Plane Function (UPF) technology is configured to handle the user data traffic within 5G and beyond network architectures. Hence, the evolution of the UPF plays an important role, since the UPF acts as a crucial component within the 5G core network, responsible for various tasks including packet forwarding, traffic management, Quality of Service (QoS) enforcement, and network slicing. Further, the UPF enables efficient data handling, supports diverse service requirements, and ensures a seamless user experience in highly dynamic and heterogeneous network environments.
[0006] Further, the UPF of a service provider is configured to host multiple user/UE sessions, and it applies different policies (rate limiting, barring, quota, forwarding policies, and the like) to user packets flowing through the network. In such a deployment, the UPF performs inter-networking between the 5G network and the data network, and may have an N3 interface defined towards the gNodeB and an N6 interface defined towards the data network. For a UPF running on a multi-core CPU/processor, multiple receive (Rx) queues are configured and associated with the multiple CPU cores to optimize performance. The Network Interface Card (NIC) receives the uplink/downlink traffic and, if the default 5-tuple Receive Side Scaling (RSS) based packet distribution is configured, uses the source IP, destination IP, source port, destination port, and protocol to decide on which Rx queue a packet will land, and classifies/distributes the traffic to the Rx queues accordingly. This leads to packets from the same user/UE landing on different Rx queues in the downlink, and to all traffic from the same gNodeB landing on the same Rx queue. The result is skewed load balancing and degraded performance: cache localization and efficiency are reduced, scalability across the CPU cores is reduced, and, moreover, control plane/data plane synchronization requires locking all the data plane cores, resulting in a sub-optimal implementation.
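The skew described above can be illustrated with a minimal Python sketch. The flows, addresses, and stand-in hash below are hypothetical (real NICs typically implement RSS with a Toeplitz hash): hashing the full 5-tuple may scatter one UE's flows across Rx queues, whereas hashing only the UE's IP address keeps them on one queue.

```python
import hashlib

NUM_QUEUES = 4

def toy_hash(data: bytes) -> int:
    # Stand-in for the NIC's RSS hash; real NICs typically use a Toeplitz hash.
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

# Three flows from the same UE: same source IP, different source ports.
ue_ip = "10.0.0.7"
flows = [(ue_ip, "8.8.8.8", sport, 443, 6) for sport in (40001, 40002, 40003)]

# Default 5-tuple RSS: the source port varies, so the flows may spread
# across several Rx queues.
five_tuple_queues = {toy_hash(repr(f).encode()) % NUM_QUEUES for f in flows}

# Hashing only the UE's IP address: every flow of the UE lands on one queue.
ip_only_queues = {toy_hash(ue_ip.encode()) % NUM_QUEUES for f in flows}

print("5-tuple RSS queues:", five_tuple_queues)
print("per-UE-IP queues:  ", ip_only_queues)
```

Because all three flows share the UE's IP, the per-IP set always contains exactly one queue, whereas the 5-tuple set can contain up to three.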
[0007] Hence, there is a need in the art to further optimize the handling of requests and queries received by the UPF so as to achieve uniform distribution of data traffic, in particular uniform distribution of one or more data packets.
SUMMARY OF THE INVENTION
[0008] One or more embodiments of the present disclosure provide a system and a method for uniform distribution of one or more data packets.
[0009] In one aspect of the present invention, the system for uniform distribution of one or more data packets is disclosed. The system includes a receiving unit configured to receive one or more data packets from one or more User Equipments (UE). The system further includes a retrieving unit configured to retrieve an Internet Protocol (IP) address of the one or more data packets from one of a source address of a Protocol Data Unit (PDU) of an IP header and a destination address of the IP header. The system further includes a hashing unit configured to perform a hashing operation on each of the one or more retrieved IP addresses to obtain a hash value for each of the one or more retrieved IP addresses. The system further includes a mapping unit configured to map the hash value with at least one core from a plurality of cores, thereby distributing the one or more data packets across the plurality of cores.
[0010] In an embodiment, the one or more data packets is one of an uplink data packet and a downlink data packet. In an embodiment, the uplink data packet is received via a first interface and the downlink data packet is received via a second interface, and wherein the first interface is an N3 interface and the second interface is an N6 interface.
[0011] In an embodiment, the retrieving unit is configured to retrieve the IP address of the one or more data packets from the PDU of the IP header if the one or more data packets is the uplink data packet. In an embodiment, the retrieving unit is configured to retrieve the IP address of the one or more data packets from the destination address of the IP header if the one or more data packets is the downlink data packet.
[0012] In an embodiment, the system includes a generating unit configured to generate a query list based on the hash value. The query list includes a hash-to-Rx-queue table. The system further includes a transmitting unit configured to transmit queries from the query list to the at least one mapped core of the plurality of cores.
[0013] In an embodiment, the mapping unit is configured to map based on a redundancy check performed on a processing history of previously received data packets.
[0014] In another aspect of the present invention, the method of uniform distribution of one or more data packets is disclosed. The method includes the step of receiving one or more data packets from one or more User Equipments (UE). The method further includes the step of retrieving an Internet Protocol (IP) address of the one or more data packets from one of a source address of a Protocol Data Unit (PDU) of an IP header and a destination address of the IP header. The method further includes the step of performing a hashing operation on each of the one or more retrieved IP addresses to obtain a hash value for each of the one or more retrieved IP addresses. The method further includes the step of mapping the hash value with at least one core from a plurality of cores, thereby distributing the one or more data packets across the plurality of cores.
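As a hedged illustration only, the four method steps can be sketched end-to-end in Python. SHA-256 and the modulo mapping below are assumptions made for the sketch; the disclosure does not prescribe a particular hash function or mapping rule.

```python
import hashlib
import ipaddress

NUM_CORES = 4

def distribute(packets):
    """Sketch of the four steps: receive the packets, retrieve the UE's
    IP address, hash it, and map the hash value to a core."""
    assignment = {}
    for direction, src, dst in packets:                    # step 1: receive
        ue_ip = src if direction == "uplink" else dst      # step 2: retrieve
        digest = hashlib.sha256(ipaddress.ip_address(ue_ip).packed).digest()
        hash_value = int.from_bytes(digest[:8], "big")     # step 3: hash
        assignment[(direction, src, dst)] = hash_value % NUM_CORES  # step 4: map
    return assignment

packets = [("uplink", "10.0.0.7", "8.8.8.8"),
           ("downlink", "8.8.8.8", "10.0.0.7")]
result = distribute(packets)
print(result)
```

Because both directions hash the same UE address (10.0.0.7), uplink and downlink traffic of one UE resolve to the same core.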
[0015] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive the one or more data packets from one or more User Equipments (UE). The processor is further configured to retrieve an Internet Protocol (IP) address of the one or more data packets from one of a source address of a Protocol Data Unit (PDU) of an IP header and a destination address of the IP header. The processor is further configured to perform a hashing operation on each of the one or more retrieved IP addresses to obtain a hash value for each of the one or more retrieved IP addresses. The processor is further configured to map the hash value with at least one core from a plurality of cores, thereby distributing the one or more data packets across the plurality of cores.
[0016] In another aspect of the invention, the UE includes one or more primary processors communicatively coupled to the one or more processors, the one or more primary processors being coupled with a memory. The one or more primary processors are configured to transmit one or more data packets to the one or more processors.
[0017] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0019] FIG. 1 is an exemplary block diagram of a communication system for uniform distribution of one or more data packets, according to one or more embodiments of the present invention;
[0020] FIG. 2 is an exemplary block diagram of a system for uniform distribution of one or more data packets, according to one or more embodiments of the present invention;
[0021] FIG. 3 is a schematic representation of a workflow of the communication system of FIG. 1, according to the one or more embodiments of the present invention;
[0022] FIG. 4 is an exemplary block diagram of an architecture of the system of the FIG. 2, according to one or more embodiments of the present invention;
[0023] FIG. 5 is a signal flow diagram for uniform distribution of one or more data packets, according to one or more embodiments of the present invention; and
[0024] FIG. 6 is a schematic representation of a method for uniform distribution of one or more data packets, according to one or more embodiments of the present invention.
[0025] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0026] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0027] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0028] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0029] As per various embodiments depicted, the present invention discloses a system and method for uniform distribution of data traffic such as one or more data packets.
[0030] FIG. 1 illustrates an exemplary block diagram of a communication system 100 for uniform distribution of one or more data packets, according to one or more embodiments of the present disclosure. In this regard, the communication system 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for uniform distribution of one or more data packets. The UE 102 aids a user to interact with the system 108 for transmitting one or more data packets.
[0031] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0032] In an embodiment, the UE 102 is any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, including, but not limited to, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0033] The communication system 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity hosting the server 104 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0034] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0036] The communication system 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured for uniform distribution of one or more data packets. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0037] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0038] FIG. 2 is an exemplary block diagram of the system 108 for uniform distribution of one or more data packets, according to one or more embodiments of the present invention.
[0039] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0040] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0041] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0042] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0043] In order for the system 108 to uniformly distribute data traffic such as one or more data packets, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a receiving unit 210, a retrieving unit 212, a hashing unit 214, a generating unit 216, a mapping unit 218, and a transmitting unit 220 communicably coupled to each other for uniform distribution of one or more data packets.
[0044] The receiving unit 210, the retrieving unit 212, the hashing unit 214, the generating unit 216, the mapping unit 218 and the transmitting unit 220 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0045] In one embodiment, the receiving unit 210 of the system 108 is configured to receive one or more data packets from one or more UE 102. The one or more data packets is one of an uplink data packet and a downlink data packet. The uplink data packet refers to data transmitted from the UE 102 to the network 106. The uplink data packet includes, but is not limited to, video uploads, file uploads, text messages, sensor data from IoT devices, Voice over Internet Protocol (VoIP) calls, online gaming, web requests, and GPS location updates. The downlink data packet refers to data transmitted from the network 106 to the UE 102. The downlink data packet includes, but is not limited to, web pages, streaming video, software updates, app downloads, emails and attachments, social media feeds, voice and video calls, and real-time data feeds. The uplink data packet is received via a first interface and the downlink data packet is received via a second interface. The first interface is an N3 interface and the second interface is an N6 interface.
[0046] The N3 interface is a logical interface primarily responsible for carrying user plane data traffic between a Radio Access Network (RAN) node, such as the gNodeB, and the User Plane Function (UPF). The UPF handles the routing, forwarding, and management of user plane data packets. Traffic on the N3 interface is associated with, but not limited to, user plane tunneling, Quality of Service (QoS) enforcement, charging and billing, and session management. The N6 interface enables communication and data exchange between the UPF and the data network.
[0047] Upon receiving the one or more data packets, a retrieving unit 212 is configured to retrieve an Internet Protocol (IP) address of the one or more data packets. The IP address of the one or more data packets is retrieved from a source address of a Protocol Data Unit (PDU) of an IP header and a destination address of the IP header.
[0048] The IP address is a unique numerical label assigned to each UE 102 connected to the network 106 that uses the IP for communication. The source address of the PDU of the IP header is the IP address assigned to the UE 102 for initiating the communication. The source address identifies where the PDU originates in the network 106. The PDU refers to the unit of data that is exchanged between different protocol layers. The destination address of the IP header is the IP address of the intended recipient UE 102.
[0049] If the one or more data packets is the uplink data packet, the retrieving unit 212 is configured to retrieve the IP address of the one or more data packets from the source address of the PDU of the IP header. If the one or more data packets is the downlink data packet, the retrieving unit 212 is configured to retrieve the IP address of the one or more data packets from the destination address of the IP header.
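The direction-dependent retrieval can be sketched as follows. This is a simplified illustration that assumes a bare 20-byte IPv4 header (for uplink traffic this would be the PDU's inner IP header after GTP-U decapsulation); the function name and the zeroed header fields are assumptions made for the sketch.

```python
import ipaddress
import struct

def retrieve_ue_ip(ip_header: bytes, uplink: bool) -> str:
    """Return the UE's IPv4 address from a 20-byte IPv4 header.

    Uplink (N3): the header is the PDU's inner IP header, so the UE's
    address is the source field. Downlink (N6): the UE's address is the
    destination field of the IP header.
    """
    # In an IPv4 header the source address occupies bytes 12-15 and the
    # destination address occupies bytes 16-19.
    src, dst = struct.unpack_from("!4s4s", ip_header, 12)
    return str(ipaddress.IPv4Address(src if uplink else dst))

# A minimal header with source 10.0.0.7 and destination 8.8.8.8; the
# first 12 bytes (version, lengths, TTL, checksum, etc.) are zeroed.
hdr = (bytes(12)
       + ipaddress.IPv4Address("10.0.0.7").packed
       + ipaddress.IPv4Address("8.8.8.8").packed)

print(retrieve_ue_ip(hdr, uplink=True))   # 10.0.0.7
print(retrieve_ue_ip(hdr, uplink=False))  # 8.8.8.8
```

Either way, the address that is hashed is always the UE's own address, regardless of traffic direction.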
[0050] Upon retrieving the IP address of the one or more data packets, the hashing unit 214 is configured to perform a hashing operation on each of the one or more retrieved IP addresses to obtain a hash value 406 for each of the one or more retrieved IP addresses. Hashing is a technique used to map data of arbitrary size to fixed-size values, and is often used to efficiently distribute packets across multiple processing queues or paths.
[0051] Upon obtaining the hash value 406 for each of the one or more retrieved IP addresses, the generating unit 216 is configured to generate a query list based on the hash value 406. The query list includes a hash-to-Rx-queue table. The hash value 406 is a cryptographic representation generated by applying a hash function 404 to the IP address. The hash function 404 converts the IP address into a fixed-length string of characters, typically in hexadecimal format. The Rx queue is a receiving queue used to temporarily store one or more data packets until they can be processed. The hash-to-Rx-queue table 408 is essentially a lookup table that maps the hash value 406 of certain packet attributes, such as the source/destination IP address, source/destination port, or protocol type, to specific Rx queues.
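A minimal sketch of the hash value and the hash-to-Rx-queue table 408 follows. SHA-256, the dictionary layout, and the lazy population are illustrative assumptions, not the claimed implementation.

```python
import hashlib

NUM_RX_QUEUES = 4

def hash_ip(ip: str) -> str:
    # A fixed-length hexadecimal digest of the IP address; SHA-256 is an
    # illustrative choice of cryptographic hash function.
    return hashlib.sha256(ip.encode()).hexdigest()

# Hash-to-Rx-queue lookup table, populated lazily so that every packet
# carrying the same UE IP resolves to the same Rx queue.
hash_to_rx_queue = {}

def rx_queue_for(ip: str) -> int:
    hash_value = hash_ip(ip)
    if hash_value not in hash_to_rx_queue:
        hash_to_rx_queue[hash_value] = int(hash_value, 16) % NUM_RX_QUEUES
    return hash_to_rx_queue[hash_value]

print(rx_queue_for("10.0.0.7") == rx_queue_for("10.0.0.7"))  # True: stable per UE
```

The table lookup is what makes the distribution deterministic: the same address always yields the same digest, and hence the same Rx queue.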
[0052] Upon generating the query list, the mapping unit 218 is configured to map the hash value 406 with at least one core from the plurality of cores 410. By doing so, the mapping unit 218 distributes the one or more data packets across the plurality of cores 410. The at least one core refers to a Central Processing Unit (CPU) core. A CPU core is an individual processing unit within a CPU (processor) chip. The CPU cores execute instructions and perform calculations for various tasks, including both control plane and data plane functions in the network 106. The CPU cores are found in various network components such as routers, switches, or base stations. The CPU cores handle a variety of tasks, including packet forwarding, routing, management plane operations, control plane functions, and more.
[0053] The mapping unit 218 is configured to map the hash value 406 with the at least one core from the plurality of cores 410 based on a redundancy check. The redundancy check is performed on a processing history of previously received data packets. The previously received data packets include the one or more data packets previously received from the UE 102. Upon mapping the at least one core from the plurality of cores 410, the transmitting unit 220 is configured to transmit the queries from the query list to the at least one mapped core of the plurality of cores 410. By doing so, the system 108 improves the processing capacity, performance, and throughput of the processor 202, which is able to process more data packets per second.
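The mapping step with the redundancy check can be sketched as below; the history table name and the modulo fallback for first-seen hash values are assumptions made for illustration.

```python
NUM_CORES = 8

# Processing history of previously received data packets: records which
# core handled a given hash value.
processing_history = {}

def map_to_core(hash_value: int) -> int:
    """Map a hash value to a core, reusing the core that processed earlier
    packets with the same hash value (the redundancy check)."""
    if hash_value in processing_history:
        return processing_history[hash_value]   # repeat traffic: same core
    core = hash_value % NUM_CORES               # first sighting: spread evenly
    processing_history[hash_value] = core
    return core

print(map_to_core(987654321) == map_to_core(987654321))  # True: pinned
```

Reusing the same core for repeat traffic is what preserves cache locality: the core's caches already hold the session's processing data.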
[0054] FIG. 3 describes a preferred embodiment of the system 108 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 102a and the system 108 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0055] As mentioned earlier in FIG. 1, each of the first UE 102a, the second UE 102b, and the third UE 102c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 102a, without deviating from or limiting the scope of the present disclosure. The first UE 102a includes one or more primary processors 302 communicably coupled to the one or more processors 202 of the system 108.
[0056] The one or more primary processors 302 are coupled with a memory unit 304 storing instructions which are executed by the one or more primary processors 302. Execution of the stored instructions by the one or more primary processors 302 enables the first UE 102a to transmit the one or more data packets to the one or more processors 202.
[0057] As mentioned earlier in FIG. 2, the one or more processors 202 of the system 108 is configured for uniform distribution of one or more data packets. As per the illustrated embodiment, the system 108 includes the one or more processors 202, the memory 204, the user interface 206, and the database 208. The operations and functions of the one or more processors 202, the memory 204, the user interface 206, and the database 208 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0058] Further, the processor 202 includes the receiving unit 210, the retrieving unit 212, the hashing unit 214, the generating unit 216, the mapping unit 218 and the transmitting unit 220. The operations and functions of the receiving unit 210, the retrieving unit 212, the hashing unit 214, the generating unit 216, the mapping unit 218 and the transmitting unit 220 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description as provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0059] FIG. 4 is an exemplary block diagram of an architecture 400 of the system 108 for uniform distribution of one or more data packets, according to one or more embodiments of the present invention.
[0060] In an embodiment, the UE 102 is configured to transmit one or more data packets to a User Plane Function (UPF) 402. The UPF is responsible for performing several important functions, such as packet forwarding, quality of service (QoS) enforcement, and traffic filtering. The one or more data packets is one of the uplink data packet and the downlink data packet. The uplink data packet is received via the first interface and the downlink data packet is received via the second interface. The first interface is the N3 interface and the second interface is the N6 interface.
[0061] The UPF 402 includes the retrieving unit 212, hash function 404, hash value 406, hash to Rx queue table 408 and plurality of cores 410 communicably coupled to each other.
[0062] Upon receiving the one or more data packets, the retrieving unit 212 identifies and retrieves the IP address for the uplink data packet from the source address of the PDU IP header. Similarly, the IP address for the downlink data packet is identified and retrieved from the destination address of the IP header. Upon identifying and retrieving the IP address, the hash function 404 performs hashing on the IP address retrieved for the uplink data packet and the downlink data packet. Upon performing the hash function 404 on the retrieved IP address, a hash value 406 is obtained for the uplink data packet and the downlink data packet.
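The direction-dependent address retrieval and hashing described above may be sketched in Python as follows. This is a minimal illustration, not the claimed implementation: the packet layout (plain dictionaries with `pdu_ip_header`/`ip_header` fields) and the choice of CRC32 as the hash function 404 are assumptions made purely for the example.

```python
import ipaddress
import zlib

def retrieve_ue_ip(packet: dict, direction: str) -> str:
    """For an uplink packet, take the source address of the inner PDU IP
    header; for a downlink packet, take the destination address of the
    IP header (hypothetical field names)."""
    if direction == "uplink":
        return packet["pdu_ip_header"]["src_ip"]
    return packet["ip_header"]["dst_ip"]

def hash_ip(ip: str) -> int:
    """Hash the retrieved IP address to a fixed-width integer
    (CRC32 chosen only for illustration)."""
    return zlib.crc32(ipaddress.ip_address(ip).packed)

uplink = {"pdu_ip_header": {"src_ip": "10.45.0.7"}}
downlink = {"ip_header": {"dst_ip": "10.45.0.7"}}

# Both directions yield the same hash value 406 for the same UE address,
# so uplink and downlink traffic of one UE can land on the same core.
assert hash_ip(retrieve_ue_ip(uplink, "uplink")) == \
       hash_ip(retrieve_ue_ip(downlink, "downlink"))
```

Any deterministic hash with a reasonably uniform output distribution would serve the same purpose of spreading distinct UE addresses evenly across cores.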
[0063] Upon obtaining the hash value 406, the query list is generated based on the hash value. The query list comprises the hash to Rx queue table 408. Further, to create the query list, the hash values of the uplink request and the downlink request are mapped to at least one core from the plurality of cores 410. The mapping of the hash value to the at least one core from the plurality of cores 410 is based on a redundancy check. The redundancy check is performed by determining a previous request received from the UE 102 and the at least one core from the plurality of cores 410 that processed the previous request. In an embodiment, the memory 204 comprises an L1 cache, an L2 cache, and an L3 cache to perform caching of information received from the uplink and downlink. The L1 cache, L2 cache, and L3 cache may provide information pertaining to the earlier request and may also assist the at least one core from the plurality of cores 410 by providing information on the previous request.
[0064] Upon caching the information, the queries from the query list are processed by the at least one core from the plurality of cores 410. In one embodiment, mapping of a repetitive query to the same core is referred to as pinning, thereby enabling higher throughput using the same conventional core of the multi-core processor. Further, since the L1 cache, L2 cache, and L3 cache usually store processing data of the query, reusing the already stored processing data on the same core reduces the processing time for the query.
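The pinning behaviour described above may be sketched as a hash-to-Rx-queue table that remembers which core served a hash value before. The table structure, the modulo assignment, and the method names below are assumptions for illustration only; the disclosure does not prescribe a particular data structure.

```python
class HashToQueueTable:
    """Illustrative sketch of the hash to Rx queue table 408: each hash
    value 406 is pinned to one core so repeated traffic from the same UE
    is processed by the same core and benefits from warm L1/L2/L3 caches."""

    def __init__(self, num_cores: int):
        self.num_cores = num_cores
        self.table = {}  # hash value -> pinned core index

    def map_core(self, hash_value: int) -> int:
        # Redundancy check: if a previous request with this hash value was
        # already processed, reuse the same core (pinning) instead of
        # picking a new one.
        if hash_value in self.table:
            return self.table[hash_value]
        core = hash_value % self.num_cores  # simple uniform assignment
        self.table[hash_value] = core
        return core

table = HashToQueueTable(num_cores=8)
first = table.map_core(0xDEADBEEF)
second = table.map_core(0xDEADBEEF)  # repetitive query: pinned to same core
assert first == second
```

With a uniform hash, the modulo assignment spreads distinct UEs evenly across the eight cores while the lookup table guarantees per-UE core affinity.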
[0065] FIG. 5 is a signal flow diagram for uniform distribution of one or more data packets, according to one or more embodiments of the present invention. For the purpose of description, the signal flow diagram is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0066] At step 502, the receiving unit 210 receives the one or more data packets from the UE 102. The one or more data packets is one of the uplink data packet and the downlink data packet. The uplink data packet is received via the first interface and the downlink data packet is received via the second interface. The first interface and the second interface are the N3 interface and the N6 interface, respectively.
[0067] At step 504, the retrieving unit 212 is configured to receive the one or more data packets from the receiving unit 210. Upon receiving the one or more data packets, the retrieving unit 212 retrieves the IP address of the one or more data packets. The IP address of the one or more data packets is retrieved from one of the source address of the PDU of the IP header and the destination address of the IP header. In particular, if the one or more data packets is the uplink data packet, the IP address of the one or more data packets is retrieved from the PDU of the IP header. Alternatively, if the one or more data packets is the downlink data packet, the IP address of the one or more data packets is retrieved from the destination address of the IP header.
[0068] At step 506, the hashing unit 214 is configured to receive the retrieved IP address of the one or more data packets from the retrieving unit 212. Upon receiving the retrieved IP address of the one or more data packets, the hashing unit 214 performs a hashing operation on each of the one or more retrieved IP addresses. The hashing operation is performed to receive the hash value of each of the one or more retrieved IP addresses.
[0069] At step 508, the generating unit 216 is configured to receive the hash value of each of the one or more retrieved IP addresses. Upon receiving the hash value, the generating unit 216 generates the query list based on the hash value received from the hashing unit 214. The query list includes the hash to Rx queue table.
[0070] At step 510, the mapping unit 218 is configured to receive the query list based on the hash value from the generating unit 216. Upon receiving the query list, the mapping unit 218 maps the hash value to at least one core from the plurality of cores, thereby distributing the one or more data packets across the plurality of cores. The mapping unit 218 is configured to perform the mapping based on the redundancy check performed on the processing history of the previously received data packets.
[0071] At step 512, the transmitting unit 220 is configured to receive the mapped core of the plurality of cores from the mapping unit 218. Upon receiving the mapped core, the transmitting unit 220 transmits queries from the query list to the at least one mapped core of the plurality of cores.
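Steps 502 through 512 can be condensed into a single end-to-end sketch: receive packets, retrieve the UE IP by direction, hash it, map each hash to a core, and dispatch. As before, the packet layout, the CRC32 hash, and the modulo core assignment are illustrative assumptions, not the claimed implementation.

```python
import zlib

NUM_CORES = 4  # illustrative core count for the plurality of cores 410

def distribute(packets):
    """Distribute packets across cores following steps 502-512."""
    per_core = {core: [] for core in range(NUM_CORES)}
    for pkt in packets:  # step 502: packets received from the UE
        # step 504: source address for uplink, destination for downlink
        ip = pkt["src_ip"] if pkt["direction"] == "uplink" else pkt["dst_ip"]
        h = zlib.crc32(ip.encode())  # step 506: hash the retrieved IP
        core = h % NUM_CORES         # steps 508-510: map hash to a core
        per_core[core].append(pkt)   # step 512: transmit to mapped core
    return per_core

packets = [{"direction": "uplink", "src_ip": f"10.45.0.{i}", "dst_ip": ""}
           for i in range(100)]
per_core = distribute(packets)

# Every packet is dispatched, and packets carrying the same UE address
# always land on the same core.
assert sum(len(queue) for queue in per_core.values()) == 100
```

Because the core index depends only on the UE address, uplink and downlink flows of the same UE are serviced by the same core, which is the property the pinning and cache-reuse description above relies on.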
[0072] FIG. 6 is a flow diagram of a method 600 for uniform distribution of one or more data packets, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0073] At step 602, the method 600 includes the step of receiving the one or more data packets from the UE 102 by the receiving unit 210. The one or more data packets is one of the uplink data packet and the downlink data packet. The uplink data packet is received via the first interface and the downlink data packet is received via the second interface, and wherein the first interface is the N3 interface and the second interface is the N6 interface.
[0074] At step 604, the method 600 includes the step of retrieving the IP address of the one or more data packets from one of the source address of the PDU of the IP header and the destination address of the IP header by the retrieving unit 212. The retrieving unit 212 is configured to retrieve the IP address of the one or more data packets from the PDU of the IP header if the one or more data packets is the uplink data packet. Further, the retrieving unit 212 is configured to retrieve the IP address of the one or more data packets from the destination address of the IP header if the one or more data packets is the downlink data packet.
[0075] At step 606, the method 600 includes the step of performing, by the hashing unit 214, a hashing operation on each of the one or more retrieved IP addresses to receive the hash value of each of the one or more retrieved IP addresses. Upon receiving the hash value of the IP address, the generating unit 216 is configured to generate a query list based on the hash value. The query list includes a hash to an Rx queue table.
[0076] At step 608, the method 600 includes the step of mapping, by the mapping unit 218, the hash value to at least one core from the plurality of cores 410. By doing so, the mapping unit 218 distributes the one or more data packets across the plurality of cores 410. The mapping unit 218 is configured to map the at least one core from the plurality of cores 410 based on a redundancy check. The redundancy check is performed on the processing history of previously received data packets. Upon mapping the at least one core from the plurality of cores 410, the transmitting unit 220 is configured to transmit the queries from the query list to the at least one mapped core of the plurality of cores 410.
[0077] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the one or more data packets from one or more UE 102. The processor 202 is further configured to retrieve the IP address of the one or more data packets from one of the source address of the PDU of the IP header and the destination address of the IP header. The processor 202 is further configured to perform a hashing operation on each of the one or more retrieved IP addresses to receive the hash value of each of the one or more retrieved IP addresses. The processor 202 is further configured to map the hash value to at least one core from the plurality of cores 410, thereby distributing the one or more data packets across the plurality of cores 410.
[0078] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0079] The present disclosure incorporates a technical advancement in that the processing capacity of the processor is increased by mapping the hash value to at least one core from the plurality of cores and distributing the one or more data packets across the plurality of cores. The uniform distribution of data reduces the processing time of the processor, resulting in high performance, capacity, and throughput from the processor.
[0080] The present invention offers multiple advantages over the prior art, and the features described above are a few examples provided to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0081] Communication System- 100
[0082] User Equipment (UE)- 102
[0083] Server- 104
[0084] Network- 106
[0085] System -108
[0086] Processor- 202
[0087] Memory- 204
[0088] User Interface- 206
[0089] Database- 208
[0090] Receiving Unit- 210
[0091] Retrieving Unit- 212
[0092] Hashing unit- 214
[0093] Generating Unit- 216
[0094] Mapping Unit- 218
[0095] Transmitting Unit- 220
[0096] Primary processor- 302
[0097] Memory- 304
[0098] Hash Function - 404
[0099] Hash Value- 406
[00100] Hash to Rx queue table- 408
[00101] Plurality of cores- 410
CLAIMS:
We Claim:
1. A method (600) of uniform distribution of one or more data packets across a plurality of cores (410), the method (600) comprising the steps of:
receiving, by one or more processors (202), the one or more data packets from one or more User Equipments (UE) (102);
retrieving, by the one or more processors (202), an Internet Protocol (IP) address of the one or more data packets from one of a source address of a Protocol Data Unit (PDU) of an IP header and a destination address of the IP header;
performing, by the one or more processors (202), hashing operation on the IP address of each of the one or more retrieved IP addresses to receive a hash value of the IP address of each of the one or more retrieved IP addresses; and
mapping, by the one or more processors (202), of the hash value with at least one core from the plurality of cores (410), and thereby distributing the one or more data packets across the plurality of cores (410).
2. The method (600) as claimed in claim 1, wherein the one or more data packets is one of an uplink data packet and a downlink data packet.
3. The method (600) as claimed in claim 2, wherein the uplink data packet is received via a first interface and the downlink data packet is received via a second interface, and wherein the first interface is an N3 interface and the second interface is an N6 interface.
4. The method (600) as claimed in claim 1, wherein the one or more processors (202) is configured to retrieve the IP address of the one or more data packets from the PDU of the IP header if the one or more data packets is the uplink data packet.
5. The method (600) as claimed in claim 1, wherein the one or more processors (202) is configured to retrieve the IP address of the one or more data packets from the destination address of the IP header if the one or more data packets is the downlink data packet.
6. The method (600) as claimed in claim 1, wherein the method (600) comprises the steps of:
generating, by the one or more processors (202), a query list based on the hash value, wherein the query list includes a hash to an Rx queue table; and
transmitting, by the one or more processors (202), queries from the query list to the at least one mapped core of the plurality of cores (410).
7. The method (600) as claimed in claim 1, wherein the step of mapping is performed based on a redundancy check performed on processing history of a previously received data packets.
8. A system (108) for uniform distribution of one or more data packets across a plurality of cores (410), the system (108) comprising:
a receiving unit (210) configured to receive, the one or more data packets from one or more User Equipments (UE) (102);
a retrieving unit (212) configured to retrieve, an Internet Protocol (IP) address of the one or more data packets from one of a source address of a Protocol Data Unit (PDU) of an IP header and a destination address of the IP header;
a hashing unit (214) configured to perform, hashing operation on the IP address of each of the one or more retrieved IP addresses to receive a hash value of the IP address of each of the one or more retrieved IP addresses; and
a mapping unit (218) configured to map, the hash value (406) with at least one core from the plurality of cores (410), and thereby distributing the one or more data packets across the plurality of cores (410).
9. The system (108) as claimed in claim 8, wherein the one or more data packets is one of an uplink data packet and a downlink data packet.
10. The system (108) as claimed in claim 9, wherein the uplink data packet is received via a first interface and the downlink data packet is received via a second interface, and wherein the first interface is an N3 interface and the second interface is an N6 interface.
11. The system (108) as claimed in claim 8, wherein the retrieving unit (212) is configured to retrieve the IP address of the one or more data packets from the PDU of the IP header if the one or more data packets is the uplink data packet.
12. The system (108) as claimed in claim 8, wherein the retrieving unit (212) is configured to retrieve the IP address of the one or more data packets from the destination address of the IP header if the one or more data packets is the downlink data packet.
13. The system (108) as claimed in claim 8, wherein the system (108) comprising:
a generating unit (216) configured to generate, a query list based on the hash value (406), wherein the query list includes a hash to an Rx queue table; and
a transmitting unit (220) configured to transmit, queries from the query list to the at least one mapped core of the plurality of cores (410).
14. The system (108) as claimed in claim 8, wherein the mapping unit (218) is configured to map based on a redundancy check performed on processing history of previously received data packets.
15. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (302) causes the UE (102) to:
transmit, one or more data packets to the one or more processors (202);
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.