Abstract: SYSTEMS AND METHODS FOR EXECUTING PAGING OPERATIONS. The disclosure provides a system (200B) and a method (400) for executing paging operations. The method (400) includes receiving (402) a paging request from an access and mobility management function (AMF) (220) via a first network interface, processing (404) the paging request to determine a set of cells to be paged, broadcasting (406) the paging request to the set of cells via a second network interface, receiving (408) a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) (108) via the first network interface, initiating (410) a data session between the UE (108) and the AMF (220) upon receiving the response from the cell, and notifying (412) the AMF (220) via a third network interface if no response is received from any cell within the set of cells, indicating that the UE (108) is unreachable. Ref. Fig. 2B
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEMS AND METHODS FOR EXECUTING PAGING OPERATIONS
2. APPLICANT(S)
Name Nationality Address
JIO PLATFORMS LIMITED INDIAN Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure relates generally to a field of wireless communications. More particularly, the present disclosure relates to a system and a method for executing paging operations.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The term “paging” used hereinafter in the specification refers to locating a specific user equipment (UE) and informing the UE about incoming data, calls, or other notifications. Paging is essential for maintaining communication and connection with the UE, especially as the UE moves between different cells or coverage areas.
[0005] The term “paging operation” used hereinafter in the specification refers to a network function that involves the process of locating and alerting a specific UE within a network. The paging operation is initiated when it is required to establish a connection with a UE that is currently not in communication.
[0006] The term “Access and Mobility Management Function (AMF)” used hereinafter in the specification refers to a control plane network function in the 5G core network responsible for handling registration and mobility management of UEs within the 5G network.
[0007] The term “paging request” used hereinafter in the specification refers to a request (network operation) to locate a specific UE within a mobile network. The paging request comprises information about the UE, such as its unique identifier (for example, International Mobile Subscriber Identity (IMSI)) and the reason for paging (such as an incoming call or message). The paging request may also comprise a timestamp indicating the time when the paging request was made. The paging request is generated by the AMF when it needs to locate the UE.
[0008] The term “predictive algorithms” refers to computational methods designed to optimize the process of locating and alerting a UE by forecasting its potential location and movement data. The predictive algorithms utilize historical data, real-time information, and statistical techniques to enhance the efficiency and accuracy of the paging operation. The predictive algorithms may be artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms.
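Purely for illustration, a predictive algorithm of the kind defined above could rank candidate cells by how often a UE was previously found in each cell, seeded with the UE's last known cell. The function below is a minimal sketch under that assumption; the function name, inputs, and ranking heuristic are hypothetical and not taken from the specification.

```python
from collections import Counter

def predict_cell_set(history, last_known_cell, max_cells=3):
    """Illustrative sketch: rank candidate cells for paging a UE by how
    often past paging attempts succeeded in each cell, always placing the
    last known cell first.

    history: list of cell IDs where past paging attempts succeeded.
    Returns up to max_cells cell IDs, most likely first.
    """
    counts = Counter(history)
    # Boost the last known cell so it is always tried first.
    counts[last_known_cell] += max(counts.values(), default=0) + 1
    ranked = [cell for cell, _ in counts.most_common()]
    return ranked[:max_cells]
```

A real predictive algorithm would of course also weigh network conditions, device behavior, and real-time mobility data, as the definition states.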
[0009] The term “set of cells” used hereinafter in the specification refers to a group of individual cells (base stations) within a network that collectively cover a specific geographic area where the UE is expected to be.
[0010] The term “first network interface” refers to an Npef interface. The Npef interface is a service-based interface in a network architecture designed to facilitate communication between a system (of the present disclosure) and the other network components, such as the AMF and radio access network (RAN) nodes. The AMF uses the Npef interface to send a paging request to the system. It also enables the system to receive paging responses from the cells contacted during the process. If the system determines that a UE is unreachable after a paging attempt, the system uses the Npef interface to notify the AMF about the failure, allowing the AMF to take further actions if needed. The term “second network interface” refers to an Ncu interface. The Ncu interface is a service-based interface in a network architecture designed to facilitate communication between the system and RAN nodes for efficient paging operations. The Ncu interface enables the system to send a paging request to RAN nodes. The Ncu interface allows the RAN nodes to send responses back to the system regarding the outcome of the paging attempt.
[0011] The term “third network interface” refers to a Namf interface. The Namf interface is a service-based interface for the core access and mobility management function in a network architecture designed to facilitate communication between the system and the AMF. The Namf interface enables the system to notify the AMF about paging request results.
[0012] The term “data session” used hereinafter in the specification refers to a period during which a UE is connected to a telecommunications network and actively exchanges data with it. The data session involves establishing, maintaining, and terminating a connection for transmitting data between the UE and the network.
[0013] The term “predetermined time period” used hereinafter in the specification refers to the duration for which a paging request is considered valid. If a UE does not respond within this timeframe, the paging request may be considered unsuccessful.
[0014] These definitions are in addition to those expressed in the art.
BACKGROUND
[0015] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is intended only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0016] Wireless communication technology has rapidly evolved over the past few decades. The First Generation (1G) of wireless communication technology was an analog technology that offered only voice services. When the Second Generation (2G) technology was introduced, text messaging and data services became possible. The Third Generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The Fourth Generation (4G) technology revolutionized wireless communication in terms of faster data speeds, improved network coverage, and security. Currently, the Fifth Generation (5G) technology is being deployed, with even faster data speeds, lower latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0017] With the growing density of high-mobility User Equipment (UEs) in 5G networks, the challenges associated with tracking and maintaining seamless connectivity for these UEs have become increasingly prominent. In a 5G network, the Access and Mobility Management Function (AMF) plays a crucial role in managing paging procedures and mobility registration updates, which are essential for locating UEs and ensuring continuous communication.
[0018] Paging is a process used to locate UEs that are in an idle state. When a UE is not actively engaged in data transmission, it enters a power-saving mode to conserve battery life. To deliver new data or notifications to the UE, the network sends out paging messages or requests to determine the UE’s location and prompt it to re-establish a connection. The AMF is responsible for initiating and managing these paging procedures.
[0019] However, as the number of UEs continues to grow and their mobility patterns become more complex, the existing paging and mobility management processes managed by the AMF face significant challenges. The increased density of UEs and the high volume of paging requests overburden the AMF, leading to potential inefficiencies such as delays in locating UEs, increased signaling overhead, and a higher risk of connection drops or delays.
[0020] These challenges underscore the need for more efficient paging and mobility management solutions that can better handle the demands of a high-density UE environment, ensuring that users experience reliable and seamless connectivity without undue burden on the network infrastructure.
[0021] There is, therefore, a need in the art to provide a system and a method that can overcome the shortcomings of the existing prior arts.
SUMMARY
[0022] In an exemplary embodiment, a system for executing paging operations is disclosed. The system includes a paging network function comprising a memory and a processing engine coupled to the memory. The processing engine includes a receiving unit, a determining unit, a broadcasting unit, and a communication unit. The receiving unit is configured to receive a paging request from an access and mobility management function (AMF) via a first network interface. The determining unit is configured to process the paging request to determine a set of cells to be paged. The broadcasting unit is configured to broadcast the paging request to the set of cells via a second network interface. The receiving unit is further configured to receive a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) via the first network interface. The processing engine is configured to initiate a data session between the UE and the AMF upon receiving the response from the cell. The communication unit is configured to notify the AMF via a third network interface if no response is received from any cell within the set of cells, indicating that the UE is unreachable.
[0023] In an embodiment, the first network interface is an Npef interface, the second network interface is an Ncu interface, and the third network interface is a Namf interface.
[0024] In an embodiment, the processing engine is configured to determine the set of cells to be paged based on network conditions, device mobility data, device behavior, and historical data of paging operations.
[0025] In an embodiment, the paging request broadcasted to the set of cells includes a unique identifier for the UE and a timestamp of the paging request.
[0026] In an embodiment, the processing engine is further configured to generate a paging data report detailing the status of the paging request and the reason for any failure to reach the UE.
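The paging data report described in the embodiment above can be sketched as a simple record builder. This is a minimal illustration only; the field names are assumptions and are not taken from the specification.

```python
from datetime import datetime, timezone

def build_paging_report(ue_id, status, failure_reason=None):
    """Illustrative sketch of a paging data report detailing the status of
    a paging request and, on failure, the reason the UE was not reached.
    Field names are hypothetical.
    """
    return {
        "ue_id": ue_id,                    # unique identifier of the UE
        "status": status,                  # e.g. "SUCCESS" or "FAILED"
        "failure_reason": failure_reason,  # why the UE was not reached, if failed
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```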
[0027] In an embodiment, the broadcasting unit is further configured to retry broadcasting the paging request to the set of cells via the second network interface if no response is received within a predetermined time period.
[0028] In an embodiment, the determining unit is configured to periodically apply learnings based on feedback from previous paging operations to improve the determination of the set of cells.
[0029] In another exemplary embodiment, a method for executing paging operations is disclosed. The method includes receiving a paging request from an access and mobility management function (AMF) via a first network interface and processing the paging request to determine a set of cells to be paged. The method further includes broadcasting the paging request to the set of cells via a second network interface and receiving a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) via the first network interface. The method further includes initiating a data session between the UE and the AMF upon receiving the response from the cell; and notifying the AMF via a third network interface if no response is received from any cell within the set of cells, indicating that the UE is unreachable.
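The sequence of method steps recited above can be sketched, purely for illustration, as the following function. The interface objects (npef, ncu, namf) and the determine_cells helper are hypothetical stand-ins for the first, second, and third network interfaces and the cell-determination step; they are not part of the specification.

```python
def execute_paging(amf_request, npef, ncu, namf, determine_cells):
    """Minimal sketch of the disclosed paging method; all interface
    objects and helpers are hypothetical stand-ins."""
    # Receive the paging request from the AMF via the first interface (Npef).
    ue_id = amf_request["ue_id"]
    # Process the request to determine the set of cells to be paged.
    cells = determine_cells(amf_request)
    # Broadcast the paging request to the cells via the second interface (Ncu).
    responses = [ncu.page(cell, ue_id) for cell in cells]
    # If any cell successfully paged the UE, initiate a data session
    # between the UE and the AMF.
    if any(responses):
        npef.initiate_data_session(ue_id)
        return "SESSION_INITIATED"
    # Otherwise notify the AMF via the third interface (Namf) that the
    # UE is unreachable.
    namf.notify_unreachable(ue_id)
    return "UE_UNREACHABLE"
```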
[0030] In an embodiment, the first network interface is an Npef interface, the second network interface is an Ncu interface, and the third network interface is a Namf interface.
[0031] In an embodiment, the set of cells to be paged are determined based on network conditions, device mobility data, device behavior, and historical data of paging operations.
[0032] In an embodiment, the paging request broadcasted to the set of cells includes a unique identifier for the UE and a timestamp of the paging request.
[0033] In an embodiment, the method further comprises generating a paging data report detailing the status of the paging request and the reason for any failure to reach the UE.
[0034] In an embodiment, the method further comprises retrying the broadcasting of the paging request to the set of cells via the second network interface if no response is received within a predetermined time period.
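The retry behaviour described above can be sketched as follows: rebroadcast the paging request if no cell responds within the predetermined time period, up to a retry limit. The broadcast callable, timeout value, and retry count below are hypothetical stand-ins, not values given in the specification.

```python
import time

def broadcast_with_retry(broadcast, cells, request, timeout_s=2.0, max_retries=2):
    """Illustrative sketch: `broadcast` is a hypothetical callable that
    pages the given cells and returns the responding cell ID, or None if
    no response arrived. The request is rebroadcast after each timeout
    until a response is received or retries are exhausted."""
    for attempt in range(1 + max_retries):
        deadline = time.monotonic() + timeout_s
        responder = broadcast(cells, request)
        if responder is not None:
            return responder  # a cell successfully paged the UE
        # Wait out the remainder of the validity window before retrying.
        time.sleep(max(0.0, deadline - time.monotonic()))
    return None  # UE unreachable after all retries
```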
[0035] In an embodiment, the method further comprises periodically applying learnings based on feedback from previous paging operations to improve the determination of the set of cells.
[0036] In an exemplary embodiment, a computer program product is disclosed, comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising receiving a paging request from an access and mobility management function (AMF) via a first network interface, processing the paging request to determine a set of cells to be paged, broadcasting the paging request to the set of cells via a second network interface, receiving a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) via the first network interface, initiating a data session between the UE and the AMF upon receiving the response from the cell, and notifying the AMF via a third network interface if no response is received from any cell within the set of cells, indicating that the UE is unreachable.
[0037] In an exemplary embodiment, a User Equipment (UE) communicatively coupled with a network is disclosed. The coupling includes a step of receiving, by the network, a connection request from the UE. The coupling includes a step of sending, by the network, an acknowledgment of the connection request to the UE. The coupling includes a step of transmitting a plurality of signals in response to the connection request. Based on the connection request, an execution of paging operations is performed.
[0038] The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTS OF THE PRESENT DISCLOSURE
[0039] An object of the present disclosure is to provide a system and a method for executing paging operations. The system and the method dynamically adapt paging strategies based on device mobility data, user behavior, network conditions, and other contextual information.
[0040] Another object of the present disclosure is to provide a system and a method that coordinates with multiple access points and selects the appropriate connectivity option for a target device (for example, user equipment (UE)).
[0041] Yet another object of the present disclosure is to provide a system and a method that reduces latency in executing paging operations. This will assist in minimizing the latency for life-critical systems.
[0042] Yet another object of the present disclosure is to provide a system and a method that enhances paging techniques and strategies.
[0043] Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
[0044] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0045] FIG. 1 illustrates an exemplary network architecture for implementing a system for executing paging operations, in accordance with an embodiment of the present disclosure.
[0046] FIG. 2A illustrates an exemplary block diagram of a paging network function configured for executing paging operations, in accordance with an embodiment of the present disclosure.
[0047] FIG. 2B illustrates an example of network architecture for executing paging operations, in accordance with an embodiment of the present disclosure.
[0048] FIG. 3 illustrates an exemplary flow diagram for executing paging operations, in accordance with an embodiment of the disclosure.
[0049] FIG. 4 illustrates an exemplary process flow of a method for executing paging operations, in accordance with an embodiment of the present disclosure.
[0050] FIG. 5 illustrates an example computer system in which or with which the embodiments of the present disclosure may be implemented.
[0051] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102 – System
104 – Network
108-1, 108-2…108-N – Plurality of User Equipments
110-1, 110-2…110-N – Plurality of Users
200A – Block Diagram
202 – Processor(s)
204 – Memory
206 – Plurality of Interfaces
207 – Processing Engine
208 – Receiving Unit
210 – Determining Unit
212 – Broadcasting Unit
214 – Communication Unit
215 – Database
200B – Network
216-1, 216-2…216-M – Plurality of cells
218 – Central Location
220 – Access and mobility management function (AMF)
222-1, 222-2…222-P – Plurality of Network functions (NFs)
300 – Flow diagram
400 – Method
500 – Computer System
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
DETAILED DESCRIPTION
[0052] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0053] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0054] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0055] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0056] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0057] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0058] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device,” “user equipment,” “user device,” “communication device,” “device,” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0059] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing functions, communicating with other user devices, and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including, but not limited to, IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0060] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a digital signal processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0061] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0062] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0063] With the growing density of high-mobility User Equipment (UEs) in 5G networks, the challenges associated with tracking and maintaining seamless connectivity for these UEs have become increasingly prominent. In a 5G network, the Access and Mobility Management Function (AMF) plays a crucial role in managing paging procedures and mobility registration updates, which are essential for locating UEs and ensuring continuous communication.
[0064] Paging is a process used to locate UEs that are in an idle state. When a UE is not actively engaged in data transmission, it enters a power-saving mode to conserve battery life. To deliver new data or notifications to the UE, the network sends out paging messages or requests to determine the UE’s location and prompt it to re-establish a connection. The AMF is responsible for initiating and managing these paging procedures.
[0065] However, as the number of UEs continues to grow and their mobility patterns become more complex, the existing paging and mobility management processes managed by the AMF face significant challenges. The increased density of UEs and the high volume of paging requests overburden the AMF, leading to potential inefficiencies such as delays in locating UEs, increased signaling overhead, and a higher risk of connection drops or delays.
[0066] These challenges underscore the need for more efficient paging and mobility management solutions that can better handle the demands of a high-density UE environment, ensuring that users experience reliable and seamless connectivity without undue burden on the network infrastructure.
[0067] Accordingly, there is a need for systems and methods for executing paging operations in an efficient manner.
[0068] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a system and a method for executing paging operations. The system and the method leverage artificial intelligence (AI) and/or machine learning (ML) algorithms to dynamically adapt paging strategies based on device mobility patterns, user behavior, network conditions, and other contextual information.
[0069] The various embodiments throughout the disclosure will be explained in more detail with reference to FIG. 1- FIG. 5.
[0070] FIG. 1 illustrates an exemplary network architecture (100) for implementing a paging network function (102) for executing paging operations in a network, in accordance with an embodiment of the present disclosure. In an embodiment, the network (i.e., a network (104)), for example, may be a telecommunication network, such as a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, and the like. In an embodiment, the network architecture (100) may include one or more computing devices or User Equipment (UEs) (108-1), (108-2)…(108-N) associated with one or more users (110-1), (110-2)…(110-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (110-1), (110-2)…(110-N) may be individually referred to as the user (110) and collectively referred to as the users (110). Similarly, a person of ordinary skill in the art will understand that the one or more UEs (108-1), (108-2)…(108-N) may be individually referred to as the UE (108) and collectively referred to as the UEs (108). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three UEs (108) are depicted in FIG. 1, any number of UEs (108) may be included without departing from the scope of the ongoing description.
[0071] In an embodiment, the UE (108) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the UE (108) may include, but is not limited to, smart phones, smart watches, smart sensors (e.g., a mechanical sensor, a thermal sensor, an electrical sensor, a magnetic sensor, etc.), networked appliances, networked peripheral devices, networked lighting systems, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart televisions (TVs), computers, smart security systems, smart home systems, other devices for monitoring or interacting with or for the user (110) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the UE (108) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0072] In an embodiment, the UE (108) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the UE (108) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device. Further, the UE (108) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (110) or an entity such as a touch pad, a touch enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the UE (108) may not be restricted to the mentioned devices and various other devices may be used.
[0073] In FIG. 1, the UE (108) may communicate with the paging network function (102) through the network (104). In particular, the UE (108) may be communicatively coupled with the network (104). The coupling includes receiving, by the network (104), a connection request from the UE (108). Upon receiving the connection request, the coupling includes sending, by the network (104), an acknowledgment of the connection request to the UE (108). Further, the coupling includes transmitting a plurality of signals in response to the connection request. The plurality of signals is responsible for communicating with the paging network function (102) to execute the paging operations in the network (104).
[0074] In an embodiment, the network (104) may include at least one of the 4G network, the 5G network, the 6G network, or the like. The network (104) may enable the UE (108) to communicate with other devices in the network architecture (100) and/or with the paging network function (102). The network (104) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (104) may be implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. In another embodiment, the network (104) includes, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0075] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0076] FIG. 2A illustrates a block diagram (200A) of the paging network function (102) for executing paging operations, in accordance with embodiments of the present disclosure. In one example embodiment, the paging network function (102) may be implemented at an edge of the network.
[0077] In an aspect, the paging network function (102) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the paging network function (102). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0078] Referring to FIG. 2A, the paging network function (102) may include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication to/from the paging network function (102). The interface(s) (206) may also provide a communication pathway for one or more components of the paging network function (102). Examples of such components include, but are not limited to, processing unit/engine(s) (207) and a database (215).
[0079] In an embodiment, the processing unit/engine(s) (207) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (207). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (207) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (207) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (207). In such examples, the paging network function (102) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the paging network function (102) and the processing resource. In other examples, the processing engine(s) (207) may be implemented by electronic circuitry.
[0080] In an embodiment, the processing engine (207) may include a plurality of units. The plurality of units of the processing engine (207) may include, but is not limited to, a receiving unit (208), a determining unit (210), a broadcasting unit (212), and a communication unit (214). These units are explained in detail in conjunction with FIG. 2B for a better understanding.
[0081] In an embodiment, the database (215) may include data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engine (207). In an embodiment, the database (215) may include, but is not limited to, a relational database, a distributed database, a cloud-based database, or the like. Although the database (215) is shown to be implemented within the paging network function (102), in some embodiments, the database (215) may be external to the paging network function (102).
[0082] FIG. 2B illustrates an example of network architecture (200B) for executing paging operations, in accordance with an embodiment of the present disclosure.
[0083] As shown in FIG. 2B, the network architecture (200B) includes the paging network function (102), a plurality of cells (216-(1-M)), a central location (218), an AMF (220), and a plurality of NFs (222-(1-P)). The AMF (220) may be deployed in the central location (218). In an implementation, the paging network function (102), the plurality of cells (216-(1-M)), and the AMF (220) may communicate with each other and other network architecture components. Additionally, the network architecture (200B) may include a plurality of UEs (not shown in FIG. 2B).
[0084] Although a single AMF (220) is shown in FIG. 2B, there may be more than one AMF deployed in the network architecture (200B). In examples, the plurality of NFs (222-(1-P)) may include 5G-Core Network (5GCN) NFs such as session management functions (SMFs), short message service functions (SMSFs), policy control functions (PCFs), etc. In one example, the paging network function (102) may be implemented at the edge of the network.
[0085] In operation, the receiving unit (208) may be configured to receive a paging request from the AMF (220) via a first network interface. The first network interface may be an Npef interface. In examples, for the Npef interface, the paging network function (102) acts as a producer, and the AMF (220) and radio access network (RAN) act as consumers. In an example, the paging request may be initiated to locate a user equipment (UE) (108) in the network (104). The paging request includes a unique identifier for the UE (108) and a timestamp of the paging request. The timestamp indicates the exact time when the paging request was initiated by the AMF (220). This helps in tracking and coordinating the paging process, especially if multiple paging attempts are made. In an implementation, when the AMF (220) needs to reach the UE (108), for example, when it has incoming data, a call, or needs to re-establish a session, the AMF (220) sends the paging request to the paging network function (102) via the first network interface.
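The paging request described above carries at least a unique UE identifier and a timestamp. A minimal sketch of such a message follows; the field names and the IMSI-style identifier are illustrative assumptions, not a standardized message format:

```python
from dataclasses import dataclass, field
import time


@dataclass
class PagingRequest:
    """Illustrative paging request carried from the AMF over the Npef interface."""
    ue_id: str                                            # unique identifier for the UE (108)
    timestamp: float = field(default_factory=time.time)   # when the AMF (220) initiated the request


# Example: the AMF constructs a request for a UE it needs to reach
req = PagingRequest(ue_id="imsi-404861234567890")
```

The timestamp recorded at creation lets the paging network function correlate and order multiple attempts for the same UE.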
[0086] In an implementation, in response to receiving the paging request from the AMF (220), the determining unit (210) may be configured to process the paging request to determine an optimized set of cells to be paged. In examples, the optimized set of cells may include one or more cells from amongst the plurality of cells 216-(1-M). For instance, the optimized set of cells may refer to cells where the UE (108) could be potentially located. In an implementation, the determining unit (210) may be configured to process the paging request using one or more predictive algorithms. In examples, the one or more predictive algorithms include at least one of artificial intelligence (AI) and machine learning (ML) algorithms, linear regression, logistic regression, decision trees, random forests, support vector machines (SVM), K-nearest neighbors (KNN), neural networks, Bayesian networks, etc. The determining unit (210) may be configured to determine the set of cells to be paged based on network conditions, device mobility data such as mobility patterns, device behavior, and historical data of paging operations. In an implementation, the determining unit (210) may process the paging request using at least one of an artificial intelligence (AI) and/or machine learning (ML) algorithm to determine the optimized set of cells. The determining unit (210) is configured to periodically update the one or more predictive algorithms based on feedback from previous paging operations. In examples, the predictive algorithms may be trained on historical data to predict UE movement and behavior.
[0087] In an example, for using the AI/ML to make predictive decisions, the determining unit (210) may collect UE data that is fed into an AI/ML algorithm to make predictive decisions. Some features that would be identified in the data may include historical paging data, user mobility patterns, network load, time of day, signal strength and coverage maps, and device type and behavior, etc. The historical paging data may refer to data that includes where the UE was previously located when a paging request occurred. The user mobility patterns include location history, movement history, UE movement speed, and direction. The network load includes availability and congestion levels at various cells. The time of day includes identifying user movement in predictable patterns during the day (home, work, etc.). The signal strength and coverage maps include propagation patterns of the signal across different cells. The device type and behavior may include information about the UE's activity, such as idle or active. Further, one or more AI/ML models may be used to process the paging request and predict the optimal cells. In examples, reinforcement learning (RL), supervised learning (that includes predictive algorithms), clustering algorithms, etc., may be used. The RL may use continuous learning to fine-tune the cell selection strategy based on feedback from the network environment. For supervised learning, the determining unit (210) may use historical data to train a predictive model that forecasts cells for paging. For clustering algorithms, exemplary techniques such as K-means or density-based spatial clustering of applications with noise (DBSCAN) may be used, which involves identifying similar locations or behaviors to form clusters of cells where a UE device is most likely to be found.
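The features listed above can be assembled into a single feature vector per paging decision. A minimal sketch follows, assuming hypothetical record layouts for the stored UE data and current network state; the keys are illustrative and not a standardized schema:

```python
def extract_features(ue_record, network_state):
    """Assemble the feature set described above for one paging decision.

    ue_record and network_state are assumed dictionaries populated from
    historical paging data and real-time network monitoring, respectively.
    """
    return {
        "last_known_cell": ue_record["last_cell"],       # historical paging data
        "avg_speed_kmh": ue_record["avg_speed_kmh"],     # user mobility pattern
        "hour_of_day": ue_record["hour"],                # time-of-day routine (home, work, etc.)
        "cell_load": network_state["load"],              # availability / congestion level
        "rsrp_dbm": network_state["rsrp"],               # signal strength from coverage maps
        "is_active": ue_record["active"],                # device behavior: idle or active
    }
```

Such a vector would then be fed to the chosen RL, supervised, or clustering model to predict the cells where the UE is most likely to be found.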
[0088] In aspects, an optimization layer may be used to refine selection based on factors such as network load, energy efficiency and/or multifactor balance, such as coverage vs. load balancing. In an implementation, data collection may be performed that includes historical paging, mobility, network performance data, etc. The collected data may be pre-processed, which includes cleaning the data, transforming the data into a common format, and identifying features (e.g., device behavior patterns, location history). One or more AI/ML models may be chosen that can include using a supervised learning algorithm (like Random Forest or XGBoost) or an RL agent. The one or more AI/ML models may be trained on historical paging success/failure data and associated cell sets. The one or more AI/ML models may be configured to perform prediction for each incoming paging request. For example, the one or more AI/ML models may predict the set of cells most likely to reach the device based on the trained model. One or more optimization algorithms (e.g., a genetic algorithm) may be applied to refine the set of cells considering current network load, coverage overlap, energy consumption, etc. Post optimization, one or more AI/ML models that performed well may be deployed. In an example, the selected AI/ML model may be integrated into the paging system in real-time for ongoing optimization.
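The train-on-history, predict-per-request flow described above can be sketched with a deliberately simple stand-in for the supervised model (a per-UE success-frequency counter rather than a Random Forest or XGBoost); the class and method names are illustrative assumptions:

```python
from collections import Counter, defaultdict


class HistoricalCellPredictor:
    """Toy supervised predictor: for each UE, rank cells by how often past
    paging attempts succeeded there. A stand-in for a trained ML model."""

    def __init__(self):
        # ue_id -> Counter mapping cell_id to count of successful pages
        self.history = defaultdict(Counter)

    def train(self, records):
        """records: iterable of (ue_id, cell_id, success) from historical paging data."""
        for ue_id, cell_id, success in records:
            if success:
                self.history[ue_id][cell_id] += 1

    def predict(self, ue_id, top_k=3):
        """Predict the set of cells most likely to reach the UE."""
        return [cell for cell, _ in self.history[ue_id].most_common(top_k)]
```

In a fuller implementation, the predicted set would then pass through the optimization layer (e.g., a genetic algorithm weighing load, coverage overlap, and energy consumption) before broadcast.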
[0089] The results of the selected AI/ML models may be evaluated based on key performance indicators (KPIs) like paging success rate, signaling overhead, network resource usage, etc. Further, the selected AI/ML models may be deployed in a real-world scenario alongside traditional paging strategies to compare performance. In aspects, the selected AI/ML models may be fine-tuned by continuously feeding successful/unsuccessful paging attempts back into the selected AI/ML models to improve accuracy over time (especially useful with RL). Based on the feedback, parameter tuning may be performed to adjust performance.
[0090] In an implementation, the determining unit (210) may collect real-time data on network load, signal strength, and congestion levels in the plurality of cells 216-(1-M). The determining unit (210) may further analyze historical movement patterns and current location data of the UE (108) to determine likely locations. Additionally, the determining unit (210) may track UE usage patterns, such as active times, application types, and typical behavior. The determining unit (210) may review past paging operations to identify trends and success rates in different cells 216-(1-M). Based on the above information, the determining unit (210) may determine the optimized set of cells to be paged. In an implementation, the determining unit (210) may rank the plurality of cells 216-(1-M) based on criteria such as signal strength, historical paging success, and predicted UE location. Further, the determining unit (210) may select the set of cells from amongst the plurality of cells 216-(1-M) based on the ranking to receive the paging request.
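The ranking-and-selection step described above can be sketched as a weighted score over the stated criteria. The weights, the 0-to-1 normalization of each criterion, and the record keys are illustrative assumptions:

```python
def rank_cells(cells, weights=(0.4, 0.4, 0.2)):
    """Rank candidate cells by a weighted score of signal strength,
    historical paging success rate, and predicted UE-location probability.

    Each cell is an assumed dict with keys: cell_id, signal, success_rate,
    location_prob, all pre-normalized to the range [0, 1].
    """
    w_sig, w_hist, w_loc = weights

    def score(cell):
        return (w_sig * cell["signal"]
                + w_hist * cell["success_rate"]
                + w_loc * cell["location_prob"])

    return sorted(cells, key=score, reverse=True)


def select_paging_set(cells, k):
    """Select the top-k ranked cells to receive the paging request."""
    return [cell["cell_id"] for cell in rank_cells(cells)[:k]]
```

Paging only the top-ranked cells, rather than every cell in the tracking area, is what reduces signaling overhead while preserving a high paging success rate.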
[0091] In an implementation, the broadcasting unit (212) may be configured to broadcast the paging request to the set of cells via a second network interface. The second network interface may be an Ncu interface. For the Ncu interface, the paging network function (102) is configured to act as a consumer, and the RAN is configured to act as a producer. In examples, each cell that receives the paging request may perform its own paging operation. For example, each cell may attempt to locate the UE (108) based on the information included in the paging request and check if the UE (108) is reachable within its coverage area.
[0092] After the paging request has been broadcasted, the paging network function (102) may wait for responses from the cells. Cells that successfully page the UE (108) may send a response back to the paging network function (102). In an implementation, in response to broadcasting the paging request to the set of cells, the communication unit (214) may receive a response to the paging request from a cell within the set of cells that has successfully paged the UE (108). In an implementation, the communication unit (214) may receive a success response from the cell via the first interface (i.e., the Npef interface). In an implementation, the determining unit (210) may process the response to determine if the UE (108) has been successfully located and paged. Upon receiving the response from the cell, the communication unit (214) may initiate a data session between the UE (108) and the AMF (220).
[0093] According to an implementation, if no response is received from any cell within the set of cells, the communication unit (214) may notify the AMF (220), indicating that the UE (108) is unreachable. In an example, the communication unit (214) may notify the AMF (220) via a third network interface. In an example, the third network interface may be an Namf interface. The broadcasting unit (212) is further configured to retry broadcasting the paging request to the set of cells via the second network interface if no response is received within a predetermined time period. The predetermined time period may be the duration for which the paging request is considered valid. If the UE (108) does not respond within this timeframe, the paging request may be considered unsuccessful.
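The retry behavior described above, re-broadcasting when no response arrives within the predetermined validity period and finally reporting the UE as unreachable, can be sketched as follows. The `broadcast` callable, its `timeout` parameter, and the returned status dictionary are illustrative assumptions:

```python
def page_with_retry(broadcast, max_attempts=3, validity_s=5.0):
    """Retry broadcasting a paging request until a cell responds within the
    predetermined validity period, else report the UE as unreachable.

    broadcast: hypothetical callable that sends the request over the Ncu
    interface and returns the id of the responding cell, or None on timeout.
    """
    for attempt in range(1, max_attempts + 1):
        cell = broadcast(timeout=validity_s)
        if cell is not None:
            # A cell successfully paged the UE; a data session can be initiated.
            return {"status": "PAGED", "cell": cell, "attempts": attempt}
    # No cell responded within any validity window: notify the AMF via Namf.
    return {"status": "UNREACHABLE", "attempts": max_attempts}
```

On an `UNREACHABLE` result, the communication unit would notify the AMF over the third (Namf) interface as described above.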
[0094] The determining unit (210) is further configured to generate a paging data report detailing the status of the paging request and the reason for any failure to reach the UE (108). In examples, the paging data report may include information about the paging request, such as the unique identifier of the UE (108), the timestamp of the paging request, and the set of cells to which the paging request was broadcasted. The paging data report may also include a summary of the responses received from the cells, including which cells successfully paged the UE (108) and which did not respond. If the paging request was unsuccessful in reaching the UE (108), the paging data report may include the reasons for failure. These may include network problems such as congestion, cell overload, or connectivity issues; specific issues related to individual cells, such as malfunction or poor signal conditions; or potential reasons related to the UE (108) itself, for example, being out of coverage or turned off.
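The contents of the paging data report described above can be assembled as in the sketch below; the dictionary keys and status values are illustrative assumptions:

```python
def build_paging_report(ue_id, timestamp, paged_cells, responses, failure_reason=None):
    """Assemble a paging data report summarizing one paging request.

    responses: assumed dict mapping cell_id -> bool (True if that cell
    successfully paged the UE).
    """
    responded = [cell for cell, ok in responses.items() if ok]
    report = {
        "ue_id": ue_id,                    # unique identifier of the UE (108)
        "timestamp": timestamp,            # when the AMF initiated the request
        "paged_cells": list(paged_cells),  # set of cells the request was broadcasted to
        "responding_cells": responded,     # cells that successfully paged the UE
        "silent_cells": [c for c in paged_cells if c not in responded],
        "status": "SUCCESS" if responded else "FAILED",
    }
    if failure_reason:
        # e.g., network congestion, cell malfunction, UE out of coverage or off
        report["failure_reason"] = failure_reason
    return report
```

Feeding such reports back into the predictive algorithms is what enables the periodic model updates mentioned earlier.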
[0095] According to an implementation, the UE (108) that has been successfully paged may initiate a connection establishment procedure with the AMF (220). In examples, the UE (108) may establish a session with the AMF (220). In an example, the session may be a voice session or a data session. Once the session is established, the pending data/signaling may be delivered to the UE (108). In an example, the pending data/signaling may include one or more messages or incoming calls to be routed to the UE (108).
[0096] FIG. 3 illustrates an example flow diagram (300) for executing paging operations, in accordance with an embodiment of the present disclosure.
[0097] At step (302) of the flow diagram (300), the processing engine (207) may receive a paging request from the AMF (220). In an example, the paging request may be initiated to locate a user equipment (UE) (108) in the network (104). In an implementation, the processing engine (207) may receive the paging request from the AMF (220) via the Npef interface.
[0098] At step (304) of the flow diagram (300), the processing engine (207) may process the paging request to determine an optimized set of cells to be paged. In examples, the optimized set of cells may include one or more cells from amongst the plurality of cells 216-(1-M). For instance, the optimized set of cells may refer to cells where the UE (108) could be potentially located. In an implementation, the processing engine (207) may process the paging request using artificial intelligence (AI) and/or machine learning (ML) algorithms to determine the optimized set of cells.
[0099] At step (306) of the flow diagram (300), the processing engine (207) may broadcast the paging request to the optimized set of cells. In an implementation, the processing engine (207) may broadcast the paging request to the optimized set of cells via the Ncu interface.
[00100] At step (308) of the flow diagram (300), the processing engine (207) may receive a success response to the paging request from a cell that successfully paged the UE (108). In an example, the cell may belong to the optimized set of cells. In an implementation, the processing engine (207) may receive the success response from the cell via the Npef interface.
[00101] According to an implementation, if there is no response to the paging request from the cells, the flow diagram (300) proceeds to step (310) (‘No’ Branch). If there is a success response to the paging request from the cell that successfully paged the UE (108), then the flow diagram (300) proceeds to step (312) (‘Yes’ Branch).
[00102] At step (310) of the flow diagram (300), if there is no response to the paging request from any of the cells, the processing engine (207) may notify the AMF (220) that the UE (108) is unreachable (for example, due to a connection failure or because the UE (108) is switched off). In an example, the processing engine (207) may notify the AMF (220) that the UE (108) is unreachable using a Namf interface.
[00103] At step (312) of the flow diagram (300), the UE (108) that has been successfully paged may initiate a connection setup procedure with the AMF (220).
[00104] At step (314) of the flow diagram (300), a session may be established between the UE (108) and the AMF (220).
[00105] At step (316) of the flow diagram (300), an incoming call or one or more messages may be routed to the UE (108).
[00106] FIG. 4 illustrates an exemplary process flow of a method (400) for executing paging operations, in accordance with an embodiment of the present disclosure.
[00107] At step (402), the method (400) includes receiving, by the receiving unit (208) of the paging network function (102), a paging request from the AMF (220) via a first network interface. The paging request includes a unique identifier for the UE (108) and a timestamp of the paging request. The first network interface is the Npef interface.
[00108] At step (404), the method (400) includes processing, by the determining unit (210), the paging request to determine a set of cells to be paged. In one example, the determining unit (210) may use one or more predictive algorithms. The one or more predictive algorithms include at least one of artificial intelligence (AI) and machine learning (ML) algorithms. In examples, the set of cells to be paged are determined based on network conditions, device mobility data such as mobility patterns, device behavior, and historical data of paging operations. In examples, the processing engine (207) may periodically update the one or more predictive algorithms based on feedback from previous paging operations.
[00109] At step (406), the method (400) includes broadcasting, by the broadcasting unit (212), the paging request to the set of cells via a second network interface. The second network interface is the Ncu interface.
[00110] At step (408), the method (400) includes receiving, by the receiving unit (208), a response to the paging request from a cell within the set of cells that has successfully paged the UE (108) via the first network interface.
[00111] At step (410), the method (400) includes initiating, by the communication unit (214), a data session between the UE (108) and the AMF (220) upon receiving the response from the cell.
[00112] At step (412), the method (400) includes notifying, by the communication unit (214), the AMF (220) via a third network interface if no response is received from any cell within the set of cells, indicating that the UE (108) is unreachable. The third network interface is the Namf interface.
[00113] In an implementation, the processing engine (207) may retry broadcasting the paging request to the set of cells via the second network interface if no response is received within a predetermined time period. The processing engine (207) may generate a paging data report detailing the status of the paging request and the reason for any failure to reach the UE (108).
[00114] In an exemplary embodiment, a User Equipment (UE) (108) communicatively coupled with a network (104) is disclosed. The coupling includes a step of receiving, by the network (104), a connection request from the UE (108). The coupling includes a step of sending, by the network (104), an acknowledgment of the connection request to the UE (108). The coupling includes a step of transmitting a plurality of signals in response to the connection request. Based on the connection request, an execution of paging operations is performed.
[00115] In an exemplary embodiment, a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for receiving (402) a paging request from an access and mobility management function (AMF) (220) via a first network interface, processing (404) the paging request using one or more predictive algorithms to determine a set of cells to be paged, broadcasting (406) the paging request to the set of cells via a second network interface, receiving (408) a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) (108) via the first network interface, initiating (410) a data session between the UE (108) and the AMF (220) upon receiving the response from the cell, and notifying (412) the AMF (220) via a third network interface if no response is received from any cell within the set of cells, indicating that the UE (108) is unreachable.
[00116] FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented. As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[00117] The main memory (530) may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. The mass storage device (550) includes, but is not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, a Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks.
[00118] The bus (520) communicatively couples the processor (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[00119] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[00120] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
[00121] The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
[00122] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
[00123] The present disclosure provides a technical advancement by providing the system and the method for executing paging operations efficiently. In an aspect, as the resources are deployed over edge infrastructure, the footprint may be smaller. Because the footprint is smaller, the system can be dedicatedly deployed for low-volume paging requirements such as machine-to-machine (M2M) and Internet of Things (IoT) use cases, resulting in efficient energy management. Additionally, since the system and the method leverage artificial intelligence (AI) and/or machine learning (ML) algorithms, they can dynamically adapt paging strategies based on device mobility patterns, user behavior, network conditions, and other contextual information. According to aspects of the present disclosure, the system and the method can be scaled independently for each edge location based on the device ecosystem, paging patterns, and the like. Further, since the system and the method are deployed close to the plurality of cells, the latency in the execution of the paging request is significantly reduced. Additionally, separating paging from the AMF and integrating advanced paging techniques encourages innovation and allows developers from various backgrounds to contribute to and enhance paging strategies.
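For illustration only, the adaptive cell-selection behavior described above may be sketched as follows; the function and data structures here are hypothetical aids to understanding, not part of the claimed system, and assume per-UE paging history is available.

```python
# Hypothetical sketch: rank cells by past paging successes for a UE.
# All names (determine_cells, history format) are illustrative assumptions.
from collections import Counter

def determine_cells(ue_id, history, all_cells, max_cells=3):
    """Return the cells most likely to reach the UE.

    history: list of (ue_id, cell_id) pairs from past successful pages.
    Falls back to paging all cells when no history exists for this UE.
    """
    hits = Counter(cell for uid, cell in history if uid == ue_id)
    if not hits:
        return list(all_cells)  # no prior data: page everywhere
    ranked = [cell for cell, _ in hits.most_common()]
    return ranked[:max_cells]
```

In a deployed system, the ranking would also weigh network conditions and mobility patterns, and the learned model would be refreshed periodically from paging feedback, as the disclosure describes.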
ADVANTAGES OF THE PRESENT DISCLOSURE
[00124] The system and the method reduce the energy footprint compared to centralized solutions, making it suitable for low-paging use cases like M2M and IoT.
[00125] The system and the method utilize AI and ML algorithms to dynamically adapt paging strategies based on device mobility, user behavior, and network conditions, enhancing the efficiency of paging operations.
[00126] The system and the method are adapted to handle the complexity of ultra-dense networks with numerous small cells and access points, optimizing connectivity for high mobility users.
[00127] The system and the method can be scaled independently at each edge location according to needs and paging patterns, providing flexibility and resource optimization.
CLAIMS
We claim:
1. A system (200B) for executing paging operations, the system (200B) comprising:
a paging network function (102) comprising a memory (204) and a processing engine (207) coupled to the memory (204), the processing engine (207) comprising:
a receiving unit (208) configured to receive a paging request from an access and mobility management function (AMF) (220) via a first network interface;
a determining unit (210) configured to process the paging request to determine a set of cells to be paged;
a broadcasting unit (212) configured to broadcast the paging request to the set of cells via a second network interface;
the receiving unit (208) configured to receive a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) (108) via the first network interface;
a communication unit (214) configured to:
initiate a data session between the UE (108) and the AMF (220) upon receiving the response from the cell; and
notify the AMF (220) via a third network interface if no response is received from any cell within the set of cells, indicating that the UE (108) is unreachable.
2. The system (200B) as claimed in claim 1, wherein the first network interface is an Npef interface, the second network interface is an Ncu interface, and the third network interface is an Namf interface.
3. The system (200B) as claimed in claim 1, wherein the processing engine (207) is configured to determine the set of cells to be paged based on network conditions, device mobility data, device behavior, and historical data of paging operations.
4. The system (200B) as claimed in claim 1, wherein the paging request broadcasted to the set of cells includes a unique identifier for the UE (108) and a timestamp of the paging request.
5. The system (200B) as claimed in claim 1, wherein the processing engine (207) is further configured to generate a paging data report detailing a status of the paging request and a reason for any failure to reach the UE (108).
6. The system (200B) as claimed in claim 1, wherein the broadcasting unit (212) is further configured to retry broadcasting the paging request to the set of cells via the second network interface if no response is received within a predetermined time period.
7. The system (200B) as claimed in claim 1, wherein the determining unit (210) is configured to apply learnings periodically based on feedback from previous paging operations, to improve the determination of the cells.
8. A method (400) for executing paging operations, the method (400) comprising:
receiving (402), by a receiving unit (208), a paging request from an access and mobility management function (AMF) (220) via a first network interface;
processing (404), by a determining unit (210), the paging request to determine a set of cells to be paged;
broadcasting (406), by a broadcasting unit (212), the paging request to the set of cells via a second network interface;
receiving (408), by the receiving unit (208), a response to the paging request from a cell within the set of cells that has successfully paged a user equipment (UE) (108) via the first network interface;
initiating (410), by a communication unit (214), a data session between the UE (108) and the AMF (220) upon receiving the response from the cell; and
notifying (412), by the communication unit (214), the AMF (220) via a third network interface if no response is received from any cell within the set of cells, indicating that the UE (108) is unreachable.
9. The method (400) as claimed in claim 8, wherein the first network interface is an Npef interface, the second network interface is an Ncu interface, and the third network interface is an Namf interface.
10. The method (400) as claimed in claim 8, wherein the set of cells to be paged are determined based on network conditions, device mobility data, device behavior, and historical data of paging operations.
11. The method (400) as claimed in claim 8, wherein the paging request broadcasted to the set of cells includes a unique identifier for the UE (108) and a timestamp of the paging request.
12. The method (400) as claimed in claim 8, further comprising generating, by the processing engine (207), a paging data report detailing a status of the paging request and a reason for any failure to reach the UE (108).
13. The method (400) as claimed in claim 8, further comprising retrying, by the broadcasting unit (212), broadcasting the paging request to the set of cells via the second network interface if no response is received within a predetermined time period.
14. The method (400) as claimed in claim 8, further comprising periodically applying learnings, by the determining unit (210), based on feedback from previous paging operations, to improve the determination of the cells.
15. A user equipment (UE) (108) communicatively coupled with a network (104), the coupling comprises steps of:
receiving, by the network (104), a connection request from the UE (108);
sending, by the network (104), an acknowledgment of the connection request to the UE (108); and
transmitting a plurality of signals in response to the connection request, wherein based on the connection request an execution of paging operations is performed by a method (400) as claimed in claim 8.
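As an illustrative aid only, the sequence of steps (402) to (412), together with the retry of claim 13, may be sketched as follows; every callable and parameter name here is a hypothetical stub introduced for explanation and does not form part of the claims.

```python
# Illustrative sketch of method (400); amf, cells_iface, and select_cells
# are hypothetical stubs standing in for the three network interfaces.
def execute_paging(amf, cells_iface, select_cells, max_retries=1):
    request = amf.receive_paging_request()                  # step 402 (first interface)
    target_cells = select_cells(request)                    # step 404: cells to page
    for _ in range(1 + max_retries):                        # retry per claim 13
        cells_iface.broadcast(request, target_cells)        # step 406 (second interface)
        cell = cells_iface.await_response(target_cells)     # step 408: responding cell
        if cell is not None:
            amf.initiate_data_session(request["ue"], cell)  # step 410
            return True
    amf.notify_unreachable(request["ue"])                   # step 412 (third interface)
    return False
```

The sketch returns True when a cell successfully pages the UE and a data session is initiated, and False after the broadcast (and its retries) elapse with no response, at which point the AMF is notified that the UE is unreachable.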
| # | Name | Date |
|---|---|---|
| 1 | 202321066643-STATEMENT OF UNDERTAKING (FORM 3) [04-10-2023(online)].pdf | 2023-10-04 |
| 2 | 202321066643-PROVISIONAL SPECIFICATION [04-10-2023(online)].pdf | 2023-10-04 |
| 3 | 202321066643-POWER OF AUTHORITY [04-10-2023(online)].pdf | 2023-10-04 |
| 4 | 202321066643-FORM 1 [04-10-2023(online)].pdf | 2023-10-04 |
| 5 | 202321066643-FIGURE OF ABSTRACT [04-10-2023(online)].pdf | 2023-10-04 |
| 6 | 202321066643-DRAWINGS [04-10-2023(online)].pdf | 2023-10-04 |
| 7 | 202321066643-DECLARATION OF INVENTORSHIP (FORM 5) [04-10-2023(online)].pdf | 2023-10-04 |
| 8 | 202321066643-FORM-26 [28-11-2023(online)].pdf | 2023-11-28 |
| 9 | 202321066643-Proof of Right [06-03-2024(online)].pdf | 2024-03-06 |
| 10 | 202321066643-DRAWING [03-10-2024(online)].pdf | 2024-10-03 |
| 11 | 202321066643-COMPLETE SPECIFICATION [03-10-2024(online)].pdf | 2024-10-03 |
| 12 | 202321066643-FORM-9 [24-10-2024(online)].pdf | 2024-10-24 |
| 13 | Abstract 1.jpg | 2024-11-21 |
| 14 | 202321066643-FORM 18A [12-01-2025(online)].pdf | 2025-01-12 |
| 15 | 202321066643-Power of Attorney [23-01-2025(online)].pdf | 2025-01-23 |
| 16 | 202321066643-Form 1 (Submitted on date of filing) [23-01-2025(online)].pdf | 2025-01-23 |
| 17 | 202321066643-Covering Letter [23-01-2025(online)].pdf | 2025-01-23 |
| 18 | 202321066643-CERTIFIED COPIES TRANSMISSION TO IB [23-01-2025(online)].pdf | 2025-01-23 |
| 19 | 202321066643-FORM 3 [24-02-2025(online)].pdf | 2025-02-24 |
| 20 | 202321066643-FER.pdf | 2025-03-04 |
| 21 | 202321066643-ORIGINAL UR 6(1A) FORM 1-070425.pdf | 2025-04-28 |
| 22 | 202321066643-FER_SER_REPLY [16-06-2025(online)].pdf | 2025-06-16 |
| 23 | 202321066643-COMPLETE SPECIFICATION [16-06-2025(online)].pdf | 2025-06-16 |
| 24 | 202321066643-PatentCertificate11-07-2025.pdf | 2025-07-11 |
| 25 | 202321066643-IntimationOfGrant11-07-2025.pdf | 2025-07-11 |
| 1 | 202321066643_SearchStrategyNew_E_Search_Strategy_202321066643E_19-02-2025.pdf | |