
System And Method To Allocate Physical Uplink Resources In A Network

Abstract: The present disclosure provides a system (108) and a method (600) for allocating physical uplink resources to one or more user equipment (UEs) (104) in a network (106). The method (600) comprising determining (602), by a radio resource management (RRM) unit (212), a number of downlink (DL) slots corresponding to an uplink (UL) slot. The method (600) comprising calculating (604), by the RRM unit (212), a number of UEs (104) scheduled for each DL slot. The method (600) comprising calculating (606), by the RRM unit (212), a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs (104). The method (600) comprising allocating (608), by a medium access control (MAC) unit (214), the number of calculated physical uplink resources to the one or more UEs (104) in the network (106) based on a number of parameters. Figure.6


Patent Information

Application #
202321049087
Filing Date
20 July 2023
Publication Number
50/2024
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. BHATNAGAR, Pradeep Kumar
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
3. KANCHARLAPALLI, N L Sairambabu
Flat #404, Pragathi Plaza Building, Next to SFS Church, Hebbagodi, Electronic City, Bengaluru - 560100, Karnataka, India.
4. RAO, Srinivasa Vundavilli
D. No. 61, 9th Cross, Kempanna Layout, Cholanayakanahalli, Bangalore - 560032, Karnataka, India.
5. KRISHNA, Kakarla Vamsi
5, Anna Nagar Post Office Quarters, Ground Floor, Anna Nagar, Chennai, Tamil Nadu - 600040, India.
6. DUTTA, Tushar
21, 3rd Main, MEG Layout, Mahadevpura, Bangalore - 560039, Karnataka, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality : India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure relates to wireless cellular communications,
and specifically to a system and a method for Physical Uplink Control Channel (PUCCH) resource allocation during Radio Resource Control (RRC) setup/RRC reconfiguration procedure.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The term PUCCH as used herein, refers to a Physical Uplink Control
Channel. It is a channel used in wireless communication systems for transmitting control information from a User Equipment (UE) to a base station (eNodeB in LTE or gNodeB in 5G NR).
[0005] The term HARQ as used herein, refers to a Hybrid Automatic Repeat
Request. The HARQ improves the reliability of data transmissions, particularly in
scenarios with high error rates or challenging channel conditions. The HARQ is
used to ensure the reliable delivery of data packets from the transmitter (e.g., base

station) to the receiver (e.g., user equipment) over the wireless channel.
[0006] The term PUCCH F0 as used herein, refers to PUCCH Format 0
(F0). It is a specific PUCCH format used for transmitting control information from the UE to the base station over the uplink channel. PUCCH F0 is primarily used for transmitting HARQ feedback from the UE to the base station.
[0007] The term PUCCH F2 as used herein, refers to PUCCH Format 2
(F2). It is a specific PUCCH format used for transmitting control information from the UE to the base station over the uplink channel. PUCCH F2 is primarily used for transmitting HARQ feedback from the UE to the base station, similar to PUCCH F0. However, F2 provides additional bits beyond the 1 or 2 bits typically supported by F0.
[0008] The term UCI as used herein, refers to Uplink Control Information.
The UCI refers to the control signals and feedback transmitted from the UE to the base station over the uplink channel in wireless communication systems. The UCI serves various purposes, including conveying critical control information and feedback, requesting uplink resources, and enabling efficient management of the uplink transmission.
[0009] The term PRB as used herein, refers to a Physical Resource Block.
The PRBs are fundamental units of resource allocation in LTE and 5G NR wireless communication systems. Each PRB represents a contiguous block of frequency and time resources used for transmitting data, control signals, or reference signals.
[0010] The term AMF as used herein, refers to an Access and Mobility
Management Function. The AMF is responsible for managing access and mobility of UEs, enforcing policies, ensuring security, and supporting various 5G features and capabilities.
[0011] The term NG-RAN as used herein, refers to Next Generation Radio
Access Network. The NG-RAN is designed to work with a new 5G core network

(5GC) architecture that provides enhanced capabilities, such as network slicing, edge computing, and network automation.
[0012] The term CSI used herein, refers to Channel State Information. It
refers to the information about the characteristics of the wireless communication channel between a transmitter and a receiver. The CSI is essential for optimizing the performance of wireless communication systems, including LTE (Long-Term Evolution) and 5G networks.
[0013] The term RRM layer used herein, refers to a Radio Resource
Management layer. The RRM layer is responsible for optimizing the utilization of available radio resources, such as frequency spectrum, transmit power, and time slots, to ensure efficient and reliable communication between the UEs and the network.
[0014] The term MAC layer used herein, refers to a Medium Access Control
layer. In wireless communication systems like 5G, the MAC layer is responsible for managing access to the shared communication medium, coordinating the transmission of data between different UEs or between UEs and the network infrastructure.
BACKGROUND
[0015] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0016] Wireless communication technology has rapidly evolved over the
past few decades. The first generation of wireless communication technology was based on analog technology that offered only voice services. Further, when the

second-generation (2G) technology was introduced, text messaging and data services became possible. The 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized the wireless communication with faster data speeds, improved network coverage, and security. Currently, the fifth generation (5G) technology is being deployed, with even faster data speeds, low latency, and the ability to connect multiple devices simultaneously.
[0017] As wireless technologies are advancing, there is a need to cope with
the 5G requirements and deliver a high level of service to the subscribers. Further, optimizing resource allocation in a network is crucial in wireless communication systems to ensure efficient utilization of available spectrum and network resources while meeting the quality of service (QoS) requirements of the users. Currently, the maximum number of resources is allocated in the wireless network irrespective of the bandwidth requirement of a user equipment (UE). This may lead to resource wastage and place an unnecessary allocation burden on the wireless network.
[0018] Thus, there is a need to implement dynamic resource allocation
algorithms that adjust resource allocation based on real-time network conditions, traffic demands, and channel quality. This allows for efficient utilization of resources and avoids over-provisioning.
[0019] Thus, the present disclosure provides a system and a method that can
mitigate the problems associated with the prior arts and that may dynamically allocate a required number of resources based on the bandwidth requirement of a user equipment (UE) in the network.
SUMMARY
[0020] In an exemplary embodiment, the present invention discloses a system
for allocating physical uplink resources to one or more user equipments (UEs) in a network. The system includes a processing unit and a memory coupled to the

processing unit. The memory includes computer-implemented instructions to configure the processing unit to determine, by a radio resource management (RRM) unit, a number of downlink (DL) slots corresponding to an uplink (UL) slot, and calculate, by the RRM unit, a number of UEs scheduled for each DL slot. The processing unit is further configured to calculate, by the RRM unit, a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs and allocate, by a medium access control (MAC) unit, the number of calculated physical uplink resources to the one or more UEs in the network based on a number of parameters.
[0021] In some embodiments, the number of parameters includes at least
one or more of a channel bandwidth requirement of each UE, a traffic load on the network, a channel state information (CSI), and a number of HARQ feedback transmission bits transmitted by each UE.
[0022] In some embodiments, the allocated physical uplink resources
include at least one of a physical uplink control channel (PUCCH) format 0 (F0) resource and a PUCCH format 2 (F2) resource.
[0023] In some embodiments, the PUCCH F0 format carries at least two
HARQ feedback transmission bits and at least one scheduling request (SR) bit.
[0024] In some embodiments, the PUCCH F2 format carries at least two
uplink control information (UCI) bits.
[0025] In an exemplary embodiment, the present invention discloses a
method for allocating physical uplink resources to one or more user equipment (UEs) in a network. The method includes determining, by a radio resource management (RRM) unit, a number of downlink (DL) slots corresponding to an uplink (UL) slot. The method includes calculating, by the RRM unit, a number of UEs scheduled for each DL slot. The method includes calculating, by the RRM unit, a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs. The method includes allocating, by a

medium access control (MAC) unit, the number of calculated physical uplink resources to the one or more UEs in the network based on a number of parameters.
[0026] In some embodiments, the number of parameters includes at least
one or more of a channel bandwidth requirement of each UE, a traffic load on the network, a channel state information (CSI), and a number of hybrid automatic repeat request (HARQ) feedback transmission bits transmitted by each UE.
[0027] In some embodiments, the allocated physical uplink resources
include at least one of a physical uplink control channel (PUCCH) format 0 (F0) resource and a PUCCH format 2 (F2) resource.
[0028] In some embodiments, the PUCCH F0 format carries at least two
HARQ feedback transmission bits and at least one scheduling request (SR) bit.
[0029] In some embodiments, the PUCCH F2 format carries at least two
uplink control information (UCI) bits.
[0030] In an exemplary embodiment, the present invention discloses a user
equipment (UE) communicatively coupled with a network. The coupling comprises steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE and transmitting a plurality of signals in response to the connection request. A physical uplink resources allocation to the UE in the network is performed by a method determining, by a radio resource management (RRM) unit, a number of downlink (DL) slots corresponding to an uplink (UL) slot. The method comprising calculating, by the RRM unit, a number of UEs scheduled for each DL slot. The method comprising calculating, by the RRM unit, a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs. The method comprising allocating, by a medium access control (MAC) unit, the number of calculated physical uplink resources to the one or more UEs in the network based on a number of parameters.

OBJECTS OF THE PRESENT DISCLOSURE
[0031] Some of the objects of the present disclosure, which at least one
embodiment herein satisfies, are as listed herein below.
[0032] It is an object of the present disclosure to facilitate PUCCH resource
allocation during RRC setup/RRC reconfiguration procedure.
[0033] It is an object of the present disclosure to accommodate and serve a
greater number of UEs in a limited bandwidth deployment scenario, by giving unnecessarily allocated PRBs to periodic PUCCH resources such as the SR and the CSI.
[0034] It is an object of the present disclosure to calculate and allocate
PUCCH Format 0 and PUCCH Format 2 Hybrid Automatic Repeat Request (HARQ) resources based on a number of scheduled UEs per slot and a number of Downlink (DL) slots.
[0035] It is an object of the present disclosure to allocate an exact number
of PUCCH Format 0 (F0)/Format 2 (F2) resources needed by a system instead of allocation of 8 (in case of Format 2) or 32 (in case of Format 0) PUCCH resources in the PUCCH resource set.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.

[0037] FIG. 1 illustrates an exemplary network architecture for
implementing a system for allocating physical uplink resources to one or more user equipments (UEs) in a network, in accordance with an embodiment of the present disclosure.
[0038] FIG. 2 illustrates an example block diagram of the system for
allocating physical uplink resources to the UEs in the network, in accordance with an embodiment of the present disclosure.
[0039] FIG. 3 illustrates an exemplary system architecture representing
implementation of a centralized unit (CU) and a distributed unit (DU), in accordance with an embodiment of the disclosure.
[0040] FIG. 4 illustrates an exemplary process flow diagram representing
allocation of Physical Uplink Control Channel (PUCCH) resources to the UEs in the network, in accordance with an embodiment of the disclosure.
[0041] FIG. 5 illustrates an exemplary computer system in which or with
which the embodiments of the present disclosure may be implemented.
[0042] FIG. 6 illustrates another exemplary flow diagram of a method for
allocating physical uplink resources to the one or more UEs in the network, in accordance with an embodiment of the present disclosure.
[0043] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – A plurality of users
104-1, 104-2…104-N – A plurality of computing devices
106 – Network
108 – System
200 – Block Diagram
202 – Processor(s)
204 – Memory
206 – Interface(s)
208 – Processing unit
210 – Database
212 – Radio resource management (RRM) unit
214 – Medium access control (MAC) unit
300 – System architecture
302 – Core network
304 – Radio Access Network (RAN)
306, 308 – gNodeB (gNB)
310 – Control unit (CU)
312, 314 – Distributed unit (DU)
400 – Flow Diagram
402 – gNB
404 – UE
406 – Access and Mobility Management Function (AMF)
500 – Computer system
510 – External storage device
520 – Bus
530 – Main memory
540 – Read only memory
550 – Mass storage device
560 – Communication port(s)
570 – Processor
600 – Flow Diagram
DETAILED DESCRIPTION
[0044] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0045] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
[0046] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0047] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a

procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0048] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0049] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0050] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one

or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0051] As wireless technologies are advancing, there is a need to cope with
the 5G requirements and deliver a high level of service to the subscribers. Further, optimizing resource allocation in a network is crucial in wireless communication systems to ensure efficient utilization of available spectrum and network resources while meeting the quality of service (QoS) requirements of the users. Currently, the maximum number of resources is allocated in the wireless network irrespective of the bandwidth requirement of a user equipment (UE). This may lead to resource wastage and place an unnecessary allocation burden on the wireless network.
[0052] For example, the current techniques provide Physical Uplink Control
Channel (PUCCH) resource allocation in sets of 8 (in case of F2 format) or 32 (in case of F0 format). Further, in lower bandwidths, Physical Resource Blocks (PRBs) are limited and may not accommodate a set of 8 PUCCH Format 2 (F2) resources, with each resource having a minimum of 2 PRBs considering lower code rates for robust transmission.
[0053] For example, when a gNB Time Division Duplex (TDD) system is
configured to schedule 4 UEs/Transmission Time Interval (TTI) for 10 MHz bandwidth with the traffic pattern configured as 7D1S2U, and when the S slot is considered as a Downlink (DL) slot with 10 DL symbols and 0 Uplink (UL) symbols, then the gNB may be designed in such a way that 4 DL slots with physical downlink shared channel (PDSCH) data specific feedback may be mapped onto one UL slot and the PDSCH data feedback of 8 DL slots is equally distributed to 2 UL slots. The UE may use the PUCCH Format 2 resources if more than 2 bits of Hybrid Automatic Repeat Request (HARQ) feedback are required to be transmitted.
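For illustration only, the mapping in the example above can be sketched in Python; the helper name and the even-spreading assumption are ours and not part of this disclosure.

```python
import math

def dl_slots_per_ul_slot(dl_slots: int, special_as_dl: int, ul_slots: int) -> int:
    """Hypothetical helper: number of DL(-like) slots whose PDSCH HARQ feedback
    is mapped onto one UL slot, assuming the feedback is spread evenly over the
    UL slots of the TDD period."""
    return math.ceil((dl_slots + special_as_dl) / ul_slots)

# 7D1S2U pattern from the example: 7 DL slots, 1 special slot treated as a DL slot
# (10 DL symbols, 0 UL symbols), and 2 UL slots per period.
print(dl_slots_per_ul_slot(7, 1, 2))  # -> 4, i.e. 4 DL slots' feedback per UL slot
```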
[0054] Thus, there is a need to implement dynamic resource allocation
system that adjusts resource allocation based on real-time network conditions, traffic

demands, and channel quality. This allows for efficient utilization of resources and avoids over-provisioning.
[0055] Further, there is a need in the art to provide a system and a method
that can mitigate the problems associated with the prior arts and that may allocate a required number of resources needed by the network for optimizing the network performance.
[0056] The disclosed system and the method facilitate accommodating
more UEs, as unnecessarily allocated PRBs may be given to periodic PUCCH resources such as Scheduling Request (SR) and Channel State Information (CSI). Hence, the disclosed system and method facilitate serving a greater number of UEs in the network.
[0057] The disclosed system and method are beneficial for lower
bandwidths, where a smaller number of PUCCH PRBs remains after deducting the Random-Access Channel (RACH) and PUCCH common PRBs.
[0058] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIG. 1- FIG. 6.
[0059] FIG. 1 illustrates an exemplary network architecture for
implementing a system (108) for allocating physical uplink resources to one or more user equipments (UEs) (104) in a network (106), in accordance with an 20 embodiment of the present disclosure.
[0060] As illustrated in FIG. 1, one or more computing devices (104-1, 104-
2…104-N) are connected to the system (108) through a network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) are collectively referred as computing devices (104) and individually referred as a computing device 104. One or more users (102-1, 102-2…102-N) provide one or more requests to the system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred as users (102) and individually referred as a user (102).

Further, the computing devices (104) may also be referred as a user equipment (UE) (104) or as UEs (104) throughout the disclosure.
[0061] In an embodiment, the computing device (104) includes, but is not
limited to, a mobile, a laptop, etc. Further, the computing device (104) includes one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, audio aid, microphone, or keyboard. Furthermore, the computing device (104) includes a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user 102 such as a touchpad, touch-enabled screen, electronic pen, and the like may be used.
[0062] In an embodiment, the network (106) includes, by way of example
but not limitation, at least a portion of one or more networks having one or more
nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) also includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private
network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The UE (104) may be communicatively coupled with the communication network (106). The communicative coupling comprises
receiving, from the UE (104), a connection request by the communication network (106), sending an acknowledgment of the connection request to the UE (104), and transmitting a plurality of signals in response to the connection request.
[0063] FIG. 2 illustrates an example block diagram of the system (108) for
allocating the physical uplink resources to the UEs (104) in the network (106), in

accordance with an embodiment of the present disclosure.
[0064] Referring to FIG. 2, in an embodiment, the system (108) includes
one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0065] In an embodiment, the system (108) includes an interface(s) (206).
The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing unit (208) and a database (210).
[0066] In an embodiment, the processing unit (208) may be implemented as
a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing unit (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing unit (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing

unit (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing unit (208). In such examples, the 5 system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing unit (208) may be implemented by electronic circuitry. The processing unit (208) includes a radio resource
management (RRM) unit (212) and a medium access control (MAC) unit (214) for allocating physical uplink resources to the UEs (104) in the network (106). The RRM unit (212) is configured to determine a number of DL slots corresponding to an UL slot, calculate a number of UEs (104) scheduled for each DL slot and calculate a number of physical uplink resources by processing the number of
determined DL slots and the number of calculated UEs (104). The MAC unit is configured to allocate the number of calculated physical uplink resources to the one or more UEs (104) in the network (106) based on a number of parameters. In some embodiments, the number of parameters includes at least one or more of a channel bandwidth requirement of each UE, a traffic load on the network, a channel state
information (CSI), and a number of hybrid automatic repeat request (HARQ) feedback transmission bits transmitted by each UE.
[0067] In an aspect, the number of DL slots corresponding to the UL slots
is determined by the network’s operation mode, particularly in Time Division Duplexing (TDD) scenarios. The frame structure in 5G consists of subframes, each
containing UL and DL slots. These frames are organized to efficiently manage the exchange of data between base stations and user devices. Depending on the deployment and network requirements, the configuration of these subframes can vary, ranging from symmetric distributions of UL and DL slots to asymmetric setups where one direction is prioritized over the other. Further, within each frame, the duration of individual slots is predefined, typically on the order of milliseconds, ensuring precise timing for data transmission. Factors influencing the allocation of UL and DL slots include the anticipated traffic patterns, the nature of services being provided (e.g., real-time applications requiring low latency), and the desired Quality of Service (QoS) levels. Furthermore, the network must account for overhead such as control signaling, synchronization signals, and guard intervals, which may consume additional slots and impact the overall UL/DL slot ratio.
[0068] In an aspect, calculating the number of UEs (104) scheduled for each
DL slot in the network (106) involves assessing system (108) capacity, resource allocation, and scheduling algorithms. Initially, the system’s maximum UE (104)
capacity within a DL slot is determined, factoring in parameters like available bandwidth and modulation techniques. Further, the resources are allocated for DL transmissions based on network configuration and traffic needs. Utilizing scheduling algorithms, UEs are assigned to DL slots considering factors like UE priority and channel conditions. To ensure equitable resource utilization, scheduled
UEs are evenly distributed across DL slots. The calculation entails dividing the total number of scheduled UEs by the number of DL slots available in each scheduling interval, yielding the average number of UEs per DL slot. This process is dynamic, adapting to changing network conditions and ensuring optimal performance and QoS for users.
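As a minimal sketch of the division described above (the function name is ours, and an even spread of scheduled UEs over the interval's DL slots is assumed):

```python
def average_ues_per_dl_slot(total_scheduled_ues: int, dl_slots_in_interval: int) -> float:
    """Average number of UEs per DL slot, assuming the scheduled UEs are spread
    evenly across the DL slots of one scheduling interval."""
    return total_scheduled_ues / dl_slots_in_interval

# e.g., 32 UEs scheduled over 8 DL slots in one scheduling interval
print(average_ues_per_dl_slot(32, 8))  # -> 4.0 UEs per DL slot on average
```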
[0069] In some embodiments, the allocated physical uplink resources
include at least one of PUCCH F0 resource and a PUCCH F2 resource. In some embodiments, the PUCCH F0 format carries at least two HARQ feedback transmission bits and at least one scheduling request (SR) bit. In some embodiments, the PUCCH F2 format carries at least two UCI bits.
[0070] In an embodiment, calculations may be proposed for the PUCCH F0
and the PUCCH F2 resource set sizes, where a maximum number of PUCCH F0 and PUCCH F2 resources required by one or more UEs in the network are calculated.

[0071] In an aspect, the maximum number of PUCCH F0 resources required
= (a number of downlink (DL) slots mapped onto 1 uplink (UL) slot) * (Number of UEs scheduled in 1 DL slot).
[0072] In an aspect, the maximum number of PUCCH F2 resources required
= FLOOR((Number of DL slots mapped onto 1 UL slot) * (Number of UEs scheduled in 1 DL slot) / 3).
[0073] In an aspect, in PUCCH F0: Each resource carries a maximum of 2-
bit HARQ and one SR bit, and in PUCCH F2: Each resource carries more than 2 Uplink Control Information (UCI) bits.
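A minimal Python sketch of the two sizing rules in paragraphs [0071] and [0072] above (the helper names are ours; the inputs are the DL-to-UL slot mapping and the UEs scheduled per DL slot determined earlier):

```python
import math

def pucch_f0_resources(dl_slots_per_ul_slot: int, ues_per_dl_slot: int) -> int:
    """Maximum PUCCH F0 resources = (DL slots mapped onto 1 UL slot) * (UEs scheduled in 1 DL slot)."""
    return dl_slots_per_ul_slot * ues_per_dl_slot

def pucch_f2_resources(dl_slots_per_ul_slot: int, ues_per_dl_slot: int) -> int:
    """Maximum PUCCH F2 resources = FLOOR((DL slots mapped onto 1 UL slot) * (UEs scheduled in 1 DL slot) / 3)."""
    return math.floor(dl_slots_per_ul_slot * ues_per_dl_slot / 3)

# With the 7D1S2U example above: 4 DL slots mapped onto one UL slot, 4 UEs per DL slot.
print(pucch_f0_resources(4, 4))  # -> 16 F0 resources, instead of a fixed set of 32
print(pucch_f2_resources(4, 4))  # -> 5 F2 resources, instead of a fixed set of 8
```

The example values illustrate the smaller, exact resource set sizes the disclosure aims to allocate in place of the fixed sets of 32 F0 or 8 F2 resources.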
[0074] Although FIG. 2 shows exemplary components of the system (108),
in other embodiments, the system (108) includes fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of
the system (108) may perform functions described as being performed by one or more other components of the system (108).
[0075] FIG. 3 illustrates an architecture diagram (300) representing
implementation of a Centralized Unit (CU) and a Distributed Unit (DU), in accordance with an embodiment of the disclosure.
[0076] As shown in FIG. 3, a Next Generation Radio Access Network (NG-
RAN) architecture (304) consists of a set of gNodeB (gNBs) (306, 308) connected to the 5G core network (5GC) (302) through a NG interface. The NG-RAN architecture (304) is based on a cloud-native architecture that leverages virtualization and containerization technologies. The NG-RAN architecture (304)
is designed to support network slicing, which enables network operators to create multiple virtual networks on a single physical network infrastructure. Network slicing allows network operators to provide customized network services to different types of users and applications. The gNBs (306, 308) are responsible for radio transmission and reception, as well as radio resource management. The gNBs

(306, 308) are designed to be highly flexible and scalable. They support different frequency bands, multiple antenna technologies, and different deployment scenarios. The gNBs (306, 308) support both centralized and distributed deployment scenarios. In a centralized deployment, the CU handles the radio resource management. In a distributed deployment, the DU handles the radio resource management. In an aspect, the gNB (308) may consist of a gNB-CU (310) and one or more gNB-DU(s) (312, 314). The gNB-CU (310) and the gNB-DU(s) (312, 314) are connected via a F1 interface. The gNBs (306, 308) are interconnected through a Xn-C interface. In an aspect, the NG interface, the Xn-C interface and the F1 interface are logical interfaces that enable the communication and coordination between different network functions and components to provide efficient and reliable connectivity services.
[0077] In an embodiment, the gNB-DU(s) (312, 314) has multiple layers
such as a Radio Resource Management (RRM) layer, General Packet Radio Service
(GPRS) Tunnelling Protocol User Plane (E-GTPU) layer, New Radio User Plane (NR-UP) layer, Radio Link Control (RLC) layer, Media Access Control (MAC) layer, and Physical (PHY) layer. The RRM layer may be considered the heart of the system because the DL and the UL bandwidth part creations, i.e., Physical Downlink Control Channel (PDCCH) dimensioning, PUCCH dimensioning,
Sounding Reference Signal (SRS) dimensioning, etc., are critical parts of the system. The DL data may be either signalling data or user data received from the gNB-CU (310). The Enhanced GPRS Tunneling Protocol User Plane (E-GTPU) layer may take care of forwarding user data to the RLC, and the F1 Application Protocol (F1AP) may handle signaling messages from the gNB-CU (310) and forward them to the RLC layer.
The RLC layer may be mainly responsible for sending Buffer Occupancy (BO) information to the MAC layer to get grants for each logical channel to transmit the RLC Service Data Unit (SDU) received from the upper layers. The MAC layer may provide a grant to the RLC layer based on a Transport Block (TB) size calculated from channel conditions, and it multiplexes different logical channel data
received from the RLC for the same UE, prepares a TB, and places it in slot

buffers to schedule in that slot so that the physical (PHY) layer may schedule over the air. Similarly, in the UL direction, the MAC layer does de-multiplexing and forwards data to the RLC layer corresponding to each logical channel. Further, the RLC layer may forward the same to upper layers.
[0078] FIG. 4 illustrates an example process flow diagram (400)
representing allocation of PUCCH resources to the UEs (404) in the network (106), in accordance with an embodiment of the disclosure.
[0079] In an embodiment, during a cell bring up for activating and
configuring the cell within the network (106), the RRM unit (212) performs the calculation of PUCCH resources for the SR, the CSI, the HARQ F0 resource set and the HARQ F2 resource set.
[0080] At step 408: The UE (404) may transmit the RACH indication
message (Msg1) that refers to the signaling provided by the UE (404) to a gNB (402) to initiate the random-access procedure.
[0081] At step 410: The gNB (402) may transmit a Random-Access
Response (RAR) message (Msg2) to the UE (404) in response to the Msg1.
[0082] At step 412: The UE (404) may transmit a RRC setup request (Msg3)
to the gNB (402). In an embodiment, the RRM unit (212) allocates the UE (404) a set of PUCCH F0 and PUCCH F2 resources based on the information made ready
during the cell bring up. The information includes a number of DL slots corresponding to an UL slot, and a number of UEs (104) scheduled for each DL slot. In an embodiment, calculations may be proposed for the PUCCH F0 and the PUCCH F2 resource set sizes, where a maximum number of PUCCH F0 and PUCCH F2 resources required by one or more UEs in the network are calculated.
In an aspect, the maximum number of PUCCH F0 resources required = (a number of downlink (DL) slots mapped onto 1 uplink (UL) slot) * (Number of UEs scheduled in 1 DL slot). In an aspect, the maximum number of PUCCH F2 resources required = FLOOR((Number of DL slots mapped onto 1 UL slot) * (Number of UEs scheduled in 1 DL slot) / 3). In an aspect, in PUCCH F0: Each resource carries a maximum of 2-bit HARQ and one SR bit, and in PUCCH F2: Each resource carries more than 2 UCI bits. The MAC unit is configured to allocate the number of calculated physical uplink resources to the one or more UEs (104) in the network (106) based on a number of parameters. In some embodiments, the number of parameters includes at least one or more of a channel bandwidth requirement of each UE, a traffic load on the network, a channel state information (CSI), and a number of hybrid automatic repeat request (HARQ) feedback transmission bits transmitted by each UE.
[0083] At step 414: The gNB (402) may transmit a RRC setup response
message to the UE (404) in response to the Msg3.
[0084] At step 416: The UE (404) may transmit a RRC setup complete
message and a Non-Access Stratum (NAS) message to the gNB (402). In an aspect, the NAS messages are the signaling messages that are exchanged over the NAS layer, which is responsible for handling functions such as mobility management, session management, and subscriber authentication.
[0085] At step 418: The gNB (402) may initiate the various initial UE
messages (such as signaling messages related to the initial connection setup and registration process) and may transmit a registration request to an Access and Mobility Management Function (AMF) (406). In an embodiment, the AMF (406) may perform the authentication and NAS security procedures for the UE (404).
[0086] At step 420: The gNB (402) may initiate a context setup request and
the registration accept request to the AMF (406). Upon receiving an initial attach or registration request from a UE (404), the gNB (402) creates a context setup request message. This message includes information about the UE (404), such as its identity (e.g., International Mobile Subscriber Identity (IMSI) or temporary identifier), capabilities, and other relevant parameters. The gNB (402) sends this context setup request message to the AMF (406), requesting the establishment of a context for the UE (404). After receiving the context setup request from the gNB

(402), the AMF (406) processes the request and verifies the UE’s identity and eligibility for network access. If the UE is authenticated and authorized successfully, the AMF generates a registration accept request message. This message acknowledges the context setup request and confirms the UE’s registration on the network (106). The registration accept request may include configuration parameters and session-related information for the UE (404).
[0087] At step 422: The gNB (402) may transmit a UE capability enquiry
request to the UE (404). The UE capability enquiry request is a signaling message aimed at obtaining information about the capabilities of the UE (404).
[0088] At step 424: The UE (404) may transmit the UE capability
information to the gNB (402).
[0089] At step 426: The gNB (402) may transmit an Access Stratum (AS)
security mode command to the UE to initiate an establishment of security parameters between the UE (404) and the network (106).
[0090] At step 428: The UE (404) may transmit the AS security mode
complete command to the gNB (402). In an embodiment, a dedicated CSI resource information is filled in a RRC reconfiguration request along with dedicated SR resource and PUCCH F0 and PUCCH F2 resources. The gNB (402) generates the RRC reconfiguration request message, specifying the updated configuration
parameters that need to be applied to the UE (404). These parameters could include changes to radio resources, mobility management settings, measurement configurations, or other network-related parameters.
[0091] At step 430: The gNB (402) may transmit the RRC reconfiguration
request to the UE (404) to modify the radio connection parameters or configurations of the UE (404) to optimize network performance and support various services and applications efficiently.

[0092] At step 432: The UE (404) may transmit the RRC reconfiguration
complete message to the gNB (402). The RRC reconfiguration complete message is an acknowledgment sent by the UE (404) to the gNB (402) after successfully processing the RRC reconfiguration request.
[0093] FIG. 5 illustrates an example computer system (500) in which or
with which the embodiments of the present disclosure may be implemented.
[0094] As shown in FIG. 5, the computer system (500) may include an
external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port(s) (560), and a
processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit
or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[0095] In an embodiment, the main memory (530) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).

[0096] In an embodiment, the bus (520) may communicatively couple the
processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0097] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[0098] FIG. 6 illustrates another exemplary flow diagram for a method
(600) for allocating physical uplink resources to the UEs (104) in a network (106), in accordance with an embodiment of the present disclosure.
[0099] At 602: The method (600) comprising determining, by a RRM unit
(212), a number of DL slots corresponding to an UL slot. In an aspect, the number of DL slots corresponding to the UL slots is determined by the network’s operation mode, particularly in Time Division Duplexing (TDD) scenarios. The frame structure in 5G consists of subframes, each containing UL and DL slots. These frames are organized to efficiently manage the exchange of data between base stations and user devices. Depending on the deployment and network requirements, the configuration of these subframes can vary, ranging from symmetric distributions of UL and DL slots to asymmetric setups where one direction is prioritized over the other. Further, within each frame, the duration of individual slots is predefined, typically on the order of milliseconds, ensuring precise timing

for data transmission. Factors influencing the allocation of UL and DL slots include the anticipated traffic patterns, the nature of services being provided (e.g., real-time applications requiring low latency), and the desired Quality of Service (QoS) levels. Furthermore, the network must account for overhead such as control signaling, synchronization signals, and guard intervals, which may consume additional slots and impact the overall UL/DL slot ratio.
[00100] At 604: The method (600) comprising calculating (604), by the RRM
unit (212), a number of UEs (104) scheduled for each DL slot. In an aspect, calculating the number of UEs (104) scheduled for each DL slot in the network (106) involves assessing system (108) capacity, resource allocation, and scheduling algorithms. Initially, the system’s maximum UE (104) capacity within a DL slot is determined, factoring in parameters like available bandwidth and modulation techniques. Further, the resources are allocated for DL transmissions based on network configuration and traffic needs. Utilizing scheduling algorithms, UEs are assigned to DL slots considering factors like UE priority and channel conditions. To ensure equitable resource utilization, scheduled UEs are evenly distributed across DL slots. The calculation entails dividing the total number of scheduled UEs by the number of DL slots available in each scheduling interval, yielding the average number of UEs per DL slot. This process is dynamic, adapting to changing network conditions and ensuring optimal performance and QoS for users.
[00101] At 606: The method (600) comprising calculating (606), by the RRM
unit (212), a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs (104).
[00102] At 608: The method (600) comprising allocating (608), by the MAC
unit (214), the number of calculated physical uplink resources to the one or more UEs (104) in the network (106) based on a number of parameters. In some embodiments, the number of parameters includes at least one or more of a channel bandwidth requirement of each UE, a traffic load on the network, a CSI, and a number of HARQ feedback transmission bits transmitted by each UE (104).

[00103] In some embodiments, the allocated physical uplink resources
include at least one of a physical uplink control channel (PUCCH) format 0 (F0) resource and a PUCCH format 2 (F2) resource. The PUCCH F0 format carries at least two HARQ feedback transmission bits and at least one scheduling request (SR) bit.
[00104] In some embodiments, the PUCCH F2 format carries at least two
uplink control information (UCI) bits.
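For illustration only, the format capacities above suggest a simple selection rule; the function below is our reading of those statements (and of the background example) rather than a normative algorithm of this disclosure.

```python
def select_pucch_format(harq_feedback_bits: int) -> str:
    """PUCCH F0 carries up to 2 HARQ feedback bits (plus an SR bit), so a UE that
    needs to transmit more than 2 HARQ feedback bits is served by a PUCCH F2
    resource, which carries more than 2 UCI bits."""
    return "F2" if harq_feedback_bits > 2 else "F0"

# e.g., a UE acknowledging PDSCH data from 4 DL slots mapped onto one UL slot needs 4 HARQ bits
print(select_pucch_format(1))  # -> 'F0'
print(select_pucch_format(4))  # -> 'F2'
```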
[00105] In an exemplary embodiment, the present invention discloses a UE
(104) communicatively coupled with a network (106). The coupling comprises steps of receiving, by the network (106), a connection request from the UE (104), sending, by the network, an acknowledgment of the connection request to the UE (104) and transmitting a plurality of signals in response to the connection request. A physical uplink resources allocation to the UE (104) in the network (106) is performed by the method (600) comprising determining (602), by the RRM unit (212), a number of DL slots corresponding to an UL slot. The method (600) comprising calculating (604), by the RRM unit (212), a number of UEs (104) scheduled for each DL slot. The method (600) comprising calculating (606), by the RRM unit (212), a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs (104). The method (600) comprising allocating (608), by the MAC unit (214), the number of calculated physical uplink resources to the UEs (104) in the network (106) based on a number of parameters.
[00106] The present disclosure relates to a system and method for optimizing
the allocation of Physical Uplink Control Channel (PUCCH) resources in a wireless communication network, particularly for scenarios with lower bandwidths where the number of PUCCH Physical Resource Blocks (PRBs) is limited. The system is designed to calculate and allocate the exact number of PUCCH Format 0 (F0) and PUCCH Format 2 (F2) resources needed based on the number of scheduled User Equipments (UEs) per slot and the number of Downlink (DL) slots with Hybrid

Automatic Repeat Request (HARQ) feedback mapped onto one Uplink (UL) slot where HARQ feedback is received.
[00107] The advantages of the disclosure include more efficient use of
PUCCH resources, especially in lower bandwidth scenarios, and the ability to serve a higher number of UEs without unnecessary allocation of PRBs, which can now be used for periodic PUCCH resources such as Scheduling Requests (SR) and Channel State Information (CSI).
[00108] The components and methods described herein are exemplary and
should not be construed as limiting the scope of the disclosure. The disclosure may be implemented with various modifications and adaptations that will be apparent to those skilled in the art, without departing from the spirit and scope of the claims that follow.
[00109] While the foregoing describes various embodiments of the
invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00110] The present disclosure facilitates PUCCH resource allocation during
RRC setup/RRC reconfiguration procedure.
[00111] The present disclosure facilitates accommodating and serving a
greater number of UEs in a limited bandwidth deployment scenario, by giving unnecessarily allocated PRBs to periodic PUCCH resources such as SR and CSI.
[00112] The present disclosure facilitates calculating and allocating PUCCH
Format 0 and PUCCH Format 2 resources based on a number of scheduled UEs per

slot and a number of DL slots with HARQ feedback mapped onto 1 UL slot where HARQ feedback is received.
[00113] The present disclosure facilitates allocating an exact number of
PUCCH Format 0/Format 2 resources needed by the system instead of allocating 8 (in case of Format 2) or 32 (in case of Format 0) PUCCH resources in the PUCCH resource set.

We claim:
1. A system (108) for allocating physical uplink resources to one or more user
equipments (UEs) (104) in a network (106), the system (108) comprising:
a processing unit;
a memory coupled to the processing unit, wherein the memory includes computer-implemented instructions to configure the processing unit to:
determine, by a radio resource management (RRM) unit (212), a number of downlink (DL) slots corresponding to an uplink (UL) slot;
calculate, by the RRM unit (212), a number of UEs (104) scheduled for each DL slot;
calculate, by the RRM unit (212), a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs (104); and
allocate, by a medium access control (MAC) unit (214), the number of calculated physical uplink resources to the one or more UEs (104) in the network (106) based on a number of parameters.
2. The system (108) as claimed in claim 1, wherein the number of parameters includes at least one or more of a channel bandwidth requirement of each UE (104), a traffic load on the network (106), a channel state information (CSI), and a number of hybrid automatic repeat request (HARQ) feedback transmission bits transmitted by each UE (104).
3. The system (108) as claimed in claim 1, wherein the allocated physical uplink resources include at least one of a physical uplink control channel (PUCCH) format 0 (F0) resource and a PUCCH format 2 (F2) resource.

4. The system (108) as claimed in claim 3, wherein the PUCCH F0 format carries at least two HARQ feedback transmission bits and at least one scheduling request (SR) bit.
5. The system (108) as claimed in claim 3, wherein the PUCCH F2 format carries at least two uplink control information (UCI) bits.
6. A method (600) for allocating physical uplink resources to one or more user equipment (UEs) (104) in a network (106), the method (600) comprising:
determining (602), by a radio resource management (RRM) unit (212), a number of downlink (DL) slots corresponding to an uplink (UL) slot;
calculating (604), by the RRM unit (212), a number of UEs (104) scheduled for each DL slot;
calculating (606), by the RRM unit (212), a number of physical uplink resources by processing the number of determined DL slots and the number of calculated UEs (104); and
allocating (608), by a medium access control (MAC) unit (214), the number of calculated physical uplink resources to the one or more UEs (104) in the network (106) based on a number of parameters.
7. The method (600) as claimed in claim 6, wherein the number of parameters includes at least one or more of a channel bandwidth requirement of each UE (104), a traffic load on the network (106), a channel state information (CSI), and a number of hybrid automatic repeat request (HARQ) feedback transmission bits transmitted by each UE (104).
8. The method (600) as claimed in claim 6, wherein the allocated physical uplink resources include at least one of a physical uplink control channel (PUCCH) format 0 (F0) resource and a PUCCH format 2 (F2) resource.

9. The method (600) as claimed in claim 8, wherein the PUCCH F0 format carries
at least two HARQ feedback transmission bits and at least one scheduling request
(SR) bit.
10. The method (600) as claimed in claim 8, wherein the PUCCH F2 format carries at least two uplink control information (UCI) bits.
11. A user equipment (UE) (104) communicatively coupled with a network (106), the coupling comprises steps of:
receiving, by the network (106), a connection request from the UE (104);
sending, by the network (106), an acknowledgment of the connection request to the UE (104); and
transmitting a plurality of signals in response to the connection request, wherein physical uplink resources allocation to the UE (104) in the network (106) is performed by a method (600) as claimed in claim 6.

Documents

Application Documents

# Name Date
1 202321049087-STATEMENT OF UNDERTAKING (FORM 3) [20-07-2023(online)].pdf 2023-07-20
2 202321049087-PROVISIONAL SPECIFICATION [20-07-2023(online)].pdf 2023-07-20
3 202321049087-FORM 1 [20-07-2023(online)].pdf 2023-07-20
4 202321049087-DRAWINGS [20-07-2023(online)].pdf 2023-07-20
5 202321049087-DECLARATION OF INVENTORSHIP (FORM 5) [20-07-2023(online)].pdf 2023-07-20
6 202321049087-FORM-26 [17-10-2023(online)].pdf 2023-10-17
7 202321049087-FORM-26 [10-04-2024(online)].pdf 2024-04-10
8 202321049087-FORM 13 [10-04-2024(online)].pdf 2024-04-10
9 202321049087-AMENDED DOCUMENTS [10-04-2024(online)].pdf 2024-04-10
10 202321049087-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321049087-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321049087-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321049087-CORRESPONDANCE-WIPO CERTIFICATE-11-06-2024.pdf 2024-06-11
14 202321049087-FORM-5 [16-07-2024(online)].pdf 2024-07-16
15 202321049087-DRAWING [16-07-2024(online)].pdf 2024-07-16
16 202321049087-CORRESPONDENCE-OTHERS [16-07-2024(online)].pdf 2024-07-16
17 202321049087-COMPLETE SPECIFICATION [16-07-2024(online)].pdf 2024-07-16
18 202321049087-ORIGINAL UR 6(1A) FORM 26-190724.pdf 2024-07-24
19 Abstract-1.jpg 2024-09-04
20 202321049087-FORM-9 [21-10-2024(online)].pdf 2024-10-21
21 202321049087-FORM 18A [22-10-2024(online)].pdf 2024-10-22
22 202321049087-FORM 3 [04-11-2024(online)].pdf 2024-11-04
23 202321049087-FER.pdf 2025-01-23
24 202321049087-FORM 3 [29-01-2025(online)].pdf 2025-01-29
25 202321049087-FORM 3 [29-01-2025(online)]-1.pdf 2025-01-29
26 202321049087-Proof of Right [13-02-2025(online)].pdf 2025-02-13
27 202321049087-ORIGINAL UR 6(1A) FORM 1-210225.pdf 2025-02-24
28 202321049087-FER_SER_REPLY [11-03-2025(online)].pdf 2025-03-11

Search Strategy

1 202321049087_SearchStrategyNew_E_SEARCHE_23-01-2025.pdf