Abstract: The present disclosure relates to a method and a system for allocation of one or more network resources in a telecommunication network. In one example, the method comprises transmitting, by a first processor [302], an event request to a second processor [404], and then receiving an acknowledgement of the event request from the second processor [404]. The acknowledgement comprises a resource availability information for available resources. The method further comprises selecting, by the first processor [302], at least one resource from the one or more available resources based on an analysis of the resource availability information, and then transmitting a resource reservation request for the selected at least one resource to the second processor [404]. The method further comprises receiving, by the first processor [302], a reservation response for the selected at least one resource from the second processor [404]. [FIG. 5]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHODS AND SYSTEMS FOR ALLOCATION OF ONE OR
MORE NETWORK RESOURCES IN A
TELECOMMUNICATION NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHODS AND SYSTEMS FOR ALLOCATION OF ONE OR MORE
NETWORK RESOURCES IN A TELECOMMUNICATION NETWORK
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to network resource
allocation. More particularly, embodiments of the present disclosure relate to
methods and systems for allocation of one or more network resources in a
telecommunication network.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first-generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. The third-generation
(3G) technology marked the introduction of high-speed internet access, mobile
video calling, and location-based services. The fourth-generation (4G) technology
revolutionized wireless communication with faster data speeds, better network
coverage, and improved security. Currently, the fifth-generation (5G) technology is
being deployed, promising even faster data speeds, low latency, and the ability to
connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable
of delivering more services to its users.
[0004] In recent years, the telecommunications industry has undergone a
significant transformation with the advent of Network Functions Virtualization
(NFV) and Software-Defined Networking (SDN). These technologies have
revolutionized the way network services are provisioned, managed, and
orchestrated, allowing for greater flexibility, agility, and efficiency in network
operations.
[0005] In this evolving landscape, an NFV SDN platform acts as a critical
infrastructure component, providing the foundation for deploying and managing
virtualized network functions (VNF), VNF components (VNFC), cloud network
functions (CNF), and cloud network function components (CNFC). However, as
the complexity and scale of virtualized networks have grown, new challenges have
emerged in resource management, network service orchestration, and ensuring the
dynamic allocation of resources to meet changing service demands.
[0006] In a virtualized network environment supporting VNF, VNFC, CNF, and
CNFC, it is essential to efficiently allocate resources to meet the instantiation,
scaling, and healing requirements of these network functions.
[0007] Further, over time, various solutions have been developed to address resource allocation and resource reservation tasks. However, there are certain challenges with the existing solutions. For example, the existing solutions
do not provide accurate information about the available resources. In addition, the
existing solutions do not enable resource allocation in advance.
[0008] Thus, there exists an imperative need in the art to provide a method and
system to address the challenges associated with resource management, which the
present disclosure aims to address.
SUMMARY
[0009] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0010] An aspect of the present disclosure may relate to a method for allocation of
one or more network resources in a telecommunication network. The method comprises transmitting, by a first processor, an event request for a resource
allocation to a second processor. The method further comprises receiving, by the
first processor, an acknowledgement of the event request, wherein the
acknowledgement comprises at least a resource availability information for one or
more available resources from the second processor. The method further comprises
selecting, by the first processor, at least one resource from the one or more available
resources based on an analysis of the resource availability information. The method
further comprises transmitting, by the first processor, a resource reservation request
for the selected at least one resource to the second processor. The method further
comprises receiving, by the first processor, a reservation response for the selected
at least one resource from the second processor.
[0011] In another exemplary aspect of the present disclosure, the method further
comprises selecting, by the first processor, a set of hosts or servers from an available
host information received in the acknowledgement.
[0012] In another exemplary aspect of the present disclosure, the event request
associated with resource requirements is transmitted during an instantiation of a
network function.
[0013] In another exemplary aspect of the present disclosure, the first processor
resides at a policy execution engine (PEEGN).
[0014] In another exemplary aspect of the present disclosure, the second processor
resides at a physical and virtual inventory manager (PVIM).
[0015] In another exemplary aspect of the present disclosure, the resource
availability information comprises information of the one or more available
resources for scaling and the instantiation of the network function.
[0016] In another exemplary aspect of the present disclosure, the reservation
response is at least one of a positive reservation response and a negative reservation
response. The positive reservation response is received by the first processor in an
event the selected at least one resource from the one or more available resources is
successfully reserved. Further, the negative reservation response is received by the
first processor in an event the selected at least one resource from the one or more
available resources is unsuccessfully reserved.
[0017] Another aspect of the present disclosure may relate to a system for
allocation of one or more network resources in the telecommunication network. The
system includes a first processor. The first processor is configured to transmit an
event request for a resource allocation to a second processor. The first processor is
further configured to receive an acknowledgement of the event request from the
second processor. The acknowledgement comprises at least a resource availability
information for one or more available resources. The first processor is further
configured to select at least one resource from the one or more available resources
based on an analysis of the resource availability information. The first processor is
further configured to transmit a resource reservation request for the selected at least
one resource to the second processor. The first processor is further configured to
receive a reservation response for the selected at least one resource from the second
processor.
[0018] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for allocation of one or
more network resources in the telecommunication network. The instructions
include executable code which, when executed by one or more units of a system,
causes a first processor of the system to transmit an event request for a resource
allocation to a second processor. Further, the instructions include executable code
which, when executed, causes the first processor to receive an acknowledgement of
the event request from the second processor. The acknowledgement comprises at
least a resource availability information for one or more available resources.
Further, the instructions include executable code which, when executed, causes the
first processor to select at least one resource from the one or more available
resources based on an analysis of the resource availability information. Further, the
instructions include executable code which, when executed, causes the first
processor to transmit a resource reservation request for the selected at least one
resource to the second processor. Further, the instructions include executable code
which, when executed, causes the first processor to receive a reservation response
for the selected at least one resource from the second processor.
OBJECTS OF THE DISCLOSURE
[0019] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0020] It is an object of the present disclosure to provide a system and a method for
allocation of one or more network resources in a telecommunication network.
[0021] It is an object of the present disclosure to provide a system and a method to
provide accurate resource information for instantiation or scaling purposes.
[0022] It is another object of the present disclosure to provide a system and a
method to reserve the available resources for instantiation or scaling purposes.
[0023] It is another object of the present disclosure to provide a solution which moves unreserved, allocated resources to a free pool of resources, thereby freeing up those resources from the pool of allocated resources.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0025] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture/platform, in accordance with
exemplary implementation of the present disclosure.
[0026] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0027] FIG. 3 illustrates an exemplary block diagram of a system for allocation of
one or more network resources in a telecommunication network, in accordance with
exemplary implementations of the present disclosure.
[0028] FIG. 4 illustrates an exemplary environment for implementing the present disclosure, in accordance with exemplary implementations of the present
disclosure.
[0029] FIG. 5 illustrates a method flow diagram for allocation of the one or more
network resources in the telecommunication network, in accordance with
exemplary implementations of the present disclosure.
[0030] FIG. 6 illustrates another exemplary method flow diagram for allocation of
the one or more network resources in the telecommunication network, in
accordance with exemplary implementations of the present disclosure.
[0031] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0032] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0033] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0034] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0035] It should be noted that the terms "first", "second", "primary", "secondary",
"target" and the like, herein do not denote any order, ranking, quantity, or
importance, but rather are used to distinguish one element from another.
[0036] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding
any additional or other elements.
[0038] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0039] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0040] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0041] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be
called.
[0042] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0043] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0044] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the abovementioned and other existing problems in this field of technology by providing a
method and a system of managing allocation of network resources in a
telecommunication network.
[0045] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture [100]. The MANO architecture [100] may be developed for managing telecom cloud infrastructure automatically, managing design or deployment design, and managing instantiation of network node(s)/service(s), etc. The MANO architecture [100]
deploys the network node(s) in the form of Virtual Network Function (VNF) and
Cloud-native/ Container Network Function (CNF). The system as provided by the
present disclosure may comprise one or more components of the MANO
architecture [100]. The MANO architecture [100] may be used to automatically
instantiate the VNFs into the corresponding environment of the present disclosure
so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
In an implementation, the system may comprise an NFV Platform Decision Analytics
(NPDA) [1096] component.
[0046] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer [102], a network function virtualization (NFV) and software defined
network (SDN) design function module [104], a platform foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112]. All the components may be assumed to be connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0047] The NFV and SDN design function module [104] comprises a network
manager [1042], a VNF catalogue [1044], a network services catalogue [1046], a
network slicing and service chaining manager [1048], a physical and virtual
resource manager [1050] and a CNF lifecycle manager [1052]. The network
manager [1042] may be responsible for deciding on which server of the
communication network the microservice may be instantiated. The network
manager [1042] may manage the overall flow of incoming/ outgoing requests
during interaction with the user. The network manager may have a VNF lifecycle
manager and the CNF lifecycle manager in case the network is utilising
the VNF and CNF. The network manager [1042] may be responsible for
determining which sequence is to be followed for executing the process, e.g., in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1 and P2, etc. The VNF catalogue [1044] stores the metadata of all the VNFs (also CNFs in some cases). The network
services catalogue [1046] stores the information of the services that need to be run.
The network slicing and service chaining manager [1048] manages the slicing
(an ordered and connected sequence of network service/ network functions (NFs))
that must be applied to a specific networked data packet. The physical and virtual
resource manager [1050] stores the logical and physical inventory of the VNFs.
Just like the network manager [1042], the CNF lifecycle manager [1052] may be
similarly used for the CNFs lifecycle management.
[0048] The platform foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a
command line interface (CLI) [1066], a central logging manager [1068], and an
event routing manager [1070]. The microservices elastic load balancer [1062]
may be used for maintaining the load balancing of the request for the services. The
identity & access manager [1064] may be used for log-in purposes. The
command line interface (CLI) [1066] may be used to provide commands to execute certain processes which require changes during the run time. The central
logging manager [1068] may be responsible for keeping the logs of every service.
These logs are generated by the MANO platform [100]. These logs may be used for
debugging purposes. The event routing manager [1070] may be responsible for
routing the events, i.e., the application programming interface (API) hits, to the
corresponding services.
[0049] The platform core services module [108] comprises an NFV infrastructure
monitoring manager [1082], an assure manager [1084], a performance manager
[1086], a policy execution engine [1088], a capacity monitoring manager [1090], a
release management (mgmt.) repository [1092], a configuration manager & golden
configuration template (GCT) [1094], an NFV platform decision analytics [1096],
a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF
backup & upgrade manager [1102], a micro service auditor [1104], and a platform
operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] may monitor the infrastructure part of
the NFs, e.g., any metric such as CPU utilization by the VNF. The assure
manager [1084] may be responsible for supervising the alarms the vendor may be
generating. The performance manager [1086] may be responsible for managing
the performance counters. The policy execution engine (PEEGN) [1088] may be responsible for managing all the policies. The capacity monitoring manager (CMM) [1090] may be responsible for sending the request to the PEEGN [1088].
The release management repository (RMR) [1092] may be responsible for
managing the releases and the images of all of the vendor’s network nodes. The
configuration manager & GCT [1094] manages the configuration and GCT of all
the vendors. The NFV platform decision analytics (NPDA) [1096] helps in
deciding the priority of using the network resources. It is further noted that the
policy execution engine (PEEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098]
may be a platform database for storing all the inventory (both physical and logical)
25 as well as the metadata of the VNFs and CNF. It may be noted that the platform
NoSQL DB [1098] may be just a narrower implementation of the present disclosure,
and any other kind of structure for the database may be implemented for the
platform database such as relational or non-relational database. The platform
schedulers and cron jobs [1100] may schedule tasks such as, but not limited to, triggering of an event, traversing the network graph, etc. The VNF backup & upgrade manager [1102] takes backup of the images and binaries of the VNFs and the CNFs
and produces those backups on demand in case of server failure. The microservice
auditor [1104] audits the microservices. For example, in a hypothetical case, instances
not being instantiated by the MANO architecture [100] may be using the network
resources. In such a case, the microservice auditor [1104] audits and informs the same so that resources can be released for services running in the MANO
architecture [100]. The audit assures that the services only run on the MANO
platform [100]. The platform operations, administration and maintenance
manager [1106] may be used for newer instances that are spawning.
[0050] The platform resource adapters and utilities module [112] further
comprises a platform external API adaptor and gateway [1122], a generic decoder
and indexer (XML, CSV, JSON) [1124], a docker service adaptor [1126], an API
adapter [1128], and a NFV gateway [1130]. The platform external API adaptor
and gateway [1122] may be responsible for handling the external services (to the
MANO platform [100]) that require the network resources. The generic decoder
and indexer (XML, CSV, JSON) [1124] may directly get the data of the vendor
system in the XML, CSV, JSON format. The docker service adaptor [1126] may
be the interface provided between the telecom cloud and the MANO architecture
[100] for communication. The API adapter [1128] may be used to connect with the
virtual machines (VMs). The NFV gateway [1130] may be responsible for providing the path to each service going to/incoming from the MANO architecture
[100].
[0051] The Docker Service Adapter (DSA) [1126] may be a microservices-based
component that may be designed to deploy and manage Container Network
Functions (CNFs) and their components (CNFCs) across Docker nodes. The DSA
[1126] may offer REST endpoints for key operations, such as uploading container
images to a Docker registry, terminating CNFC instances, and creating Docker
volumes and networks. The CNFs, which may be network functions packaged as containers, may consist of multiple CNFCs. The DSA [1126] facilitates the
deployment, configuration, and management of these components by interacting
with Docker's API, ensuring proper setup and scalability within a containerized
environment. The DSA provides a modular and flexible framework for handling
network functions in a virtualized network setup.
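By way of a purely illustrative, non-limiting sketch, a client could invoke such REST endpoints of the DSA [1126] as shown below. The base URL, endpoint paths, and parameter names are hypothetical assumptions introduced only for illustration and are not defined by the present disclosure.

```python
# Illustrative sketch only: the base URL, endpoint paths, and parameter names
# below are hypothetical and are not defined by the present disclosure.
import requests

DSA_BASE_URL = "http://dsa.example.internal:8080"  # hypothetical DSA address


def upload_container_image(image_path: str, registry: str) -> dict:
    """Upload a container image archive to a Docker registry via the DSA."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            f"{DSA_BASE_URL}/images",
            files={"image": image_file},
            data={"registry": registry},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()


def terminate_cnfc_instance(instance_id: str) -> None:
    """Terminate a running CNFC instance managed by the DSA."""
    response = requests.delete(f"{DSA_BASE_URL}/cnfc-instances/{instance_id}", timeout=30)
    response.raise_for_status()


def create_docker_volume(name: str) -> dict:
    """Create a Docker volume through the DSA."""
    response = requests.post(f"{DSA_BASE_URL}/volumes", json={"name": name}, timeout=30)
    response.raise_for_status()
    return response.json()
```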
[0052] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
managing allocation of network resources in the telecommunication network
utilising the system [300]. In another implementation, the computing device [200]
itself implements the method for managing allocation of network resources in the
telecommunication network using one or more units configured within the
computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0053] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with the bus [202] for processing information. The
hardware processor [204] may be, for example, a general-purpose microprocessor.
The computing device [200] may also include a main memory [206], such as a
random-access memory (RAM), or other dynamic storage device, coupled to the
bus [202] for storing information and instructions to be executed by the processor
[204]. The main memory [206] also may be used for storing temporary variables or
other intermediate information during execution of the instructions to be executed
by the processor [204]. Such instructions, when stored in non-transitory storage
media accessible to the processor [204], render the computing device [200] into a
special-purpose machine that is customized to perform the operations specified in
the instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0054] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0055] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0056] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-
way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0057] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0058] Referring to FIG. 3, an exemplary block diagram of a system [300] for
allocation of the one or more network resources in the telecommunication network,
is shown, in accordance with the exemplary implementations of the present
disclosure.
[0059] In an example, the system [300] may be implemented as or within a PEEGN.
Such PEEGN may be understood as the PEEGN [1088] as explained in conjunction
with FIG. 1. The explanation for the same has not been provided here again for the
sake of brevity.
[0060] The system [300] may include at least one first processor [302]. In cases
where the system [300] may be implemented as the PEEGN [1088], the first
processor [302] may reside at the policy execution engine (PEEGN) [1088].
[0061] Also, all of the components/ units of the system [300] are assumed to be
connected to each other unless otherwise indicated below. As shown in FIG. 3, all
units shown within the system [300] should also be assumed to be connected to
each other. Also, in FIG. 3 only a few units are shown, however, the system [300]
may comprise multiple such units or the system [300] may comprise any such
numbers of said units, as required to implement the features of the present
disclosure. Further, in an implementation, the system [300] may be present in a user
device/ user equipment to implement the features of the present disclosure. The
system [300] may be a part of the user device or may be independent of but in
communication with the user device (may also be referred to herein as a UE). In another
implementation, the system [300] may reside in a server or a network entity. In yet
another implementation, the system [300] may reside partly in the server/ network
entity and partly in the user device.
[0062] The system [300] is configured for allocation of the one or more network
resources in the telecommunication network, with the help of the interconnection
between the components/units of the system [300].
[0063] Referring to FIG. 4, an exemplary environment [400] for implementing the
present disclosure is shown, in accordance with exemplary implementations of the
present disclosure.
[0064] It may be noted that the FIG. 3 and FIG. 4 are explained in conjunction in
the foregoing description for explanation/ description of the present disclosure.
[0065] In another example, the system [300] may be configured for allocation of
the one or more network resources in the telecommunication network with the help
of interconnection between the components/units of the environment [400] and in
another example, the components/ units of the system architecture [100].
[0066] As depicted in FIG. 4, the exemplary environment [400] provides a physical
and virtual inventory manager (PVIM) [402] and a Policy Execution Engine
(PEEGN) Cluster [406]. The PVIM [402] and the PEEGN Cluster [406] may be
connected to each other with the PE_IM interface [410], which enables
communication by exchanging information among each entity.
[0067] The PVIM [402] may be considered to be similar to the Physical and Virtual
Resource Manager as already provided in the above system architecture [100], and
may perform similar functions.
[0068] The interface PE_IM [410] between the PEEGN [1088] and the PVIM [402]
may be an interface based on the HTTP protocol and other protocols which
may be well known in the art.
[0069] The PEEGN cluster [406] may refer to a cluster of multiple PEEGN
instances, and may also comprise a database [408].
20 depicted in FIG. 4, may be understood and referred to as the system [300]. The
database [408] may refer to a relational or non-relational database used for storing
and fetching various information.
[0070] In one example, the system [300] may be in communication with other
network entities/ components as well, which have not been depicted in FIG. 4.
network entities/components may be well understood to a person skilled in the art,
and have not been explained here for the sake of brevity.
[0071] Continuing further, the PVIM [402] may include a second processor [404].
In an example, the first processor [302] and the second processor [404] may be
directly in communication with each other via the PE_IM interface [410] for
implementation of the present disclosure.
[0072] In operation, for allocation of the one or more network resources in the
telecommunication network, the first processor [302] transmits an event request for
a resource allocation to a second processor [404]. The event request may refer to a
request for resource allocation which may be sent to the second processor [404]
based on an event during which the PEEGN [1088] may request the PVIM [402]
for allocation of the resources. As would be understood, resource allocation may
refer to allocation of one or more network resources such as compute resource,
memory, bandwidth, etc. The PVIM [402] manages the inventory and resources
present within the network and hence would be able to allocate the network
resources to the PEEGN [1088] when requested. In an example, the event request
may be transmitted using HTTP-based events.
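A minimal, non-limiting sketch of such an HTTP-based event request over the PE_IM interface [410] is given below. The PVIM address, endpoint path, and payload fields are hypothetical assumptions and are not prescribed by the present disclosure.

```python
# Minimal sketch of an HTTP-based event request over the PE_IM interface.
# The PVIM URL, endpoint path, and payload fields are hypothetical assumptions.
import requests

PVIM_URL = "http://pvim.example.internal:9090"  # hypothetical PVIM endpoint


def send_event_request(nf_id: str, vcpus: int, memory_gb: int, bandwidth_mbps: int) -> dict:
    """Transmit an event request for resource allocation and return the acknowledgement."""
    event_request = {
        "event": "RESOURCE_ALLOCATION",
        "network_function_id": nf_id,
        "requirements": {
            "vcpus": vcpus,
            "memory_gb": memory_gb,
            "bandwidth_mbps": bandwidth_mbps,
        },
    }
    response = requests.post(f"{PVIM_URL}/pe-im/events", json=event_request, timeout=30)
    response.raise_for_status()
    # The acknowledgement is expected to carry the resource availability information.
    return response.json()
```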
[0073] In an implementation of the present disclosure, the event request associated
with resource requirements may be received during an instantiation of a network
function. In an example, the instantiation of the network function may refer to an
event during which the network function has been initialized.
[0074] Then, after the transmission of the request, the first processor [302] receives
an acknowledgement of the event request. The acknowledgement comprises at least
a resource availability information for one or more available resources from the
second processor [404]. The resource availability information may comprise
information associated with availability and non-availability of the network
resource. In case the network resources are available, then the information
associated with the network resources that are available (i.e. the one or more
available resources) may be transmitted by the second processor [404] of the PVIM
[402] or the PVIM [402] itself. It may be noted that the information associated with
the one or more available resources may be sent along with the acknowledgement
or within the acknowledgement.
[0075] In other exemplary implementations of the present disclosure, the
resource availability information comprises information of the one or more
available resources for scaling and instantiation of the network function. The
information of the one or more available resources may then be used for scaling and
instantiation of the network function. As would be understood, scaling may refer to
altering the resources allocated to a virtualized network function, or a containerized
network function. Such scaling may result in increase or decrease of the network
resources allocated to a particular network function. The PEEGN [1088] may utilize
the requested network resources that would be received from the PVIM [402] for
increasing the network resources allocated to a particular network function.
[0076] In an exemplary implementation, the availability information may be stored
in a memory or a database [408], which may be based on relational or non-relational
database structures such as a NoSQL database or the platform NoSQL Database
[1098].
[0077] Continuing further, the first processor [302] selects at least one resource
from the one or more available resources based on an analysis of the resource
availability information. After the information associated with the one or more available resources is received, one or more resources are selected from it based on the
requirements of the PEEGN [1088] which may be based on the requirements of a
particular network function for which the resources are allocated.
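A non-limiting sketch of one possible selection analysis is given below, assuming a hypothetical availability payload (a list of hosts with free capacity) and a simple "most free vCPUs first" policy; neither the payload structure nor the policy is prescribed by the present disclosure.

```python
# One possible selection analysis over a hypothetical availability payload.
# The payload structure and the "most free vCPUs first" policy are assumptions.
from typing import Dict, List, Optional


def select_resource(resource_availability: List[Dict],
                    required_vcpus: int,
                    required_memory_gb: int) -> Optional[Dict]:
    """Pick one available resource (e.g. a host) that satisfies the requirements."""
    candidates = [
        resource for resource in resource_availability
        if resource.get("free_vcpus", 0) >= required_vcpus
        and resource.get("free_memory_gb", 0) >= required_memory_gb
    ]
    if not candidates:
        return None  # nothing suitable; the caller may retry or report a failure
    # Prefer the candidate with the most free vCPUs (a simple illustrative policy).
    return max(candidates, key=lambda resource: resource["free_vcpus"])


# Example usage with a made-up acknowledgement payload:
availability = [
    {"host_id": "host-1", "free_vcpus": 8, "free_memory_gb": 32},
    {"host_id": "host-2", "free_vcpus": 16, "free_memory_gb": 64},
]
selected = select_resource(availability, required_vcpus=4, required_memory_gb=16)
print(selected)  # picks "host-2" under the illustrative policy
```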
[0078] In an exemplary implementation of the present disclosure, the first processor
[302] may select a set of hosts or servers from an available host information
received in the acknowledgement. In such implementations of the present
disclosure, the acknowledgment may comprise information associated with the one
or more available resources, which may comprise network resources in the form of
available hosts or servers within the network. As would be understood, the hosts or
servers may refer to the one or more network resources having the capability of
performing compute actions, storing information in the memory, allocating bandwidth, etc. Further, the available host information in such a case may refer to the information
associated with the hosts or servers present within the acknowledgement in form of
the information associated with the one or more available resources.
[0079] After the at least one resource is selected, the first processor [302] transmits
a resource reservation request for the selected at least one resource to the second
processor [404]. The resource reservation request may refer to a request or a
command which may be sent by the PEEGN [1088] or the first processor [302] of the
PEEGN [1088] for reserving the at least one resource that is selected. The resource
reservation request may be based on the requirements of the network resources
which may be requested by the network function from the PEEGN [1088].
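A non-limiting sketch of transmitting such a resource reservation request is given below; the endpoint path and field names are hypothetical assumptions.

```python
# Sketch of a resource reservation request for the selected resource.
# The endpoint path and field names are hypothetical assumptions.
import requests

PVIM_URL = "http://pvim.example.internal:9090"  # hypothetical PVIM endpoint


def send_reservation_request(nf_id: str, host_id: str, vcpus: int, memory_gb: int) -> dict:
    """Ask the PVIM to reserve the selected resource and return its reservation response."""
    reservation_request = {
        "network_function_id": nf_id,
        "host_id": host_id,
        "reserve": {"vcpus": vcpus, "memory_gb": memory_gb},
    }
    response = requests.post(
        f"{PVIM_URL}/pe-im/reservations", json=reservation_request, timeout=30
    )
    response.raise_for_status()
    return response.json()
```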
[0080] Based on the resource reservation request, the first processor [302] receives
a reservation response for the selected at least one resource from the second
processor [404]. The reservation response may refer to a response which may be
received from the PVIM [402] after the selected at least one resource may be
reserved or not reserved based on the resource reservation request.
[0081] In an implementation of the present disclosure, the reservation response may
be at least one of a positive reservation response and a negative reservation
response. As would be understood, the positive reservation response may be a
response indicating a success of the reservation of the one or more available
resources, and the negative reservation response may be a response indicating a
failure of the reservation of the one or more available resources. The positive
reservation response may be received by the first processor [302] in an event the
selected at least one resource from the one or more available resources is
successfully reserved. The negative reservation response may be received by the
first processor [302] in an event the selected at least one resource from the one or
more available resources is unsuccessfully reserved. In such implementations, the
positive reservation response or the negative reservation response may be received
in the case of successful or unsuccessful reservation of the one or more available
resources.
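A non-limiting sketch of how the first processor [302] might branch on a positive versus a negative reservation response is given below; the "status" field and its values are assumptions, as the present disclosure only distinguishes positive and negative responses.

```python
# Sketch of handling the reservation response. The "status" field and its
# values are assumptions; the disclosure only distinguishes positive/negative.
def handle_reservation_response(reservation_response: dict) -> bool:
    """Return True on a positive reservation response, False on a negative one."""
    if reservation_response.get("status") == "RESERVED":
        # Positive response: the selected resource was successfully reserved,
        # so instantiation or scaling may proceed on it.
        return True
    # Negative response: reservation failed; the caller may select another
    # resource from the availability information or report the failure.
    return False
```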
[0082] Referring to FIG. 5, an exemplary method flow diagram [500] for allocation
of the one or more network resources in the telecommunication network, in
accordance with exemplary implementations of the present disclosure is shown. In
an implementation, the method [500] is performed by the system [300]. In another
implementation, the method [500] may be performed by the environment [400].
Further, in an implementation, the system [300] may be present in a server device
to implement the features of the present disclosure. Also, as shown in FIG. 5, the
method [500] starts at step [502].
[0083] In an exemplary aspect of the present disclosure, the first processor [302]
may reside at a policy execution engine (PEEGN) [1088]. In another exemplary
aspect of the present disclosure, the second processor [404] may reside at a physical
and virtual inventory manager (PVIM) [402]. In such implementations, the first
processor [302] of the PEEGN [1088] may be connected with the second processor
[404] of the PVIM [402]. The PVIM [402] may be considered to be similar to the
Physical and Virtual Resource Manager [1050] as already provided in the above
system architecture [100], and may perform similar functions. Further, in such
implementations, the PEEGN [1088] and the PVIM [402] may be connected to each
other via an interface which enables communication by exchanging information
among each entity. The interface between the PEEGN [1088] and the PVIM [402]
may be an interface based on the HTTP protocol and other protocols which may be well known in the art. Such an interface may be referred to as the PE_IM interface
[410].
[0084] For allocation of the one or more network resources in the
telecommunication network, the method [500] at step [504], comprises
transmitting, by the first processor [302], an event request for a resource allocation
to a second processor [404]. The event request may refer to a request for resource
allocation which may be sent to the second processor [404] based on an event during
which the PEEGN [1088] may request the PVIM [402] for allocation of the
resources. As would be understood, resource allocation may refer to allocation of
one or more network resources such as compute resource, memory, bandwidth, etc.
The PVIM [402] manages the inventory and resources present within the network
and hence would be able to allocate the network resources to the PEEGN [1088]
when requested. In an example, the event request may be transmitted using HTTP-based events.
[0085] In an implementation of the present disclosure, the event request associated
with resource requirements may be received during an instantiation of a network
function. In an example, the instantiation of the network function may refer to an
event during which the network function has been initialized.
[0086] Then, after the transmission of the request, at step [506], the method [500]
involves receiving, by the first processor [302], an acknowledgement of the event
request. The acknowledgement comprises at least a resource availability
information for one or more available resources from the second processor [404].
The resource availability information may comprise information associated with
availability and non-availability of the network resource. In case the network
resources are available, then the information associated with the network resources
that are available (i.e. the one or more available resources) may be transmitted by
the second processor [404] of the PVIM [402] or the PVIM [402] itself. It may be
noted that the information associated with the one or more available resources may
be sent along with the acknowledgement or within the acknowledgement.
[0087] In other exemplary implementations of the present disclosure, the
resource availability information comprises information of the one or more
available resources for scaling and instantiation of the network function. The information of the one or more available resources may then be used for scaling and
instantiation of the network function. As would be understood, scaling may refer to
altering the resources allocated to a virtualized network function, or a containerized
network function. Such scaling may result in increase or decrease of the network
resources allocated to a particular network function. The PEEGN [1088] may utilize
the requested network resources that would be received from the PVIM [402] for
increasing the network resources allocated to a particular network function.
[0088] In an exemplary implementation, the availability information may be stored
in a memory or a database, which may be based on relational or non-relational
database structures such as a NoSQL database or the platform NoSQL Database
[1098].
[0089] Continuing further, at step [508], the method [500] leads to selecting, by the
first processor [302], at least one resource from the one or more available resources
based on an analysis of the resource availability information. After the information associated with the one or more available resources is received, one or more resources are selected from it based on the requirements of the PEEGN [1088] which may
be based on the requirements of a particular network function for which the
resources are allocated.
[0090] In an exemplary implementation of the present disclosure, the first processor
[302] may select a set of hosts or servers from an available host information
received in the acknowledgement. In such implementations of the present
disclosure, the acknowledgment may comprise information associated with the one
or more available resources, which may comprise network resources in the form of
available hosts or servers within the network. As would be understood, the hosts or
servers may refer to the one or more network resources having the capability of
performing compute actions, storing information in the memory, allocating bandwidth, etc. Further, the available host information in such a case may refer to the information
associated with the hosts or servers present within the acknowledgement in form of
the information associated with the one or more available resources.
[0091] After the at least one resource is selected, then at step [510], the method [500] involves transmitting, by the first processor [302], a resource reservation request
for the selected at least one resource to the second processor [404]. The resource
reservation request may refer to a request or a command which may be sent by the
PEEGN [1088] or the first processor [302] of the PEEGN [1088] for reserving the at least one resource that is selected. The resource reservation request may be based on the requirements of the network resources which may be requested by the network
function from the PEEGN [1088].
[0092] Based on the resource reservation request, then at step [512], the method
[500] involves receiving, by the first processor [302], a reservation response for the
selected at least one resource from the second processor [404]. The reservation
response may refer to a response which may be received from the PVIM [402] after
the selected at least one resource may be reserved or not reserved based on the
resource reservation request.
[0093] In an implementation of the present disclosure, the reservation response may
be at least one of a positive reservation response and a negative reservation
response. As would be understood, the positive reservation response may be a
response indicating a success of the reservation of the one or more available
resources, and the negative reservation response may be a response indicating a
failure of the reservation of the one or more available resources. The positive
reservation response may be received by the first processor [302] in an event the
selected at least one resource from the one or more available resources is
successfully reserved. The negative reservation response may be received by the
first processor [302] in an event the selected at least one resource from the one or
more available resources is unsuccessfully reserved. In such implementations, the
positive reservation response or the negative reservation response may be received
in the case of successful or unsuccessful reservation of the one or more available
resources.
[0094] Thereafter, at step [514], the method [500] is terminated.
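Taken together, steps [504] to [512] may be sketched as a single end-to-end flow, as below. The sketch reuses the hypothetical helper functions introduced in the earlier sketches and is illustrative only.

```python
# End-to-end sketch of steps [504]-[512], reusing the hypothetical helpers
# sketched above (send_event_request, select_resource, send_reservation_request,
# handle_reservation_response). All names and payload fields are assumptions.
def allocate_network_resources(nf_id: str, vcpus: int, memory_gb: int, bandwidth_mbps: int) -> bool:
    # Step [504]: transmit the event request; step [506]: receive the acknowledgement.
    acknowledgement = send_event_request(nf_id, vcpus, memory_gb, bandwidth_mbps)
    # Step [508]: select at least one resource from the availability information.
    selected = select_resource(acknowledgement.get("available_resources", []),
                               required_vcpus=vcpus, required_memory_gb=memory_gb)
    if selected is None:
        return False  # no suitable resource reported as available
    # Step [510]: transmit the reservation request for the selected resource.
    reservation_response = send_reservation_request(nf_id, selected["host_id"], vcpus, memory_gb)
    # Step [512]: receive and interpret the reservation response.
    return handle_reservation_response(reservation_response)
```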
[0095] Referring to FIG. 6, an exemplary method flow diagram [600] for allocation
of the one or more network resources in the telecommunication network, in
accordance with exemplary implementations of the present disclosure is shown. In
an implementation, the method [600] is performed by the system [300]. In another implementation, the method [600] may be performed by the environment [400].
Further, in an implementation, the system [300] may be present in a server device
to implement the features of the present disclosure. Also, as shown in FIG. 6, the
method [600] starts at step [602].
[0096] For allocation of the one or more network resources in the
telecommunication network, the method [600] at step [604] involves receiving a
request for resource allocation from any network function or a microservice
associated with network functions within the telecommunication network.
[0097] Then at step [606], the method [600] involves handling events based on the
event request. The event handling may be that multiple requests for resource allocation may be received which need to be handled. Accordingly, event handling handles all the requests and proceeds with the next steps based on the
event handling. In an example, at step [608], the requests for resource allocation
may be stored in the database for storing the resource allocation request, if such
requests are not processed yet. In another example, the event handling may proceed
to the next step [610] after handling the multiple requests.
[0098] At step [610], the PEEGN [1088] may send the request to PVIM [402] for
resource allocation that may be transmitted in the form of the event request, which may
be received during an instantiation of a network function or microservice.
[0099] Then, after the transmission of the request, at step [612], the method [600]
involves receiving the acknowledgement of the event request at the PEEGN [1088].
The acknowledgement may comprise the resource availability information for the
one or more available resources. In case the network resources are available, then
the information associated with the network resources that are available (i.e. the one
or more available resources) may be transmitted by the second processor [404] of
the PVIM [402] or the PVIM [402] itself. It may be noted that the information
associated with the one or more available resources may be sent along with the
acknowledgement or within the acknowledgement.
[0100] Continuing further, at step [614], the method [600] leads to selecting at least
one resource from the one or more available resources based on an analysis of the
resource availability information. After the information associated with the one or more available resources is received, one or more resources are selected from it based on the
requirements of the PEEGN [1088] which may be based on the requirements of a
particular network function or the microservice from which the request for resource
allocation was received. It may be noted that at step [614], the event for selecting
20 the at least one resource may be handled and then accordingly after the selection of
the one or more resources, the event may accordingly move to the next step.
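As a non-limiting illustration of such an analysis, the Python sketch below filters the received availability information against the requirements of the requesting network function and picks the smallest suitable hosts; the field names (cpu, memory_gb, host) and the selection policy are assumptions made for the example only.

    # Illustrative sketch of the selection at step [614]: analyse the resource
    # availability information against the requester's requirements.
    def select_resources(available: list, requirement: dict, count: int = 1) -> list:
        """Pick up to `count` available resources that satisfy the requirement."""
        suitable = [
            res for res in available
            if res["cpu"] >= requirement["cpu"]
            and res["memory_gb"] >= requirement["memory_gb"]
        ]
        # Prefer the least over-provisioned hosts so larger hosts stay free.
        suitable.sort(key=lambda res: (res["cpu"], res["memory_gb"]))
        return suitable[:count]

    availability_info = [
        {"host": "host-1", "cpu": 8, "memory_gb": 32},
        {"host": "host-2", "cpu": 4, "memory_gb": 16},
        {"host": "host-3", "cpu": 2, "memory_gb": 4},
    ]
    print(select_resources(availability_info, {"cpu": 4, "memory_gb": 8}))
    # -> [{'host': 'host-2', 'cpu': 4, 'memory_gb': 16}]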
[0101] At step [616], the method [600] may lead to forming a loop by moving to step [606] until all of the requests for resource allocation received by the PEEGN [1088] are handled. This ensures that all of the events and requests for resource allocation have been served. It may be noted that, in an example, the PEEGN [1088] may transmit a resource reservation request for the selected at least one resource.
[0102] At step [618], the method [600] may involve storing the availability information in a memory or a database [408], which may be based on relational or non-relational database structures such as a NoSQL database or the platform NoSQL Database [1098].
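For illustration only, the following Python sketch mimics storing the availability information as documents in a non-relational (NoSQL-style) store; the in-memory dictionary merely stands in for the database [408] or the platform NoSQL Database [1098], and the request_id key and field names are assumptions.

    # Illustrative sketch of step [618]: persist availability information as a
    # JSON document in a document-oriented (NoSQL-style) store, mocked here by
    # an in-memory dictionary keyed by request id.
    import json
    import time

    availability_store = {}   # stand-in for a document collection

    def store_availability(request_id: str, availability_info: list) -> None:
        """Store the availability information as a JSON document with a timestamp."""
        document = {
            "request_id": request_id,
            "stored_at": time.time(),
            "available_resources": availability_info,
        }
        availability_store[request_id] = json.dumps(document)

    store_availability("req-001", [{"host": "host-1", "cpu": 8, "memory_gb": 32}])
    print(availability_store["req-001"])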
[0103] Based on the resource reservation request, at step [620], the method [600] involves receiving a reservation response for the selected at least one resource from the PVIM [402], which may then be sent to the respective microservice or network function by the PEEGN [1088]. The reservation response may be at least one of a positive reservation response and a negative reservation response for allocation of the network resources. In an implementation, the acknowledgement may be transmitted to the network function or the microservice which requested the resource allocation, and such acknowledgement may comprise the resource availability information of the one or more available resources.
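Tying the above steps together, the following Python sketch walks through a single request: sending the event request, reading the acknowledgement, selecting a resource, and obtaining the reservation response. The FakePVIM class and every message field are assumptions introduced solely to make the sketch self-contained; they do not describe the actual PVIM [402] interface.

    # Illustrative end-to-end sketch of steps [610]-[620] from the PEEGN side.
    class FakePVIM:
        """Toy stand-in for the PVIM [402], used only to make the sketch runnable."""
        def __init__(self):
            self.available = {"host-1": True, "host-2": True}

        def acknowledge(self, event_request: dict) -> dict:
            # Steps [610]-[612]: acknowledge the event request with availability info.
            free = [h for h, is_free in self.available.items() if is_free]
            return {"ack": True, "available_resources": free}

        def reserve(self, resource_id: str) -> dict:
            # Steps [616]-[620]: answer the reservation request.
            if self.available.get(resource_id):
                self.available[resource_id] = False
                return {"status": "POSITIVE", "resource_id": resource_id}
            return {"status": "NEGATIVE", "resource_id": resource_id}

    def allocate(pvim: FakePVIM, event_request: dict) -> dict:
        ack = pvim.acknowledge(event_request)
        if not ack["available_resources"]:
            return {"status": "NEGATIVE", "resource_id": None}
        selected = ack["available_resources"][0]   # step [614], trivial selection
        return pvim.reserve(selected)

    print(allocate(FakePVIM(), {"nf": "UPF", "cpu": 8}))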
[0104] Thereafter, at step [622], the method [600] may be stopped and terminated.
[0105] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for allocation of one or more network resources
in the telecommunication network. The instructions include executable code which,
when executed by one or more units of a system [300], causes a first processor [302]
of the system [300] to transmit an event request for a resource allocation to a second
processor [404]. Further, the instructions include executable code which, when
executed, causes the first processor [302] to receive an acknowledgement of the
event request from the second processor [404]. The acknowledgement comprises at
least a resource availability information for one or more available resources.
Further, the instructions include executable code which, when executed, causes the
first processor [302] to select at least one resource from the one or more available
resources based on an analysis of the resource availability information. Further, the
instructions include executable code which, when executed, causes the first
processor [302] to transmit a resource reservation request for the selected at least
one resource to the second processor [404]. Further, the instructions include executable code which, when executed, causes the first processor [302] to receive
a reservation response for the selected at least one resource from the second
processor [404].
[0106] As is evident from the above, the present disclosure provides a technically advanced solution for allocation of the one or more network resources in the telecommunication network. The present solution provides an asynchronous, event-based implementation to utilize the interface efficiently. In addition, the present disclosure provides fault tolerance for any event failure. The interface provided by the present disclosure works in a high-availability mode, and if one inventory instance goes down during request processing, the next available instance takes over the request.
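As a hedged illustration of this high-availability behaviour, the Python sketch below retries a request against the next inventory instance when the current one fails mid-processing; the instance names, the send_to_instance callable, and the use of ConnectionError are assumptions for the example and are not prescribed by the disclosure.

    # Illustrative sketch of failover across inventory instances: if one
    # instance goes down during request processing, the next available
    # instance takes over the request.
    def send_with_failover(request: dict, instances: list, send_to_instance) -> dict:
        """Try each inventory instance in turn until one handles the request."""
        last_error = None
        for instance in instances:
            try:
                return send_to_instance(instance, request)
            except ConnectionError as err:   # instance went down mid-processing
                last_error = err             # fall through to the next instance
        raise RuntimeError("No inventory instance could process the request") from last_error

    # Example usage: the first instance is "down", the second one succeeds.
    def fake_send(instance: str, request: dict) -> dict:
        if instance == "pvim-1":
            raise ConnectionError(f"{instance} unreachable")
        return {"handled_by": instance, "request": request}

    print(send_with_failover({"cpu": 2}, ["pvim-1", "pvim-2"], fake_send))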
[0107] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many other implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be regarded as illustrative and non-limiting.
[0108] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We Claim:
1. A method for allocation of one or more network resources in a
telecommunication network, the method comprising:
- transmitting, by a first processor [302], an event request for a resource
allocation to a second processor [404];
- receiving, by the first processor [302], an acknowledgement of the event
request, wherein the acknowledgement comprises at least a resource
availability information for one or more available resources from the second
processor [404];
- selecting, by the first processor [302], at least one resource from the one or
more available resources based on an analysis of the resource availability
information;
- transmitting, by the first processor [302], a resource reservation request for
the selected at least one resource to the second processor [404]; and
- receiving, by the first processor [302], a reservation response for the
selected at least one resource from the second processor [404].
2. The method as claimed in claim 1, further comprising: selecting, by the first
processor [302], a set of hosts or servers from an available host information received
in the acknowledgement.
3. The method as claimed in claim 1, wherein the event request associated with
resource requirements is transmitted during an instantiation of a network function.
4. The method as claimed in claim 1, wherein the first processor [302] resides
at a policy execution engine (PEEGN) [1088].
5. The method as claimed in claim 1, wherein the second processor [404]
resides at a Physical and Virtual Inventory Management (PVIM) [402].
6. The method as claimed in claim 1, wherein the resource availability
information comprises information of the one or more available resources for
scaling and the instantiation of the network function.
7. The method as claimed in claim 1, wherein the reservation response is at
least one of a positive reservation response and a negative reservation response,
wherein the positive reservation response is received by the first processor
[302] in an event the selected at least one resource from the one or more available
resources is successfully reserved; and
wherein the negative reservation response is received by the first processor
[302] in an event the selected at least one resource from the one or more available
resources is unsuccessfully reserved.
8. A system [300] for allocation of one or more network resources in a
telecommunication network, the system [300] comprising:
- a first processor [302] configured to:
o transmit an event request for a resource allocation to a second
processor [404];
o receive an acknowledgement of the event request, wherein the
acknowledgement comprises at least a resource availability
information for one or more available resources from the second
processor [404];
o select at least one resource from the one or more available resources
based on an analysis of the resource availability information;
o transmit a resource reservation request for the selected at least one
resource to the second processor [404]; and
o receive a reservation response for the selected at least one resource
from the second processor [404].
9. The system [300] as claimed in claim 8, wherein the first processor [302] is
further configured to: select a set of hosts or servers from an available host
information received in the acknowledgement.
10. The system [300] as claimed in claim 8, wherein the event request
associated with resource requirements is transmitted during an instantiation of a
network function.
11. The system [300] as claimed in claim 8, wherein the first processor [302]
resides at a policy execution engine (PEEGN) [1088].
12. The system [300] as claimed in claim 8, wherein the second processor [404]
resides at a Physical and Virtual Inventory Management (PVIM) [402].
13. The system [300] as claimed in claim 8, wherein the resource availability
information comprises information of the one or more available resources for
scaling and instantiation of the network function.
14. The system [300] as claimed in claim 8, wherein the reservation response is
at least one of a positive reservation response and a negative reservation response,
wherein the positive reservation response is received by the first processor
[302] in an event the selected at least one resource from the one or more available
resources is successfully reserved; and
wherein the negative reservation response is received by the first processor
[302] in an event the selected at least one resource from the one or more available
resources is unsuccessfully reserved.