Abstract: The present disclosure relates to a method and a system for receiving a set of target configuration parameters. The method includes receiving, by a transceiver unit [402] via a command line interface (CLI), a trigger comprising one or more commands. Further, the method includes loading, by a processing unit [404] at a storage unit [408], the one or more commands. The method further includes executing, by the processing unit [404] at one or more platform scheduler (PS) microservice instances, the one or more commands. Furthermore, the method includes receiving, by the transceiver unit [402] via the CLI, a set of target configuration parameters based on an execution of the one or more commands. FIG. 5
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR RECEIVING A SET OF
TARGET CONFIGURATION PARAMETERS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR RECEIVING A SET OF TARGET
CONFIGURATION PARAMETERS
FIELD OF DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of wireless communication. More particularly, the present disclosure relates to a method and a system for receiving a set of target configuration parameters.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
15 present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] In modern distributed computing environments, microservices have emerged as a popular architectural paradigm due to their modularity and scalability. Among the various microservices, the Platform Scheduler (PS) microservice plays a crucial role in managing and coordinating job and task executions across the system. However, the interaction between multiple instances of the PS microservice (may also be referred to as a PS microservice instance or a PS instance) may introduce significant latency in the network, particularly when the scheduler needs to request information or resources from other microservices.
[0004] As systems scale and grow, the demands on the Platform Scheduler increase
correspondingly. The scheduler is often required to handle a growing volume of
30 requests for task coordination, which can exacerbate issues related to network
latency and overall system performance. In particular, service outages or failures
within the scheduling process can disrupt communication and lead to inefficiencies
or delays in job execution.
[0005] Current solutions to these problems may address individual aspects of the
5 scheduling and coordination process but often fail to comprehensively mitigate the
issues associated with increased latency and service disruptions. Consequently,
there remains a need for more effective methods and systems to enhance the
performance and reliability of the Platform Scheduler, particularly in large-scale,
distributed environments where the volume of tasks and interactions continues to
10 expand.
[0006] Hence, in view of these and other existing limitations, there arises an
imperative need to provide an efficient solution to overcome the above-mentioned
and other limitations.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
20 This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0008] An aspect of the present disclosure may relate to a method for receiving a
set of target configuration parameters. The method comprises receiving, by a
25 transceiver unit via a command line interface (CLI), a trigger comprising one or
more commands. Further, the method comprises loading, by a processing unit at a
storage unit, the one or more commands. The method further comprises executing,
by the processing unit at one or more platform scheduler (PS) microservice
instances, the one or more commands. Furthermore, the method comprises
receiving, by the transceiver unit via the CLI, a set of target configuration parameters based on an execution of the one or more commands.
[0009] In an exemplary aspect of the present disclosure, the one or more commands
comprises at least one configuration parameter and one or more values associated
with the configuration parameter.
[0010] In an exemplary aspect of the present disclosure, the one or more commands
are executed by the processing unit, at the one or more PS instances, via the
Command Line Interface (CLI).
10 [0011] In an exemplary aspect of the present disclosure, the one or more PS
instances are used to create and schedule jobs on behalf of one or more
microservices, wherein the one or more microservices comprises at least one of:
command execution management and displaying configuration parameters.
15 [0012] In an exemplary aspect of the present disclosure, the one or more commands
are executed via a respective PS microservice instance from the one or more PS
microservice instances.
[0013] In an exemplary aspect of the present disclosure, the method further
20 comprises receiving, by the transceiver unit, a status associated with the one or more
PS microservice instances, wherein the status is at least one of a running instance
and a down instance.
[0014] In an exemplary aspect of the present disclosure, the method further
25 comprises displaying, at a communication unit, the set of target configuration
parameters through the command line interface (CLI), based on the execution of
the one or more commands.
[0015] Another aspect of the present disclosure relates to a system for receiving a
30 set of target configuration parameters. The system comprises a transceiver unit
configured to receive, via a Command Line Interface (CLI), a trigger comprising
one or more commands. Further, the system comprises a processing unit connected to at least the transceiver unit, wherein the processing unit is configured to load, at a storage
unit, the one or more commands. The processing unit is further configured to
execute, at one or more platform scheduler (PS) microservice instances, the one or
more commands. Furthermore, the transceiver unit is configured to receive, via the
CLI, a set of target configuration parameters based on an execution of the one or
5 more commands.
[0016] Another aspect of the present disclosure may relate to a user equipment
(UE) for receiving a set of target configuration parameters. The UE comprises a
processor and a memory, coupled to the processor, to store instructions for the
10 processor for receiving a set of target configuration parameters. The processor may
receive the set of target configuration parameters based on receiving, by a
transceiver unit via a command line interface (CLI), a trigger comprising one or
more commands. Further, the processor may receive the set of target configuration
parameters based on loading, by a processing unit at a storage unit, the one or more
15 commands. The processor may further receive the set of target configuration
parameters based on executing, by the processing unit at one or more platform
scheduler (PS) microservice instances, the one or more commands. Furthermore, the
processor may receive the set of target configuration parameters based on receiving,
by the transceiver unit via the CLI, a set of target configuration parameters based
on an execution of the one or more commands.
[0017] Yet another aspect of the present disclosure may relate to a non-transitory
computer-readable storage medium storing instructions for receiving a set of target configuration parameters, the storage medium comprising executable code which, when
25 executed by one or more units of a system, causes a transceiver unit to receive, via
a Command Line Interface (CLI), a trigger comprising one or more commands.
Further, the executable code when executed causes a processing unit to load, at a
storage unit, the one or more commands. Further, the executable code when
executed causes the processing unit to execute, at one or more platform scheduler
30 (PS) microservice instances, the one or more commands. Furthermore, the
executable code when executed causes the transceiver unit to receive, via the CLI,
a set of target configuration parameters based on an execution of the one or more
commands.
OBJECTS OF THE DISCLOSURE
[0018] Some of the objects of the present disclosure which at least one embodiment
disclosed herein satisfies are listed herein below.
[0019] It is an object of the present disclosure to provide a system and method for
10 receiving a set of target configuration parameters.
[0020] It is another object of the present disclosure to provide a solution to display
configuration parameters of the microservice.
[0021] It is another object of the present disclosure to provide a solution to set the value of any configuration parameter mentioned in the commands Excel at runtime.
[0022] It is another object of the present disclosure to provide a solution to execute
the microservice commands either at a specific instance or all instances of the
20 microservice.
[0023] It is another object of the present disclosure to provide a solution to display
the status of all the instances of the Microservice.
[0024] It is another object of the present disclosure to provide a solution to show information of a running instance as well as a down instance along with its down time.
[0025] It is another object of the present disclosure to provide a solution to give information of the down instances of the microservice along with the result of the commands.
[0026] It is another object of the present disclosure to provide a solution to display
Fault, Configuration, Accounting, Performance and Security (FCAPS) of the
Microservices.
[0027] It is another object of the present disclosure to provide a solution to clear the values of the FCAPS.
[0028] It is yet another object of the present disclosure to provide a solution to
execute a specific task whenever a user wants.
BRIEF DESCRIPTION OF DRAWINGS
[0029] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
15 and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
20 according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
25 [0030] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture, in accordance with
exemplary embodiments of the present disclosure.
[0031] FIG. 2 illustrates an exemplary block diagram of a system architecture for
30 scheduling the task, in accordance with exemplary embodiments of the present
disclosure.
[0032] FIG. 3 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented, in accordance
with exemplary implementation of the present disclosure.
[0033] FIG. 4 illustrates an exemplary block diagram of a system for receiving a
set of target configuration parameters, in accordance with exemplary
implementation of the present disclosure.
10 [0034] FIG. 5 illustrates an exemplary signalling flow diagram for receiving a set
of target configuration parameters, in accordance with exemplary implementation
of the present disclosure.
[0035] FIG. 6 illustrates an exemplary method flow diagram for receiving a set of
15 target configuration parameters, in accordance with exemplary implementation of
the present disclosure.
[0036] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0037] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
25 embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
30 problems discussed above.
[0038] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
5 It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0039] Specific details are given in the following description to provide a thorough
10 understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0040] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
20 concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0041] The word “exemplary” and/or “demonstrative” is used herein to mean
25 serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
30 known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
5 [0042] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
10 Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
15 processing unit is a hardware processor.
[0043] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
20 communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
25 of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0044] As used herein, “storage unit” or “memory unit” refers to a machine or
30 computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
5 functions.
[0045] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also be referred to as a set of rules or protocols that define
10 communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0046] All modules, units, components used herein, unless explicitly excluded
15 herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
20 circuits (FPGA), any other type of integrated circuits, etc.
[0047] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
25 and/or connected with the system.
[0048] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for receiving a set of target configuration parameters. More particularly, the present disclosure provides a solution to display configuration parameters of the microservice. Further, the present disclosure provides a solution to set the value of any configuration parameter mentioned in the commands Excel at runtime. The present disclosure further provides a solution to execute the microservice commands either at a specific instance or at all instances of the microservice. Further, the present disclosure provides a solution to display the status of all the instances of the microservice. Further, the present disclosure provides a solution to show information of a running instance as well as a down instance along with its down time. Furthermore, the present disclosure provides a solution to give information of the down instances of the microservice along with the result of the commands. Moreover, the present disclosure provides a solution to execute a specific task whenever a user wants.
[0049] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0050] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with an
exemplary implementation of the present disclosure. The MANO architecture [100]
is developed for managing telecom cloud infrastructure automatically, managing
20 design or deployment design, managing instantiation of a network node(s) etc. The
MANO architecture [100] deploys the network node(s) in the form of Virtual
Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
The system may comprise one or more components of the MANO architecture. The
MANO architecture [100] is used to auto-instantiate the VNFs into the
25 corresponding environment of the present disclosure so that it could help in
onboarding other vendor(s) CNFs and VNFs to the platform. In an implementation,
the system comprises an NFV Platform Decision Analytics (NPDA) [212]
component.
30 [0051] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer, a network function virtualization (NFV) and software defined
network (SDN) design function module [104], a platform foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
5 implementing features of the present disclosure.
[0052] The NFV and SDN design function module [104] further comprises a VNF
lifecycle manager (compute) [1042]; a VNF catalogue [1044]; a network services
catalogue [1046]; a network slicing and service chaining manager [1048]; a
10 physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
The VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] will manage the overall flow of incoming/outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] is also responsible for determining which sequence is to be followed for executing a process, for example, in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1 and P2, etc. The VNF catalogue [1044] stores the metadata of all the VNFs (also CNFs in
some cases). The network services catalogue [1046] stores the information of the
20 services that need to be run. The network slicing and service chaining manager
[1048] manages the slicing (an ordered and connected sequence of network service/
network functions (NFs)) that must be applied to a specific networked data packet.
The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] is similarly used for CNF lifecycle management.
[0053] The platform foundation services module [106] further comprises a microservices elastic load balancer [1062]; an identity & access manager [1064]; a command line interface (CLI) [1066]; a central logging manager [1068]; and an event routing manager [1070]. The microservices elastic load balancer [1062] is used for maintaining the load balancing of the requests for the services. The identity & access manager [1064] is used for logging purposes. The command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] is responsible for keeping the logs of every service. These logs are generated by the MANO platform [100]. These logs are used for debugging purposes. The event routing manager [1070] is responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[0054] The platform core services module [108] further comprises an NFV infrastructure monitoring manager [1082]; an assure manager [1084]; a
performance manager [1086]; a policy execution engine [1088]; a capacity
monitoring manager [1090]; a release management (mgmt.) repository [1092]; a
configuration manager & (GCT) [1094]; an NFV platform decision analytics
15 [1096]; a platform NoSQL DB [1098]; a platform schedulers and cron jobs [1100];
a VNF backup & upgrade manager [1102]; a micro service auditor [1104]; and a
platform operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs. For e.g., any metrics such as CPU utilization by the VNF. The assure manager
20 [1084] is responsible for supervising the alarms the vendor is generating. The
performance manager [1086] is responsible for managing the performance counters. The policy execution engine (PEEGN) [1088] is responsible for managing all the policies. The capacity monitoring manager (CMM) [1090] is responsible for
sending the request to the PEEGN [1088]. The release management (mgmt.)
repository (RMR) [1092] is responsible for managing the releases and the images of all the vendor network nodes. The configuration manager & (GCT) [1094]
manages the configuration and GCT of all the vendors. The NFV platform decision
analytics (NPDA) [1096] helps in deciding the priority of using the network
resources. It is further noted that the policy execution engine (PEEGN) [1088], the
30 configuration manager & (GCT) [1094] and the (NPDA) [1096] work together. The
platform NoSQL DB [1098] is a database for storing all the inventory (both physical
and logical) as well as the metadata of the VNFs and the CNFs. The platform schedulers and cron jobs [1100] schedules tasks such as, but not limited to, triggering of an event, traversing the network graph, etc. The VNF backup & upgrade manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances not instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that the resources can be released for services running in the MANO architecture [100], thereby assuring that the services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] is used for managing newer instances that are spawning.
[0055] The platform resource adapters and utilities module [112] further comprises
a platform external API adaptor and gateway [1122]; a generic decoder and indexer
15 (XML, CSV, JSON) [1124]; a docker swarm adaptor [1126]; an OpenStack API
adapter [1128]; and an NFV gateway [1130]. The platform external API adaptor and gateway [1122] is responsible for handling the external services (external to the MANO platform [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in the XML, CSV, or JSON format. The docker swarm adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100] for communication. The OpenStack API adapter [1128] is used to connect with the virtual machines (VMs). The NFV gateway [1130] is responsible for providing the path to each service going to/coming from the MANO architecture [100].
[0056] Referring to FIG. 2 an exemplary block diagram of an architecture of a
system [200] for scheduling the task, in accordance with exemplary embodiments
of the present disclosure is illustrated. The system [200] comprises an event routing
manager (ERM) [202], a graphical user (GU) interface [204], a CL interface [206];
30 an edge/ element load balancer (EDGE-LB/ ELB) [208]; an Elastic Search (ES)
[210], a cron and schedulers manager [212], and a virtual network function (VNF)
manager [214]. In general, the GU interface [204], i.e., a graphical user interface (GUI), is a user interface that allows
users to interact with electronic devices through graphical icons and visual
indicators such as secondary notation. Also, CL interface [206] is a text-based user
interface used to run programs, manage computer files and interact with the
5 computer/system. The VNF manager [214] further manages various virtual
machines (VM). The ES [210] further comprises an ES-DB client [2101]. In an
implementation of the present disclosure, the cron and schedulers manager [212]
performs the functions appertaining to the platform schedulers and cron jobs (PSC) [1100] (as shown in FIG. 1) of the MANO architecture [100] (as shown in FIG. 1). The ERM [202] is used to send the requests from a publisher microservice to a subscriber microservice. The ELB [208] is used to send the requests from the active instances of one microservice to another microservice. The cron and
schedulers manager [212] is a process scheduler that allows one to execute
commands, scripts, and programs following specified schedules via input given
15 through either the graphical user (GU) interface [204] or the CL interface [206].
[0057] The cron and schedulers manager [212] carries out the following functions (a minimal illustrative sketch follows this list):
1. Cron Management - It is used to manage all the active and inactive crons created at the platform core services module.
2. Task Management - It is used to manage all the active and inactive tasks created at the platform core services module.
3. FCAPS Management – A Fault, Configuration, Accounting, Performance and Security (FCAPS) management is done for all the counters and alarms created at the platform core services module.
4. Event Handling – As the name suggests, it is performed by managing all the events between microservices.
5. High Availability (HA) and Fault Tolerance – The platform core services module handles all the requests; if one running instance goes down, then another active instance will complete that request.
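By way of a non-limiting illustration, the following sketch (written in Python, with hypothetical class and field names that are not part of the disclosed platform) shows how a registry of active and inactive crons of the kind managed by the cron and schedulers manager [212] might be represented:

    # Illustrative sketch only: a hypothetical in-memory registry of active and
    # inactive crons, loosely mirroring the cron management function above.
    from dataclasses import dataclass


    @dataclass
    class CronJob:
        name: str
        schedule: str          # e.g. a cron expression such as "*/5 * * * *"
        active: bool = True


    class CronRegistry:
        def __init__(self):
            self._jobs = {}

        def register(self, job: CronJob):
            self._jobs[job.name] = job

        def deactivate(self, name: str):
            self._jobs[name].active = False

        def active_jobs(self):
            return [job for job in self._jobs.values() if job.active]


    if __name__ == "__main__":
        registry = CronRegistry()
        registry.register(CronJob(name="audit_microservices", schedule="0 * * * *"))
        registry.register(CronJob(name="collect_fcaps_counters", schedule="*/5 * * * *"))
        registry.deactivate("collect_fcaps_counters")
        print([job.name for job in registry.active_jobs()])  # -> ['audit_microservices']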
[0058] The ES [210] manages the scheduling and execution of events, that is, tasks
that run according to a schedule. The ES [210] keeps the task in the stack data
structure based upon the execution-priority of the task. The ES [210] interacts with the cron and schedulers manager [212] via the ES-DB client [2101]. The VNF manager [214] is a key component of the network functions virtualization (NFV) management and orchestration (MANO) architectural framework (as shown in
FIG.1). The NFV defines standards for compute, storage, and networking resources
that can be used to build virtualized network functions. The VNF manager [214]
works in tandem with the NFV to help standardize the functions of virtual
10 networking and increase the interoperability of software-defined networking
elements.
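By way of a non-limiting illustration only, the following Python sketch approximates a task store ordered by execution priority, in the spirit of the priority-based structure kept by the ES [210]; the class and method names are assumptions made for illustration:

    # Illustrative sketch only: a hypothetical priority-ordered task store.
    import heapq


    class PriorityTaskStore:
        """Keeps tasks ordered by execution priority (lower value = higher priority)."""

        def __init__(self):
            self._heap = []
            self._counter = 0  # tie-breaker to keep insertion order stable

        def push(self, priority: int, task_name: str):
            heapq.heappush(self._heap, (priority, self._counter, task_name))
            self._counter += 1

        def pop_next(self) -> str:
            # Returns the highest-priority task scheduled for execution.
            return heapq.heappop(self._heap)[2]


    if __name__ == "__main__":
        store = PriorityTaskStore()
        store.push(2, "traverse_network_graph")
        store.push(1, "trigger_event")
        print(store.pop_next())  # -> "trigger_event"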
[0059] Referring to FIG. 3 an exemplary block diagram of a computing device
[300] upon which the features of the present disclosure may be implemented in
15 accordance with exemplary implementation of the present disclosure is shown. In
an implementation, the computing device [300] may implement a method for receiving a set of target configuration parameters by utilising a system [400]. In another implementation, the computing device [300] itself implements the method for receiving a set of target configuration parameters using one or more units configured
20 within the computing device [300], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0060] The computing device [300] may include a bus [302] or other
communication mechanism for communicating information, and a hardware
25 processor [304] coupled with bus [302] for processing information. The hardware
processor [304] may be, for example, a general-purpose microprocessor. The
computing device [300] may also include a main memory [306], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [302]
for storing information and instructions to be executed by the processor [304]. The
30 main memory [306] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [304]. Such instructions, when stored in non-transitory storage media
accessible to the processor [304], render the computing device [300] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [300] further includes a read only memory
5 (ROM) [308] or other static storage device coupled to the bus [302] for storing static
information and instructions for the processor [304].
[0061] A storage device [310], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [302] for storing information and
10 instructions. The computing device [300] may be coupled via the bus [302] to a
display [312], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [314], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
15 bus [302] for communicating information and command selections to the processor
[304]. Another type of user input device may be a cursor controller [316], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [304], and for controlling
cursor movement on the display [312]. The input device typically has two degrees
20 of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0062] The computing device [300] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
25 and/or program logic which in combination with the computing device [300] causes
or programs the computing device [300] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [300] in response to the processor [304] executing one or more
sequences of one or more instructions contained in the main memory [306]. Such
30 instructions may be read into the main memory [306] from another storage medium,
such as the storage device [310]. Execution of the sequences of instructions
contained in the main memory [306] causes the processor [304] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0063] The computing device [300] also may include a communication interface
[318] coupled to the bus [302]. The communication interface [318] provides a two-way data communication coupling to a network link [320] that is connected to a
local network [322]. For example, the communication interface [318] may be an
10 integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [318] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
15 implementation, the communication interface [318] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0064] The computing device [300] can send messages and receive data, including
20 program code, through the network(s), the network link [320] and the
communication interface [318]. In the Internet example, a server [330] might
transmit a requested code for an application program through the Internet [328], the
ISP [326], a host [324], the local network [322] and the communication interface
[318]. The received code may be executed by the processor [304] as it is received,
25 and/or stored in the storage device [310], or other non-volatile storage for later
execution.
[0065] Referring to FIG. 4 an exemplary block diagram of a system [400] for
receiving a set of target configuration parameters, in accordance with exemplary
30 implementation of the present disclosure is illustrated. The system [400] comprises
at least one transceiver unit [402], at least one processing unit [404], at least one
communication unit [406], and at least one storage unit [408]. Also, all of the
components/ units of the system [400] are assumed to be connected to each other
unless otherwise indicated below. As shown in the FIG. 4, all units shown within
the system [400] should also be assumed to be connected to each other. Also, in
5 FIG. 4 only a few units are shown, however, the system [400] may comprise
multiple such units or the system [400] may comprise any such numbers of said
units, as required to implement the features of the present disclosure. Further, in an
implementation, the system [400] may reside in a server or the network entity, or the system [400] may be in communication with the network entity to implement
10 the features as disclosed in the present disclosure.
[0066] The system [400] is configured for receiving a set of target configuration
parameters with the help of the interconnection between the components/units of the system [400]. As would be understood, a configuration parameter is a setting or
15 an option that may be adjusted to control the performance of a program or a system.
Further, the configuration parameter represents a variable that may be set to achieve
a desired system behaviour or performance. Furthermore, the target configuration
parameter is a specific instance or a set of parameters that are targeted for a
particular configuration state. Essentially, target configuration parameters are the
20 desired values or settings that the system aims to achieve or apply after executing
certain commands. Further, the set of target configuration parameters may comprise
one or more target configuration parameters.
[0067] In operation, for receiving the set of target configuration parameters, the
25 transceiver unit [402] is utilized. The transceiver unit [402] is configured to receive,
via a Command Line Interface (CLI), a trigger comprising one or more commands.
As would be understood, the command line interface (CLI) is a text-based interface
where a user may input commands that interact with a system's operating system
and receives an output based on the interaction between the user input and the
operating system. Further, the trigger is received, by the transceiver unit [402], at a platform scheduler microservice instance to initiate an action of receiving the set of target configuration parameters, based on the one or more commands received along with the trigger. The trigger may be received based on at least one of a user input and a scheduled event.
5 [0068] Continuing further, the one or more commands comprise at least one
configuration parameter and one or more values associated with the configuration
parameter. Also, the one or more commands may be at least one of an individual command and a series of commands that may be received along with the trigger to perform specific tasks. Further, the command may be a general command or a microservice-specific command. A microservice is a small, loosely coupled
distributed service and each microservice is designed to perform a specific function.
Further, each microservice may be developed and deployed independently. Further,
the microservice breaks a service into small and manageable components of
services.
[0069] Continuing further, the one or more values associated with the configuration parameter may be specific data or information that the configuration parameter represents. Considering an example, if a configuration parameter is
"max_memory_usage," the value associated with it might be a numeric value like
20 "1024MB." The value determines the actual setting or limit that the parameter
enforces within the system.
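By way of a non-limiting illustration, the following Python sketch shows one possible representation of a trigger carrying one or more commands, each with a configuration parameter and an associated value; the type and field names are hypothetical and not part of the disclosure:

    # Illustrative sketch only: a hypothetical representation of a CLI trigger
    # carrying one or more commands, each with a configuration parameter and an
    # associated value (e.g. "max_memory_usage" = "1024MB").
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Command:
        parameter: str   # configuration parameter the command targets
        value: str       # value to apply or query for that parameter


    @dataclass
    class Trigger:
        source: str              # e.g. "CLI"
        commands: List[Command]  # one or more commands received with the trigger


    if __name__ == "__main__":
        trigger = Trigger(
            source="CLI",
            commands=[Command(parameter="max_memory_usage", value="1024MB")],
        )
        for cmd in trigger.commands:
            print(f"{cmd.parameter} -> {cmd.value}")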
[0070] Further, the processing unit [404] is configured to load, at a storage unit
[408], the one or more commands. In one example, when the one or more commands are loaded, the processing unit [404] may retrieve the one or more commands and may prepare the one or more commands for execution. To load the one or more commands, the processing unit [404] may read or transfer the commands from the storage unit [408] into a memory of the processing unit [404].
Further, in another example, when the one or more commands are loaded, the
30 processing unit [404] may make the one or more commands available to the
platform scheduler (PS) microservice instances for the execution.
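By way of a non-limiting illustration, the following Python sketch shows one way the loading step could look, assuming (purely for illustration) a file-based storage unit holding the commands in JSON form:

    # Illustrative sketch only: loading previously stored commands from a storage
    # unit into the processing unit's working memory before execution. The
    # file-based storage and JSON format are assumptions for this sketch.
    import json
    from pathlib import Path
    from typing import List


    def load_commands(storage_path: Path) -> List[dict]:
        """Read the one or more commands from storage and return them in memory."""
        with storage_path.open("r", encoding="utf-8") as handle:
            return json.load(handle)


    if __name__ == "__main__":
        path = Path("commands.json")
        path.write_text(json.dumps([{"parameter": "max_memory_usage", "value": "1024MB"}]))
        commands = load_commands(path)
        print(f"{len(commands)} command(s) loaded and ready for execution")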
[0071] Continuing further, the processing unit [404] is configured to execute, at one
or more platform scheduler (PS) instances, the one or more commands. In one
example, the PS instance acts as a centralized platform to schedule jobs for all
5 microservices. In another example, the one or more PS instances are used to create
and schedule jobs on behalf of one or more microservices. The one or more
microservices may include microservices such as command execution management
and displaying configuration parameters.
10 [0072] Further, it is to be understood that the above-mentioned examples of the one
or more microservices are not intended to limit the scope, applicability,
configuration of the disclosure. The one or more microservices mentioned are only
exemplary and in no manner is to be construed to limit the scope of the present
subject matter. The one or more microservices may also include other examples and
15 such examples would also lie within the scope of the present subject matter.
[0073] Continuing further, the one or more commands are executed by the
processing unit [404], at the one or more PS instances, via the Command Line
Interface (CLI). Further, the one or more commands are executed via a respective
20 PS microservice instance from the one or more PS instances. In an implementation,
to execute the one or more commands via the respective PS microservice instance,
the processing unit [404] routes the one or more commands to the respective or
appropriate PS microservice instance that is capable of processing them. The
respective PS microservice may be determined based on the specific parameters
25 such as, but not limited to, the command to be executed, the frequency of execution,
etc.
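By way of a non-limiting illustration, the following Python sketch shows a simple routing rule that selects a PS microservice instance for a command; the routing table, instance names, and default fallback are assumptions made for this sketch, not the disclosed routing logic:

    # Illustrative sketch only: routing a command to a PS microservice instance
    # based on a simple parameter (here, the command name).
    from typing import Dict


    def route_command(command_name: str, routing_table: Dict[str, str]) -> str:
        """Return the PS instance selected to execute the given command."""
        # Fall back to a default instance when no specific mapping exists.
        return routing_table.get(command_name, "PS-1")


    if __name__ == "__main__":
        table = {"display_config": "PS-1", "clear_fcaps": "PS-2"}
        print(route_command("clear_fcaps", table))   # -> PS-2
        print(route_command("show_status", table))   # -> PS-1 (default)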
[0074] Continuing further, the transceiver unit [402] is configured to receive a
status associated with the one or more PS instances, wherein the status is at least
30 one of a running instance and a down instance. Further, the running instance (or the
running PS microservice instance) may refer to the instance that may be operational
and available to perform the designated one or more commands. Whereas the down
instance (or the down PS microservice instance) may refer to the instance that may
not be operational or not available to perform the designated one or more
commands.
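By way of a non-limiting illustration, the following Python sketch shows how a status report distinguishing a running instance from a down instance, along with its down time, might be assembled; the health data shown is assumed for illustration only:

    # Illustrative sketch only: reporting each PS instance as a running instance
    # or a down instance, together with the down time for down instances.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional


    @dataclass
    class InstanceStatus:
        name: str
        running: bool
        down_since: Optional[datetime] = None

        def describe(self) -> str:
            if self.running:
                return f"{self.name}: running"
            downtime = datetime.now() - self.down_since
            return f"{self.name}: down for {downtime.seconds // 60} minute(s)"


    if __name__ == "__main__":
        statuses = [
            InstanceStatus("PS-1", running=True),
            InstanceStatus("PS-2", running=False,
                           down_since=datetime.now() - timedelta(minutes=12)),
        ]
        for status in statuses:
            print(status.describe())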
[0075] Furthermore, the transceiver unit [402] receives, via the CLI, a set of target configuration parameters based on an execution of the one or more commands.
Once the set of the target configuration parameters is received, the communication
unit [406] may display the set of target configuration parameters through the
command line interface (CLI), based on the execution of the one or more
commands.
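By way of a non-limiting illustration, the following Python sketch shows the received set of target configuration parameters being rendered through a text interface; the parameter names and values are hypothetical:

    # Illustrative sketch only: displaying the received set of target
    # configuration parameters on a command line interface.
    from typing import Dict


    def display_target_parameters(parameters: Dict[str, str]) -> None:
        """Render the received target configuration parameters on the CLI."""
        for name, value in parameters.items():
            print(f"{name} = {value}")


    if __name__ == "__main__":
        received = {"max_memory_usage": "1024MB", "log_level": "DEBUG"}
        display_target_parameters(received)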
[0076] Referring to FIG. 5, an exemplary signalling flow diagram [500] for
receiving a set of target configuration parameters, in accordance with exemplary
implementation of the present disclosure, is illustrated. In one example, the execution of the one or more commands by the one or more PS microservice instances is illustrated in FIG. 5.
[0077] At step S1, the command line interface (CL) triggers the command to be
executed. The CL sends the command to a first platform scheduler (PS-1) for
20 execution.
[0078] The PS-1 executes the command sent by the CL. The command may be
executed by executing a business logic. As would be understood, the business logic
may determine how the one or more PS instances may execute the commands
25 received from the CL in a most efficient way.
[0079] In one example, to determine the business logic, say, a command was sent
to the PS-1 and the command was executed by the PS-1 in, say, time t1. In another
example, say, the said same command was sent again to the PS-1. This time a mode,
30 say, an error mode was applied at the PS-1 to execute the command. The mode may
be decided by a user and may be received by the PS-1 along with the command to
be executed. This time the same command was executed in, say, time t2. Also, the
time t1 taken by the PS-1 to execute the command was more than the time t2 taken
by the PS-1 to execute the said same command received from the CL. Therefore,
the business logic determined for executing the commands at the PS-1 may be to execute the commands using the error mode.
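By way of a non-limiting illustration, the following Python sketch captures the timing comparison described above, selecting the mode whose earlier execution of the same command was faster; the mode names and the timing values are assumptions:

    # Illustrative sketch only: choosing an execution mode for a PS instance by
    # comparing measured execution times, mirroring the t1/t2 comparison above.
    def choose_mode(time_default: float, time_error_mode: float) -> str:
        """Return the mode whose previous execution of the same command was faster."""
        return "error_mode" if time_error_mode < time_default else "default"


    if __name__ == "__main__":
        t1 = 4.2  # seconds taken without the mode applied (assumed value)
        t2 = 2.8  # seconds taken with the error mode applied (assumed value)
        print(choose_mode(t1, t2))  # -> "error_mode"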
[0080] Further, it is to be understood that the above-mentioned examples to
determine the business logic and the example of the “error mode” are not intended
to limit the scope, applicability, configuration of the disclosure. The business logic
determined and the “error mode” mentioned is only exemplary and in no manner is
10 to be construed to limit the scope of the present subject matter. The business logic
and the modes may also include other examples, and such examples would also lie
within the scope of the present subject matter. Also, the above-mentioned
exemplary way to determine the business logic is not limited to the PS-1 only and
may be applied to other one or more PS instances to determine the business logic
15 for a respective PS instance.
[0081] Once the command is executed, at step S2, the CL receives the response from the PS-1 related to the successful execution of the command sent by the CL.
20 [0082] Further, at step S3, the CL again triggers the command to be executed by a
second platform scheduler (PS-2). The CL sends the command to the PS-2 for
execution by the PS-2.
[0083] The PS-2 executes the command received from the CL.
[0084] After execution of the command at PS-2, at step S4, the CL receives the
response from the PS-2 related to the successful execution of the command sent by
the CL.
30 [0085] Thereafter, the results received by the CL from the PS-1 and PS-2 are
displayed.
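By way of a non-limiting illustration, the following Python sketch approximates the signalling flow of FIG. 5 using in-process stubs in place of remote PS microservice instances; the stubbed execute() call and the instance names are assumptions made for illustration only:

    # Illustrative sketch only: the CL sends the command to each PS instance in
    # turn and collects the responses, then displays the results.
    from typing import Dict, List


    class PSInstanceStub:
        def __init__(self, name: str):
            self.name = name

        def execute(self, command: str) -> Dict[str, str]:
            # Stub: acknowledge successful execution of the command.
            return {"instance": self.name, "command": command, "status": "success"}


    def trigger_from_cli(command: str, instances: List[PSInstanceStub]) -> List[Dict[str, str]]:
        """Send the command to each PS instance and collect the responses."""
        responses = []
        for instance in instances:                         # S1/S3: CL sends the command
            responses.append(instance.execute(command))    # S2/S4: CL receives the response
        return responses


    if __name__ == "__main__":
        results = trigger_from_cli("display_config",
                                   [PSInstanceStub("PS-1"), PSInstanceStub("PS-2")])
        for result in results:   # results from PS-1 and PS-2 are displayed
            print(result)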
[0086] Another example, where the one or more commands are to be executed at only one PS instance, is also illustrated in FIG. 5.
[0087] At step S5, the command line interface (CL) triggers the command to be
5 executed at PS-1 only. The CL sends the command to a platform scheduler (PS-1)
for execution.
[0088] The PS-1 executes the command sent by the CL.
[0089] Once the command is executed, at step S2, the CL receives the response from the PS-1 related to the successful execution of the command sent by the CL.
[0090] Thereafter, the result received by the CL from the PS-1 is displayed.
15 [0091] Continuing further, it may be determined, at the CL, whether the commands
may be executed parallelly on the one or more PS instances or at a single PS
instance at a time. Further, it may be determined on the basis of certain parameters such as, but not limited to, availability of resources, priority, etc.
20 [0092] In one example, say, the available resources are limited and may only be
used by a single PS instance at a time. In this case, the commands may be executed
on the basis of the priority of the commands. Say, the command at PS-1 has higher
priority, then the commands may be executed at PS-1 before the commands are
executed on the PS-2. After, the PS-1 executes the commands, thereafter, the
25 commands may be executed by the PS-2.
[0093] In another example, the available resources are enough to be used by the
one or more PS instances (i.e., the PS-1 and the PS-2). In this case, the commands
may be executed on the one or more PS instances (i.e., the PS-1 and the PS-2)
parallelly and the priority of the one or more instances may not be considered.
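By way of a non-limiting illustration, the following Python sketch shows one way this determination could be realised: the commands run on the PS instances in parallel when resources are available, and sequentially in priority order otherwise; thread-based parallelism and the "priority" field are assumptions made for this sketch:

    # Illustrative sketch only: parallel versus sequential execution of commands
    # on PS instances, based on resource availability and priority.
    from concurrent.futures import ThreadPoolExecutor
    from typing import Dict, List


    def execute(task: Dict) -> str:
        # Stand-in for dispatching the command to the named PS instance.
        return f"executed on {task['instance']}"


    def run_on_instances(tasks: List[Dict], resources_available: bool) -> List[str]:
        if resources_available:
            # Enough resources: execute on all PS instances in parallel.
            with ThreadPoolExecutor() as pool:
                return list(pool.map(execute, tasks))
        # Limited resources: execute sequentially, highest priority first.
        ordered = sorted(tasks, key=lambda task: task["priority"])
        return [execute(task) for task in ordered]


    if __name__ == "__main__":
        tasks = [{"instance": "PS-2", "priority": 2}, {"instance": "PS-1", "priority": 1}]
        print(run_on_instances(tasks, resources_available=False))
        # -> ['executed on PS-1', 'executed on PS-2']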
[0094] Further, it is to be understood that the above-mentioned examples to
determine if the command may be executed parallelly on the one or more PS
instances or at a single PS instance at a time are not intended to limit the scope,
applicability, configuration of the disclosure. The parameters to determine
5 mentioned are only exemplary and in no manner is to be construed to limit the scope
of the present subject matter. The parameters may also include other examples, and
such examples would also lie within the scope of the present subject matter.
[0095] Referring to FIG. 6, an exemplary flow diagram of a method [600] for
10 receiving a set of target configuration parameters, in accordance with exemplary
implementation of the present disclosure is illustrated. In an implementation, the
method [600] is performed by the system [400]. Also, as shown in FIG. 6, the
method [600] initiates at step [602].
15 [0096] At step [604], the method [600] comprises receiving, by a transceiver unit
[402] via a command line interface (CLI), a trigger comprising one or more
commands.
[0097] As would be understood, the command line interface (CLI) is a text-based
20 interface where a user may input commands that interact with a system's operating
system and receives an output based on the interaction between the user input and
the operating system. Further, the trigger is received, by the transceiver unit [402],
at a platform scheduler microservice instance to initiate an action of receiving the set of target configuration parameters, based on the one or more commands received along with the trigger. The trigger may be received based on at least one of a user input and a scheduled event.
[0098] Continuing further, the one or more commands comprise at least one
configuration parameter and one or more values associated with the configuration
parameter. Also, the one or more commands may be at least one of an individual command and a series of commands that may be received along with the trigger to perform specific tasks. Further, the command may be a general command or a microservice-specific command. A microservice is a small, loosely coupled
distributed service and each microservice is designed to perform a specific function.
Further, each microservice may be developed and deployed independently. Further,
the microservice breaks a service into small and manageable components of
5 services.
[0099] Continuing further, the one or more values associated with the configuration
parameter may be specific data or information that the configuration parameter
represents. Considering an example, if a configuration parameter is
10 "max_memory_usage," the value associated with it might be a numeric value like
"1024MB." The value determines the actual setting or limit that the parameter
enforces within the system.
[0100] Next at step [606], the method comprises loading, by the processing unit
15 [404] at a storage unit [408], the one or more commands. In one example, the
loading is a process where the processing unit [404] may retrieve the one or more
commands and may prepare the one or more commands for execution. The loading
of the one or more commands may involve reading or transferring the commands
from a storage unit [408] into a processing unit’s [404] memory or workspace so
that they can be processed. Further, another example may include making the one or more commands available to the processing unit [404] for the execution.
[0101] Further, at step [608], the method [600] comprises executing, by the
processing unit [404] at one or more platform scheduler (PS) microservice
25 instances, the one or more commands. In one example the PS microservice instance
acts as a centralized platform to schedule jobs for all microservices. In another
example the one or more PS microservice instances are used to create and schedule
jobs on behalf of the one or more microservices. The one or more microservices
may include microservices such as command execution management and
30 displaying configuration parameters.
[0102] Further, it is to be understood that the above-mentioned examples of the one
or more microservices are not intended to limit the scope, applicability,
configuration of the disclosure. The one or more microservices mentioned are only
exemplary and in no manner is to be construed to limit the scope of the present
5 subject matter. The one or more microservices may also include other examples and
such examples would also lie within the scope of the present subject matter.
[0103] Continuing further, the one or more commands are executed by the
processing unit [404], at the one or more PS microservice instances, via the
10 Command Line Interface (CLI). Further, the one or more commands are executed
via a respective PS microservice instance from the one or more PS microservice
instances. The respective PS microservice may be determined based on the specific
parameters such as, but not limited to, the command to be executed, the frequency
of execution, etc.
[0104] Continuing further, the transceiver unit [402] is further configured for
receiving a status associated with the one or more PS microservice instances. The
status is at least one of a running instance and a down instance. Further, the running
instance (or the running PS microservice instance) may refer to the instance that
20 may be operational and available to perform the designated one or more commands.
Whereas the down instance (or the down PS microservice instance) may refer to the
instance that may not be operational or not available to perform the designated one
or more commands.
[0105] Furthermore, at step [610], the method [600] comprises receiving, by the transceiver unit [402] via the CLI, a set of target configuration parameters based on an execution of the one or more commands. Once the set of target configuration parameters is received, the
method further comprises displaying, by a communication unit [406], the set of
target configuration parameters through the command line interface (CLI), based
30 on the execution of the one or more commands.
[0106] Thereafter, the process ends at step [612].
[0107] Another aspect of the present disclosure may relate to a user equipment
(UE) for receiving a set of target configuration parameters. The UE comprises a
processor and a memory, coupled to the processor, to store instructions for the
5 processor for receiving a set of target configuration parameters. The processor may
receive the set of target configuration parameters based on receiving, by a
transceiver unit [402] via a command line interface (CLI), a trigger comprising one
or more commands. Further, the processor may receive the set of target
configuration parameters based on loading, by a processing unit [404] at a storage
10 unit [408], the one or more commands. The processor may further receive the set
of target configuration parameters based on executing, by the processing unit [404]
at one or more platform scheduler (PS) microservice instances, the one or more commands. Furthermore, the processor may receive the set of target configuration parameters based on receiving, by the transceiver unit [402] via the CLI, a set of target configuration parameters based on an execution of the one or more commands.
[0108] The present disclosure further discloses a non-transitory computer-readable storage medium storing instructions for receiving a set of target configuration parameters, the storage medium comprising executable code which, when executed by one or more
20 units of a system [400], causes a transceiver unit [402] of the system [400] to
receive, via a Command Line Interface (CLI), a trigger comprising one or more
commands. Further, the executable code when executed causes a processing unit
[404] of the system [400] to load, at a storage unit [408] of the system [400], the
one or more commands. Further, the executable code when executed causes the
25 processing unit [404] to execute, at one or more platform scheduler (PS)
microservice instances, the one or more commands. Furthermore, the executable
code when executed causes the transceiver unit [402] to receive, via the CLI, a set
of target configuration parameters based on an execution of the one or more
commands.
[0109] As is evident from the above, the present disclosure provides a technically advanced solution for receiving a set of target configuration parameters. Further, the present solution displays the configuration parameters of the microservice. Also, the present solution sets the value of any configuration parameter mentioned in the commands Excel sheet at runtime. Further, the present solution executes the microservice commands either at a specific instance or at all instances of the microservice. Further, the present solution displays the status of all the instances of the microservice. The present solution further shows information of running instances as well as down instances, along with their down times. Furthermore, the present solution gives information of the down instances of the microservice along with the results of the commands. Moreover, the present solution executes a specific task whenever the user wants.
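By way of a further non-limiting illustration, executing the commands from a commands sheet either at a specific instance or at all instances, including setting a configuration parameter value at runtime, may resemble the following Python sketch. The function run_commands, the row keys ('command', 'parameter', 'value'), and the stubbed instance object are assumptions introduced only for illustration.

```python
from typing import Any, Dict, List, Optional

def run_commands(commands: List[Dict[str, str]],
                 instances: Dict[str, Any],
                 target: Optional[str] = None) -> Dict[str, str]:
    """Execute each command either at a specific instance or at all instances.

    Each row of `commands` is assumed to carry a 'command' text and,
    optionally, a 'parameter'/'value' pair read from the commands sheet;
    `instances` maps an instance identifier to any object exposing a
    hypothetical run(cmd) method.
    """
    results: Dict[str, str] = {}
    selected = [target] if target else list(instances)
    for row in commands:
        command = row["command"]
        if "parameter" in row and "value" in row:
            # Set the configuration parameter mentioned in the sheet at runtime.
            command = f"set {row['parameter']} {row['value']}"
        for instance_id in selected:
            results[f"{instance_id}:{command}"] = instances[instance_id].run(command)
    return results

# Hypothetical usage with stubbed PS instances.
class _StubInstance:
    def run(self, cmd: str) -> str:
        return f"ok: {cmd}"

print(run_commands([{"command": "show status"},
                    {"command": "set", "parameter": "max_jobs", "value": "50"}],
                   {"ps-1": _StubInstance(), "ps-2": _StubInstance()}))
```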
[0110] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
[0111] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A method for receiving a set of target configuration parameters, the method
comprising:
- receiving, by a transceiver unit [402] via a command line interface (CLI),
a trigger comprising one or more commands;
- loading, by a processing unit [404] at a storage unit [408], the one or
more commands;
- executing, by the processing unit [404] at one or more platform scheduler
(PS) microservice instances, the one or more commands; and
- receiving, by the transceiver unit [402] via the CLI, a set of target
configuration parameters based on an execution of the one or more commands.
2. The method as claimed in claim 1, wherein the one or more commands
comprise at least one configuration parameter and one or more values
associated with the configuration parameter.
3. The method as claimed in claim 1, wherein the one or more commands are
executed by the processing unit [404], at the one or more PS instances, via the
Command Line Interface (CLI).
4. The method as claimed in claim 1, wherein the one or more PS instances are
used to create and schedule jobs on behalf of one or more microservices,
wherein the one or more microservices comprises at least one of: command
execution management and displaying configuration parameters.
5. The method as claimed in claim 1, wherein the one or more commands are
executed via a respective PS microservice instance from the one or more PS
instances.
6. The method as claimed in claim 1, further comprises receiving, by the
transceiver unit [402], a status associated with the one or more PS instances,
wherein the status is at least one of a running instance and a down instance.
7. The method as claimed in claim 1, further comprising: displaying, at a
communication unit [406], the set of target configuration parameters through
the command line interface (CLI), based on the execution of the one or more
commands.
8. A system for receiving a set of target configuration parameters, the system
comprises:
- at least a transceiver unit [402], wherein the transceiver unit [402] is
configured to:
o receive, via a Command Line Interface (CLI), a trigger comprising
one or more commands;
- at least a processing unit [404] connected with at least the transceiver unit
[402], wherein the processing unit [404] is configured to:
o load, at a storage unit [408], the one or more commands;
o execute, at one or more platform scheduler (PS) microservice
instances, the one or more commands; and
- the transceiver unit [402] is configured to:
o receive, via the CLI, a set of target configuration parameters based
on an execution of the one or more commands.
9. The system as claimed in claim 8, wherein the one or more commands comprise
at least one configuration parameter and one or more values associated with the
configuration parameter.
10. The system as claimed in claim 8, wherein the one or more commands are
executed by the processing unit [404], at the one or more PS instances, via the
Command Line Interface (CLI).
11. The system as claimed in claim 8, wherein the one or more PS instances are
used to create and schedule jobs on behalf of one or more microservices,
wherein the one or more microservices comprises at least one of: command
execution management and displaying configuration parameters.
12. The system as claimed in claim 8, wherein the one or more commands are
executed via a respective PS microservice instance from the one or more PS
instances.
13. The system as claimed in claim 8, wherein the transceiver unit [402] is further
configured to receive a status associated with the one or more PS instances,
wherein the status is at least one of a running instance and a down instance.
14. The system as claimed in claim 8, further comprising a communication unit [406]
configured to display the set of target configuration parameters through the
command line interface (CLI), based on the execution of the one or more
commands.
15. A user equipment (UE) comprising:
- a processor; and
- a memory coupled to the processor, wherein the memory stores
instructions for the processor to receive a set of target configuration
parameters based on:
o receiving, by a transceiver unit [402] via a command line interface
(CLI), a trigger comprising one or more commands;
o loading, by a processing unit [404] at a storage unit [408], the one or
more commands;
o executing, by the processing unit [404] at one or more platform
scheduler (PS) microservice instances, the one or more commands; and
o receiving, by the transceiver unit [402] via the CLI, a set of target
configuration parameters based on an execution of the one or more
commands.