
System And Method For Dynamic Slice Scheduling In A Network

Abstract: The present disclosure relates to a system (108) and method (500) for dynamic slice scheduling in a network comprising a processing engine (208) configured to determine a plurality of Virtual functions (VFs) and a plurality of resources in the network, split the determined plurality of VFs, map the plurality of split VFs and the plurality of associated resources inside at least one main container, receive at least one request for creating at least one new data plane slice, monitor by an agent interface (308) the at least one received request, create by the agent interface (308) at least one new data plane instance, a memory (204) configured to store the plurality of resources, an interface(s) (206) configured to communicate with the processing engine and a database (210) wherein the database (210) is configured to store the at least one new data plane. FIGURE 3


Patent Information

Filing Date: 06 July 2023
Publication Number: 42/2024
Publication Type: INA
Invention Field: COMMUNICATION
Grant Date: 2025-09-29

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower 7, 15B, Beverly Park, Sec 4, Koper Khairane, Navi Mumbai - 400709, Maharashtra, India.
2. JHA, Adityakar
B1-305, G21 Avenue, Sector-83, Gurgaon, Haryana - 122 004, India.
3. RANJAN, Anu
S/O. Kalendra Kumar Singh, Vill - Ghanshyamchak, P.O.- Maheshpur, P.S.-Sanhoula, Dist- Bhagalpur - 813205, Bihar, India.
4. MALHOTRA, Pankaj
5/71 Subhash Nagar, New Delhi – 110027, India.
5. SENGUPTA, Swarup
I-1201A, The Coralwood, Sector 84, Gurgaon – 122004, Haryana, India.
6. MAMGAIN, Ranjan
Flat No. 35, Him Vihar Apartments, Plot No. 8, I. P. Extension, Patparganj, Delhi - 110092, India.
7. VASHISHTH, Yog
F-88A, FF, Sushant Lok 3, Sector 57, Gurgaon, Haryana - 122001, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR DYNAMIC SLICE SCHEDULING IN A NETWORK
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad -
380006, Gujarat, India; Nationality : India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure generally relates to systems and methods for
dynamic data plane management in a wireless telecommunications network. More particularly, the present disclosure relates to a system and a method for dynamic slice scheduling in a network.
BACKGROUND OF THE INVENTION
[0003] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0004] Currently, data planes are pre-defined and pre-configured with a resource provision. However, with dynamic slicing requirements, the static behaviour of the data planes may result in one or more issues in the deployment of services (such as the non-availability of compute or network resources). Further, current systems include containerization with only the required resources and mapping of one or more Virtual functions (VFs) in a single instance at the time of setup. This forces a container-creating processor to perform the same exercise at each request time and may add to the overall processing time.
[0005] There is, therefore, a need in the art to provide a system and a method
that can mitigate the problems associated with the prior arts.
OBJECTS OF THE INVENTION
[0006] It is an object of the present disclosure to provide a system and a
method that provide dynamic spinning of data planes based on a real-time slicing
requirement.
[0007] It is an object of the present disclosure to provide a system and a
method that provide mapping of Virtual functions (VFs), CPU cores, and memory
at an initial stage of container creation, to be dynamically provisioned by a slice
scheduler at a later stage.
[0008] It is an object of the present disclosure to provide a system and a
method where a user plane function (UPF) slice scheduler microservice is
responsible for dynamically spinning a UPF data plane instance for a new slice
based on slice Service level agreement (SLA) requirements.
SUMMARY
[0009] In an exemplary embodiment, the present invention discloses a
method for dynamic slice scheduling in a network. The method comprises determining a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network. The method comprises splitting the determined plurality of VFs based on an association between at least one VF with at least one Network interface card (NIC). The method comprises mapping the plurality of split VFs and the plurality of associated resources inside at least one main container. The method comprises receiving at least one request for creating at least one new data plane slice. The method comprises monitoring, by an agent interface, the at least one received request. The method comprises creating, by the agent interface, at least one new data plane instance inside the at least one main container for the at least one received request.

[0010] In some embodiments, the at least one new data plane instance is
created based on at least one Service level agreement (SLA) and a plurality of available resources inside the at least one main container.
[0011] In some embodiments, the agent interface is triggered to generate the
at least one new data plane instance inside the at least one main container using the plurality of available resources.
[0012] In some embodiments, the method further comprises a step of
spinning at least one new data plane instance based on a matching between a plurality of pre-defined parameters in at least one SLA and the plurality of available resources inside at least one container.
[0013] In some embodiments, the agent interface informs at least one slice
scheduler after the successful creation of the at least one new data plane instance inside the at least one main container.
[0014] In an exemplary embodiment, the present invention discloses a
system for dynamic slice scheduling in a network. A processing engine (208) is configured to determine a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network. The system is configured to split the determined plurality of VFs based on an association between at least one VF with at least one network interface card (NIC). The system is configured to map the plurality of split VFs and the plurality of associated resources inside at least one main container. The system is configured to receive at least one request for creating at least one new data plane slice. The system is configured to monitor, by an agent interface, the at least one received request. The system is configured to create, by the agent interface, at least one new data plane instance inside the at least one main container for the at least one received request. The system further includes a memory configured to store the plurality of resources, an interface(s) configured to communicate with the processing engine, and a database coupled with the processing engine, wherein the database is configured to store the at least one new data plane.

[0015] In some embodiments, the at least one new data plane instance is
created based on at least one Service level agreement (SLA) and a plurality of available resources inside the at least one main container.
[0016] In some embodiments, the system is further configured to trigger the
agent interface to spawn the at least one new data plane instance inside the at least one main container using the plurality of available resources.
[0017] In some embodiments, the system is further configured to spin the at
least one new data plane instance based on a matching between a plurality of pre-defined parameters defined in the at least one SLA and the plurality of available resources inside the at least one container.
[0018] In some embodiments, the agent interface informs at least one slice
scheduler after the successful creation of the at least one new data plane instance inside the at least one main container.
[0019] In an embodiment, the present invention discloses a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to determine a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network, split the determined plurality of VFs based on an association between at least one VF with at least one network interface card (NIC), map the plurality of split VFs and the plurality of associated resources inside at least one main container, receive at least one request for creating at least one new data plane slice, monitor, by an agent interface, the at least one received request, and create, by the agent interface, at least one new data plane instance inside the at least one main container for the at least one received request.
[0020] The present disclosure discloses a user equipment configured to determine a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network, split the determined plurality of VFs based on an association between at least one VF with at least one Network interface card (NIC), map the plurality of split VFs and the plurality of associated resources inside at least one main container, receive at least one request for creating at least one new data plane slice, monitor, by an agent interface, the at least one received request, and create, by the agent interface, at least one new data plane instance inside the at least one main container for the at least one received request.
[0021] The foregoing general description of the illustrative embodiments
and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF DRAWINGS
[0022] The accompanying drawings, which are incorporated herein and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems, in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components,
or circuitry commonly used to implement such components.
[0023] FIG. 1 illustrates an exemplary network architecture for
implementing a system for dynamic slice scheduling in a network, in accordance
with an embodiment of the present disclosure.
[0024] FIG. 2 illustrates a schematic block diagram of the system, in
accordance with an embodiment of the present disclosure.
[0025] FIG. 3 illustrates a schematic architecture diagram of a user plane
function (UPF), in accordance with embodiments of the present disclosure.
[0026] FIG. 4 illustrates a schematic flow diagram for a data plane slice
creation flow, in accordance with embodiments of the present disclosure.
[0027] FIG. 5 illustrates a flow diagram of a method for dynamic slice
scheduling in the network, in accordance with embodiments of the present
disclosure.

[0028] FIG. 6 illustrates an exemplary computer system in which or with
which the system and the method are implemented, in accordance with embodiments of the present disclosure.
[0029] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Network architecture
102-1, 102-2…102-N - A plurality of users
104-1, 104-2…104-N - A plurality of computing devices
106 - Network
108 - System
202 - A plurality of processors
204 - Memory
206 - A plurality of interfaces
208 - Processing engine
210 - Database
212 - Data parameter engine
214 - Other engine(s)
300 - User plane function (UPF) slice scheduler
302 – Resource inventory
304 – High availability (HA) module
306 - Resource scheduler
308 - Agent interface
310 - Network services platform (NSP) interface
312 - Container management platform
400 – Flow diagram for the data plane slice creation
410 – Creation of all data plane containers at the time of cluster initiation, catering
to all available resources
412 – The agent interface running inside the UPF slice scheduler may validate the
request

414 – If the request is invalid, it is sent to the NSP interface
416 – The agent interface may register with the slice scheduler with all the available
resources to calculate resource requirements
418 – If the resource is not available, it is sent to the NSP interface
420 – The UPF slice scheduler may include all the resources available in each host
running on multiple containers
422 – When the slice creation request is received, the NSP interface may check the
resources available in the containers and the SLAs to meet and trigger the agent
interface to spin a data plane instance inside the container using the available
resources
424 – After the data plane instance is created, the agent interface may inform the
slice scheduler (404) of its successful creation, and the slice scheduler may update
the resource inventory
426 – Add the SA group along with details of the UE internet protocol (IP) subnet,
Data network name (DNN), slice Identification (ID), Network slice instance (NSI)
ID, etc.
428, 430 – If step 426 is successful, the result is sent to the UPF slice scheduler and
further to the NSP interface
500 – Method
502 – Determining a plurality of Virtual functions (VFs) and a plurality of resources
associated with the plurality of VFs in the network
504 – Splitting the determined plurality of VFs based on an association between at
least one VF with at least one network interface card (NIC)
506 – Mapping the plurality of split VFs and the plurality of associated resources
inside at least one main container
508 – Receiving at least one request for creating at least one new data plane slice
510 – Monitoring, by an agent interface, the at least one received request
512 – Creating, by the agent interface, at least one new data plane instance inside
the at least one main container for the at least one received request
600 - Computer system
610 – External storage device

620 – Bus
630 – Main memory
640 – Read-only memory
650 – Mass storage device
660 – Communication port
670 – Processor
DETAILED DESCRIPTION
[0030] In the following description, for explanation, various specific details
are outlined in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0031] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0032] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0033] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0034] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0035] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[0036] The terminology used herein is to describe particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0037] The present invention provides a slice scheduler and a method of spinning up the data plane to enable optimal utilization of resources. The system and the method provide resource optimization, quick deployment, negligible downtime, and resource isolation. In an aspect, the present invention can be implemented in a communication network for network deployment and management of resources.
[0038] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGs. 1-6.
[0039] FIG. 1 illustrates an example of a network architecture (100) for implementing a system (108) for dynamic slice scheduling in a network (106), in accordance with an embodiment of the present disclosure.
[0040] As illustrated in FIG. 1, one or more computing devices (104-1, 104-2…104-N) may be connected to the system (108) through the network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) may be collectively referred to as ‘the computing devices (104)’ and individually referred to as ‘the computing device (104)’. One or more users (102-1, 102-2…102-N) may provide one or more requests to the system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred to as ‘the users (102)’ and individually referred to as ‘the user (102)’. Further, the computing devices (104) may also be referred to as ‘the user equipment (UE) (104)’ or ‘the UEs (104)’ throughout the disclosure.
[0041] In an embodiment, the computing device (104) may include, but not be limited to, a mobile and a laptop. Further, the computing device (104) may include one or more in-built accessories or externally coupled accessories, including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, or a keyboard. Furthermore, the computing device (104) may include a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user (102), such as a touchpad, touch-enabled screen, electronic pen, and the like, may be used.
[0042] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, and some combinations thereof. The network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a public-switched telephone network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combinations thereof.
[0043] FIG. 2 illustrates a schematic block diagram (200) of the system (108), in accordance with an embodiment of the present disclosure.
[0044] Referring to FIG. 2, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read-only memory (EPROM), flash memory, and the like.
[0045] In an embodiment, the system (108) may include one or more interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210). Further, the processing engine(s) (208) may include a data parameter engine (212) and other engine(s) (214). In an embodiment, the other engine(s) (214) may include, but not be limited to, a data ingestion engine, an input/output engine, and a notification engine.
[0046] In an embodiment, the processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0047] Although FIG. 2 shows exemplary components of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
[0048] FIG. 3 illustrates a schematic architecture diagram (300) for a user plane function (UPF) slice scheduler (300) provided in the processing engine (208) for managing data plane slices, in accordance with embodiments of the present disclosure.
[0049] As illustrated in FIG. 3, the UPF slice scheduler (300) comprises a resource inventory (302) and a High Availability (HA) module (304) connected to a resource scheduler (306). Further, an agent interface (308), a network service platform (NSP) interface (310), and a container management platform (312) are connected to the resource scheduler (306).
[0050] The UPF slice scheduler (300) may maintain the resource inventory (302) of resources for all UPF clusters at the edge/site/circle level. The resource inventory (302) includes details such as processing power (vCPUs), memory, and network interface cards (NICs) within each container.
[0051] In an embodiment, the HA module (304) may maintain the high availability of the resource scheduler (306), ensuring that it consistently performs its tasks without interruption.
[0052] The UPF slice scheduler (300) may include an application programming interface (API) for the creation and deletion of a slice. For each slice creation request, the UPF slice scheduler (300) may identify the most suitable UPF cluster with sufficient resources available (as per service level agreement (SLA) requirements) to create data plane (DP) instances. Further, the resource inventory (302) is configured to identify the most suitable UPF cluster with sufficient resources to meet the requested SLA requirements. The resource scheduler (306) may compute the number of resources required based on the throughput requirement for the slice.
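The throughput-driven resource computation and cluster selection described above can be sketched as follows. The linear throughput-to-vCPU ratio, the hugepage ratio, the record layout, and all function names are illustrative assumptions, not details taken from the disclosure.

```python
import math

# Illustrative assumption: each vCPU sustains ~5 Gbps of data plane throughput
# and each vCPU needs 2 hugepages. These ratios are not specified in the
# disclosure and would be tuned per deployment.
GBPS_PER_VCPU = 5
HUGEPAGES_PER_VCPU = 2

def required_resources(throughput_gbps):
    """Compute the vCPUs and hugepages needed to meet a slice's throughput SLA."""
    vcpus = math.ceil(throughput_gbps / GBPS_PER_VCPU)
    return {"vcpus": vcpus, "hugepages": vcpus * HUGEPAGES_PER_VCPU}

def select_cluster(clusters, need):
    """Pick the first UPF cluster whose free resources cover the requirement."""
    for cluster in clusters:
        free = cluster["free"]
        if free["vcpus"] >= need["vcpus"] and free["hugepages"] >= need["hugepages"]:
            return cluster["id"]
    return None  # no cluster can satisfy the SLA; reject via the NSP interface

# Hypothetical inventory snapshot: fully qualified circle-site-cluster IDs.
clusters = [
    {"id": "mum-site1-c1", "free": {"vcpus": 2, "hugepages": 8}},
    {"id": "mum-site1-c2", "free": {"vcpus": 8, "hugepages": 32}},
]
need = required_resources(20)  # a 20 Gbps slice -> 4 vCPUs, 8 hugepages
print(select_cluster(clusters, need))  # -> mum-site1-c2
```

A request the inventory cannot satisfy returns `None`, which mirrors the "resource not available" branch of the slice creation flow.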
[0053] In an embodiment, the SLA is a formal agreement between a service provider (such as a network operator) and a customer (such as another company or an individual user) that defines the expected performance of a service. Further, a plurality of pre-defined parameters in the at least one SLA includes metrics related to the data plane, such as throughput, latency, or availability.
[0054] In an embodiment, the plurality of available resources may refer to the various types of resources available within a single “main container.” The resources may include, but are not limited to, virtual CPUs (vCPUs), memory, Network interface cards (NICs), and Virtual functions (VFs).
[0055] In an embodiment, the agent interface (308) may facilitate communication between the resource scheduler (306) and slice agents operating on data plane servers. The agent interface (308) executes commands to create, monitor, and delete data plane instances based on the instructions of the resource scheduler (306). The agent interface (308) ensures seamless resource management and instance control interaction.
[0056] In an embodiment, the NSP interface (310) may enable the UPF slice scheduler (300) to interact with the broader network management system. The NSP interface (310) handles requests for creating and deleting slices and provides status updates on resource usage and slice operations. The NSP interface (310) ensures that the UPF slice scheduler (300) aligns with the network’s overall management policies and service requirements.
[0057] In an embodiment, the container management platform (312) may be connected with the resource scheduler (306). The container management platform (312) may manage the addition and deletion of service availability (SA) groups, along with details such as the UE IP subnet, data network name (DNN), slice identification (ID), and network slice instance (NSI) ID.

[0058] In an embodiment, for each slice, one service availability (SA) group may be created. Each SA group may include a pair of DP instances in an active/standby configuration. Further, each slice may communicate with the agent interface (308) on each host for spinning up DP instances with given resources. The slice may communicate with the container management platform (312) for the addition of the SA group along with details of the UE IP subnet, DNN, slice ID, NSI ID, etc.
[0059] In an embodiment,
1. the UPF slice scheduler (300) may communicate with the agent interface
10 (308) running (within the container) on data plane servers to spin data plane
processes with given:
a. vCPUs (specific vCPU range)
b. Memory (4K Memory)
c. Number of Hugepages
d. VF-Id
e. Slice Id
2. The agent interface (308) for slice deletion,
a. Slice Id
3. UCM, for addition of SA Group with details,
a. UE IP Subnet,
b. DNN,
c. Slice Id
d. NSI Id (Optional)
e. VLAN Id
4. UCM, for deletion of SA group,
a. Slice Id
b. NSI Id (Optional)
UCM shall remove the SA group from the configuration.
5. Northbound RESTful API for slice creation,
a. UE IP Subnet
b. Overall Throughput (In terms of Gbps)

c. VLAN Id
d. DNN
e. Slice-Id
f. NSI Id
6. Northbound RESTful interface for slice deletion,
a. Slice-Id
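The parameter sets exchanged above can be captured as simple request records; a minimal sketch follows, assuming illustrative field names (`vcpu_range`, `memory_4k_mb`, and so on) that are not fixed by the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class SpinRequest:
    """Parameters the UPF slice scheduler passes to the agent interface
    when spinning up a data plane process (item 1 above)."""
    vcpu_range: Tuple[int, int]  # specific vCPU range, e.g. cores 4-7
    memory_4k_mb: int            # 4K-page memory, in MB
    hugepages: int               # number of hugepages
    vf_id: int
    slice_id: str

@dataclass(frozen=True)
class DeleteRequest:
    """Slice deletion (item 2 above) needs only the slice identifier."""
    slice_id: str

req = SpinRequest(vcpu_range=(4, 7), memory_4k_mb=2048,
                  hugepages=8, vf_id=3, slice_id="slice-1")
```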
Inventory requirements
1. The resource inventory (302) may maintain an inventory of all UPF clusters
deployed at edge/site/circle. The fully qualified cluster ID may include a
circle-site-cluster number to uniquely identify the cluster.
2. For each cluster, the details/addresses of each individual server, along with its type (e.g., CP/DP), shall be maintained.
3. For each data plane server, the slice may maintain an inventory of the following resources in terms of total, allocated, and available.
a. Virtual central processing unit (vCPU) IDs (list)
b. 4K Memory (random access memory (RAM)/Heap)
c. Number of HugePages
d. VF Ids (List)
4. For each slice, the resource inventory (302) may contain a list of allocated
resources on each server with details of the cluster identification.
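The per-server total/allocated/available bookkeeping of points 3 and 4 can be sketched as below; this is a minimal illustration, and the class name and method signatures are assumptions rather than the disclosed design.

```python
class DataPlaneInventory:
    """Tracks total, allocated, and available resources for one DP server,
    in the spirit of the resource inventory (302)."""

    def __init__(self, vcpu_ids, mem_4k_mb, hugepages, vf_ids):
        self.total = {"vcpus": set(vcpu_ids), "mem_4k_mb": mem_4k_mb,
                      "hugepages": hugepages, "vfs": set(vf_ids)}
        self.allocated = {"vcpus": set(), "mem_4k_mb": 0,
                          "hugepages": 0, "vfs": set()}
        self.by_slice = {}  # per-slice allocation records (point 4)

    def available(self):
        # Available = total minus allocated, per resource type (point 3).
        return {"vcpus": self.total["vcpus"] - self.allocated["vcpus"],
                "mem_4k_mb": self.total["mem_4k_mb"] - self.allocated["mem_4k_mb"],
                "hugepages": self.total["hugepages"] - self.allocated["hugepages"],
                "vfs": self.total["vfs"] - self.allocated["vfs"]}

    def allocate(self, slice_id, vcpus, mem_4k_mb, hugepages, vfs):
        avail = self.available()
        if not (set(vcpus) <= avail["vcpus"] and set(vfs) <= avail["vfs"]
                and mem_4k_mb <= avail["mem_4k_mb"]
                and hugepages <= avail["hugepages"]):
            raise ValueError("insufficient resources on this DP server")
        self.allocated["vcpus"] |= set(vcpus)
        self.allocated["mem_4k_mb"] += mem_4k_mb
        self.allocated["hugepages"] += hugepages
        self.allocated["vfs"] |= set(vfs)
        self.by_slice[slice_id] = {"vcpus": list(vcpus), "mem_4k_mb": mem_4k_mb,
                                   "hugepages": hugepages, "vfs": list(vfs)}

inv = DataPlaneInventory(range(8), 4096, 16, [0, 1, 2, 3])
inv.allocate("slice-1", vcpus=[0, 1], mem_4k_mb=1024, hugepages=4, vfs=[0])
```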
Inventory RESTful APIs (For Northbound)


1. Add resource information with respect to (w.r.t.) Cluster/DP
2. Get Allocated resource information w.r.t.
a. Slice-Id
b. Cluster
c. Cluster + Data Plane
3. Get total resource information w.r.t.
a. Circle
b. Site
c. Cluster
d. DP


4. Get free resource information w.r.t.
a. Circle
b. Site
c. Cluster
d. DP
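The "get free resource information" queries above aggregate at circle, site, cluster, or DP granularity. A minimal in-memory stand-in is sketched below; the key layout `(circle, site, cluster, dp)` and the record fields are assumptions for illustration, not the API's actual schema.

```python
# Assumed in-memory view of the inventory; one record per DP server.
inventory = {
    ("circle1", "site1", "cl1", "dp1"): {"total_vcpus": 16, "allocated_vcpus": 6},
    ("circle1", "site1", "cl1", "dp2"): {"total_vcpus": 16, "allocated_vcpus": 16},
    ("circle1", "site2", "cl2", "dp1"): {"total_vcpus": 8, "allocated_vcpus": 0},
}

def free_vcpus(circle=None, site=None, cluster=None, dp=None):
    """Sum free vCPUs over every record matching the given scope filters,
    mirroring the 'get free resource information w.r.t.' queries."""
    total = 0
    for (c, s, cl, d), rec in inventory.items():
        if circle is not None and c != circle:
            continue
        if site is not None and s != site:
            continue
        if cluster is not None and cl != cluster:
            continue
        if dp is not None and d != dp:
            continue
        total += rec["total_vcpus"] - rec["allocated_vcpus"]
    return total
```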
[0060] FIG. 4 shows a schematic flow diagram (400) for a data plane
illustrating a slice creation flow, in accordance with embodiments of the present
disclosure.
[0061] As illustrated in FIG. 4, in an embodiment, resource isolation may
be required in each created slice. This is required for isolating resource
management. Data plane instances may be implemented where inventory
management may be required based on the SLA requirement.
[0062] In an embodiment, the flow diagram (400) for the data plane slice
creation may include the following steps.
[0063] Step 410: Creation of all data plane containers at the time of cluster
initiation, catering to all available resources.
[0064] Step 412: The agent interface (308) running inside the UPF slice
scheduler (300) may validate the request.
[0065] Step 414: If the request is invalid, it is sent to the NSP interface
(310).
[0066] Step 416: The agent interface (308) may register with the UPF slice
scheduler (300) with all the available resources to calculate resource requirements.
[0067] Step 418: If the resource is unavailable, it is sent to the NSP interface
(310).
[0068] Step 420: The UPF slice scheduler (300) may maintain a view of all the
resources available in each host running multiple containers.
[0069] Step 422: When the slice creation request is received, the NSP
interface (310) may check the resources available in the containers and the SLAs to
meet and trigger the agent interface (308) to spin a data plane instance inside the
container using the available resources.

[0070] Step 424: After the data plane instance is created, the agent interface
(308) may inform the UPF slice scheduler (300) of its successful creation, and the UPF slice scheduler (300) may update the resource inventory (302).
[0071] Step 426: Add the SA group along with details of the UE internet
protocol (IP) subnet, DNN, slice ID, NSI ID, etc.
[0072] Steps 428, 430: If step 426 is successful, a success indication is sent to the UPF slice
scheduler (300) and further to the NSP interface (310).
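The slice creation flow of steps 410 to 430 can be sketched as one sequence; the collaborator classes below are simplified stand-ins (their names and methods are assumptions) for the agent interface (308), UPF slice scheduler (300), and container management platform (312).

```python
class Agent:
    """Stand-in for the agent interface (308)."""
    def validate(self, req):
        return bool(req.get("slice_id"))
    def spin_instance(self, req):
        return {"slice_id": req["slice_id"], "status": "running"}

class Scheduler:
    """Stand-in for the UPF slice scheduler (300) and its inventory (302)."""
    def __init__(self, free_vcpus):
        self.free_vcpus = free_vcpus
        self.inventory = []
    def has_resources(self, req):
        return req.get("vcpus", 0) <= self.free_vcpus
    def update_inventory(self, instance):
        self.inventory.append(instance)

class UCM:
    """Stand-in for the container management platform (312)."""
    def __init__(self):
        self.sa_groups = []
    def add_sa_group(self, req):
        self.sa_groups.append(req["slice_id"])

def create_slice(req, scheduler, agent, ucm):
    if not agent.validate(req):           # steps 412/414: validate, else report
        return "rejected"
    if not scheduler.has_resources(req):  # steps 416/418: resource check
        return "no-resources"
    instance = agent.spin_instance(req)   # step 422: spin DP instance in container
    scheduler.update_inventory(instance)  # step 424: update resource inventory
    ucm.add_sa_group(req)                 # step 426: add SA group with details
    return "created"                      # steps 428/430: success reported upward
```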
[0073] FIG. 5 illustrates a schematic flow diagram for a method (500) for
dynamic slice scheduling in the network (106), in accordance with an embodiment
of the present disclosure.
[0074] As illustrated in FIG. 5, the following steps of the method (500) may
be implemented by the system (108) for dynamic slice scheduling in the network
(106).
[0075] At step 502, the method (500) may be configured to determine a
plurality of Virtual Functions (VFs) available in the network (106). The VFs are
essentially software components that provide specific network functionalities (like forwarding packets). The method (500) may also be configured to identify a plurality of resources associated with these VFs. These resources likely include processing power (vCPUs), memory, and network interface cards (NICs) that the
VFs rely on to function.
[0076] At step 504, the method (500) may be configured to split the
determined plurality of VFs based on a correlation between at least one VF with at least one network interface card (NIC). This involves segmenting the identified VFs into distinct units based on their association with specific NICs, ensuring that each
VF is optimally aligned with the hardware resources available.
[0077] At step 506, the method (500) may be configured to map the
plurality of split VFs and the plurality of associated resources inside at least one main container. This step entails organizing and allocating the split VFs and their resources within a primary container framework, facilitating efficient resource
management and utilization.
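Steps 504 and 506 can be sketched as a grouping-and-mapping operation; a minimal illustration follows, assuming a `(vf_id, nic_id)` pair representation of the VF-to-NIC correlation, which is not specified by the disclosure.

```python
from collections import defaultdict

def split_vfs_by_nic(vf_nic_pairs):
    """Group VF IDs by the NIC they are correlated with (step 504 sketch)."""
    groups = defaultdict(list)
    for vf_id, nic_id in vf_nic_pairs:
        groups[nic_id].append(vf_id)
    return dict(groups)

# Step 506 sketch: map the split VFs and associated resources into one
# main container (a plain dict here, purely for illustration).
main_container = {
    "vf_map": split_vfs_by_nic([(0, "nic0"), (1, "nic0"), (2, "nic1")]),
    "resources": {"vcpus": 16, "mem_4k_mb": 4096},
}
```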

[0078] At step 508, the method (500) may be configured to receive at least
one request for creating at least one new data plane slice. This involves handling
incoming requests for the creation of new data plane slices, which are essential for
supporting different network services and applications.
[0079] At step 510, the method (500) may be configured to monitor, by the
agent interface (308), the at least one received request. In this step, the
agent interface (308) keeps track of the received requests, ensuring they are
processed in a timely and accurate manner.
[0080] At step 512, the method (500) may be configured to create, by the
agent interface (308), at least one new data plane instance inside the at
least one main container for the at least one received request. This final step
involves the actual instantiation of the data plane slice within the main container,
using the resources mapped earlier to fulfil the request.
[0081] In an embodiment, the creation considers the requested Service
Level Agreement (SLA) and the available resources within the container. The SLA
specifies performance requirements like throughput and latency, and the available resources determine if the container can meet those demands.
[0082] In an embodiment, the agent interface (308) is triggered to generate
the data plane instance using the available resources within the container. This
suggests an external mechanism might initiate the creation process based on the
received request and identified resources.
[0083] In an embodiment, the creation process involves spinning the data
plane instance based on a matching between the pre-defined parameters in the SLA (like throughput) and the available resources within the container. This ensures the
created instance has sufficient resources to meet the promised service levels.
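The SLA-to-resource matching described above can be sketched as a simple comparison; the parameter names (`throughput_gbps`, `vcpus`) are illustrative assumptions, since the disclosure names throughput and latency but not a concrete schema.

```python
def sla_satisfied(sla, available):
    """Return True when every requested SLA parameter is covered by the
    container's available resources (matching sketch for the spin step)."""
    return all(available.get(key, 0) >= needed for key, needed in sla.items())

# A container with headroom satisfies a smaller request...
ok = sla_satisfied({"throughput_gbps": 5, "vcpus": 4},
                   {"throughput_gbps": 10, "vcpus": 8, "hugepages": 16})
```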
[0084] In an embodiment, after successful creation, the agent interface
(308) informs the UPF slice scheduler (300).
[0085] FIG. 6 illustrates an exemplary computer system (600) in which or
with which the system (108) and the method (500) of the present disclosure may be
implemented, in accordance with an embodiment of the present disclosure.

[0086] As shown in FIG. 6, the computer system (600) may include an
external storage device (610), a bus (620), a main memory (630), a read-only
memory (640), a mass storage device (650), a communication port(s) (660), and a
processor (670). A person skilled in the art will appreciate that the computer system
(600) may include more than one processor and communication ports. The
processor (670) may include various modules associated with embodiments of the present disclosure. The communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other
existing or future ports. The communication port(s) (660) may be chosen
depending on a network, such as a Local Area Network (LAN), Wide Area Network
(WAN), or any network to which the computer system (600) connects.
[0087] In an embodiment, the main memory (630) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art.
The read-only memory (640) may be any static storage device(s), e.g., but not
limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (670). The mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions.
Exemplary mass storage solutions include, but are not limited to, Parallel Advanced
Technology Attachment (PATA) or Serial Advanced Technology Attachment
(SATA) hard disk drives or solid-state drives (internal or external, e.g., having
Universal Serial Bus (USB) and/or Firewire interfaces).
[0088] In an embodiment, the bus (620) may communicatively couple the
processor(s) (670) with the other memory, storage, and communication blocks. The
bus (620) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670)
to the computer system (600).

[0089] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus (620)
to support direct operator interaction with the computer system (600). Other
operator and administrative interfaces can be provided through network
connections connected through the communication port(s) (660). The components
described above are meant only to exemplify various possibilities. In no way should
the aforementioned exemplary computer system (600) limit the scope of the present
disclosure.
[0090] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that
many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing
descriptive matter is to be implemented merely as illustrative of the disclosure and
not as a limitation.
ADVANTAGES OF THE INVENTION
[0091] The present disclosure provides a system and a method that splits
multiple virtual functions (VFs) associated with a network interface card (NIC) and
maps all the VFs inside the main container.
[0092] The present disclosure provides a system, and a method that creates
a new data plane (DP) instance inside the main container based on a service level
agreement (SLA) and available resources within the container.
[0093] The present disclosure provides a system and a method that enables
optimal utilization of resources.
[0094] The present disclosure provides a system and a method that provides
resource optimization, quick deployment, negligible downtime, and resource
isolation.

WE CLAIM:
1. A method (500) for dynamic slice scheduling in a network, the method (500)
comprising:
determining (502) a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network;
splitting (504) the determined plurality of VFs based on a correlation between at least one VF with at least one Network interface card (NIC);
mapping (506) the plurality of split VFs and the plurality of associated resources inside at least one main container;
receiving (508) at least one request for creating at least one new data plane slice;
monitoring (510), by an agent interface (308), the at least one received request; and
creating (512), by the agent interface (308), at least one new data plane instance inside the at least one main container for the at least one received request.
2. The method (500) as claimed in claim 1, wherein the at least one new data
plane instance is created based on at least one Service level agreement (SLA) and a plurality of available resources inside the at least one main container.
3. The method (500) as claimed in claim 1, wherein the agent interface (308) is
triggered to generate the at least one new data plane instance inside the at least one main container using the plurality of available resources.

4. The method (500) as claimed in claim 2, further comprising spinning the at
least one new data plane instance based on a matching between a plurality of pre-defined parameters in the at least one SLA and the plurality of available resources inside the at least one main container.
5. The method (500) as claimed in claim 1, wherein the agent interface (308)
informs at least one slice scheduler (300) after a successful creation of the at least one new data plane instance inside the at least one main container.
6. A system (200) for dynamic slice scheduling in a network, the system (200)
comprises:
a processing engine (208) configured to:
determine a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network;
split the determined plurality of VFs based on an association between at least one VF with at least one Network interface card (NIC);
map the plurality of split VFs and the plurality of associated resources inside at least one main container;
receive at least one request for creating at least one new data plane slice;
monitor, by an agent interface (308), the at least one received request;
create, by the agent interface (308), at least one new data plane instance inside the at least one main container for the at least one received request;
a memory (204) configured to store the plurality of resources;

an interface(s) (206) configured to communicate with the processing engine; and
a database (210) coupled with the processing engine (208), wherein the database (210) is configured to store the at least one new data plane.
7. The system (200) as claimed in claim 6, wherein the at least one new data
plane instance is created based on at least one Service level agreement (SLA) and a plurality of available resources inside the at least one main container.
8. The system (200) as claimed in claim 6, wherein the agent interface (308) is
triggered to generate the at least one new data plane instance inside the at least one main container using the plurality of available resources.
9. The system (200) as claimed in claim 7, further comprising spinning
the at least one new data plane instance based on a matching between a plurality of pre-defined parameters in the at least one SLA and the plurality of available resources inside the at least one main container.
10. The system (200) as claimed in claim 6, wherein the agent interface (308) informs at least one slice scheduler (300) after a successful creation of the at least one new data plane instance inside the at least one main container.
11. A user equipment (104) communicatively coupled with a network (106), the coupling comprises steps of:
determining a plurality of Virtual functions (VFs) and a plurality of resources associated with the plurality of VFs in the network;
splitting the determined plurality of VFs based on a correlation between at least one VF with at least one Network interface card (NIC);

mapping the plurality of split VFs and the plurality of associated resources inside at least one main container;
receiving at least one request for creating at least one new data plane slice;
monitoring, by an agent interface (308), the at least one received request; and
creating, by the agent interface (308), at least one new data plane instance inside the at least one main container for the at least one received request.

Documents

Application Documents

# Name Date
1 202321045358-STATEMENT OF UNDERTAKING (FORM 3) [06-07-2023(online)].pdf 2023-07-06
2 202321045358-PROVISIONAL SPECIFICATION [06-07-2023(online)].pdf 2023-07-06
3 202321045358-FORM 1 [06-07-2023(online)].pdf 2023-07-06
4 202321045358-DRAWINGS [06-07-2023(online)].pdf 2023-07-06
5 202321045358-DECLARATION OF INVENTORSHIP (FORM 5) [06-07-2023(online)].pdf 2023-07-06
6 202321045358-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321045358-FORM-26 [05-03-2024(online)].pdf 2024-03-05
8 202321045358-FORM 13 [08-03-2024(online)].pdf 2024-03-08
9 202321045358-AMENDED DOCUMENTS [08-03-2024(online)].pdf 2024-03-08
10 202321045358-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321045358-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321045358-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321045358-ENDORSEMENT BY INVENTORS [13-06-2024(online)].pdf 2024-06-13
14 202321045358-DRAWING [13-06-2024(online)].pdf 2024-06-13
15 202321045358-CORRESPONDENCE-OTHERS [13-06-2024(online)].pdf 2024-06-13
16 202321045358-COMPLETE SPECIFICATION [13-06-2024(online)].pdf 2024-06-13
17 202321045358-CORRESPONDANCE-WIPO CERTIFICATE-14-06-2024.pdf 2024-06-14
18 Abstract1.jpg 2024-07-12
19 202321045358-ORIGINAL UR 6(1A) FORM 26-020924.pdf 2024-09-09
20 202321045358-FORM-9 [16-10-2024(online)].pdf 2024-10-16
21 202321045358-FORM 18A [17-10-2024(online)].pdf 2024-10-17
22 202321045358-FORM 3 [07-11-2024(online)].pdf 2024-11-07
23 202321045358-FER.pdf 2024-12-02
24 202321045358-FORM 3 [23-01-2025(online)].pdf 2025-01-23
25 202321045358-FORM 3 [23-01-2025(online)]-1.pdf 2025-01-23
26 202321045358-Proof of Right [11-04-2025(online)].pdf 2025-04-11
27 202321045358-OTHERS [11-04-2025(online)].pdf 2025-04-11
28 202321045358-MARKED COPY [11-04-2025(online)].pdf 2025-04-11
29 202321045358-FER_SER_REPLY [11-04-2025(online)].pdf 2025-04-11
30 202321045358-DRAWING [11-04-2025(online)].pdf 2025-04-11
31 202321045358-CORRECTED PAGES [11-04-2025(online)].pdf 2025-04-11
32 202321045358-CLAIMS [11-04-2025(online)].pdf 2025-04-11
33 202321045358-ORIGINAL UR 6(1A) FORM 1-170425.pdf 2025-04-21
34 202321045358-US(14)-HearingNotice-(HearingDate-12-09-2025).pdf 2025-09-01
35 202321045358-FORM-26 [09-09-2025(online)].pdf 2025-09-09
36 202321045358-Correspondence to notify the Controller [09-09-2025(online)].pdf 2025-09-09
37 202321045358-Written submissions and relevant documents [26-09-2025(online)].pdf 2025-09-26
38 202321045358-Retyped Pages under Rule 14(1) [26-09-2025(online)].pdf 2025-09-26
39 202321045358-FORM-26 [26-09-2025(online)].pdf 2025-09-26
40 202321045358-Annexure [26-09-2025(online)].pdf 2025-09-26
41 202321045358-2. Marked Copy under Rule 14(2) [26-09-2025(online)].pdf 2025-09-26
42 202321045358-PatentCertificate29-09-2025.pdf 2025-09-29
43 202321045358-IntimationOfGrant29-09-2025.pdf 2025-09-29
44 202321045358-ORIGINAL UR 6(1A) FORM 26-300925.pdf 2025-10-09

Search Strategy

1 SearchstrategyE_26-11-2024.pdf
2 202321045358_SearchStrategyAmended_E_SearchHistoryAE_29-08-2025.pdf

ERegister / Renewals