
System And Method For Catering Services Associated With Subscriber

Abstract: A routing system (200) and method for catering a plurality of services associated with a subscriber is disclosed. The routing system (200) includes an edge policy charging control (PCC) (100) deployed at an edge location. The edge PCC includes an interface for receiving a subscription request from the subscriber, wherein the subscription request is associated with the plurality of services; a compute unit (160) for splitting the plurality of services based on the latency requirement and usage requirement of the plurality of services; and a routing unit (170) for routing the plurality of services between an edge compute and a core compute. FIG. 2


Patent Information

Application #
Filing Date
22 March 2021
Publication Number
51/2022
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
vaibhav.khanna@sterlite.com
Parent Application

Applicants

STERLITE TECHNOLOGIES LIMITED
STERLITE TECHNOLOGIES LIMITED, IFFCO Tower, 3rd Floor, Plot No.3, Sector 29, Gurgaon 122002, Haryana, India

Inventors

1. Sumit Sati
3rd Floor, Plot No. 3, IFFCO Tower, Sector 29, Gurugram, Haryana - 122002
2. Aditya Shrivastava
3rd Floor, Plot No. 3, IFFCO Tower, Sector 29, Gurugram, Haryana - 122002

Specification

TECHNICAL FIELD
[0001] The present disclosure relates to policy and charging control applications, and more particularly, relates to a system and a method for catering services associated with a subscriber.
BACKGROUND
[0002] In general, service providers (for example, streaming platforms, content/media providers, telecom service providers, and the like) capable of providing multiple services (for example, streaming a live match while simultaneously recharging a mobile connection, a live match at a 5G rate while the user requests e-mails at a 3G rate, and the like) are deployed over multiple servers across several data centers globally. Services are typically replicated in each data center in order to improve fault tolerance, limit inter-site traffic and guarantee low latencies to end users.
[0003] While a client is running, it maintains a long-lived network connection to an access point of each service provider. The access point acts as a reverse proxy: its role is to demultiplex all traffic and distribute requests to the appropriate service replicas. Essentially, the access points act as the single point of access to all the services provided by the service provider. Once the client establishes a session with the access point, it needs to communicate with the database to access services such as authentication, storage of files and related metadata, and so on.
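The demultiplexing role of the access point described above can be sketched as follows. This is an illustrative sketch, not from the disclosure: the service names, replica endpoints and the round-robin policy are assumptions made for the example.

```python
# Sketch of an access point acting as a reverse proxy: it demultiplexes
# incoming requests and distributes them to the appropriate service replicas.
from itertools import cycle

class AccessPoint:
    def __init__(self, replicas):
        # replicas: dict mapping service name -> list of replica endpoints
        self._iters = {svc: cycle(eps) for svc, eps in replicas.items()}

    def route(self, service):
        """Pick the next replica for the requested service (round robin)."""
        if service not in self._iters:
            raise KeyError(f"unknown service: {service}")
        return next(self._iters[service])

# Hypothetical deployment with two auth replicas and one storage replica.
ap = AccessPoint({"auth": ["auth-1", "auth-2"], "storage": ["store-1"]})
```

A round-robin policy is only one possible choice; as the passage notes, latency is an appealing load index, so a production proxy would more likely pick the replica with the lowest observed latency.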
[0004] The client never communicates directly with the database of the service provider that contains the services; thus, the access point it connects to is responsible for proxying the client's requests to the appropriate replica of each service. At this stage, a routing decision has to be made for each service request. As the service request inflow increases, the load on the database increases, since all requests are navigating to the same database. Therefore, latency-insensitive traffic delays latency-sensitive traffic. There are two reasons why latency is an appealing load index: in the first place, it has a significant impact on the experience of users/subscribers of the service providers; in the second place, identifying a good load index in a distributed system presents significant challenges due to phenomena that might arise from the interaction of the different system components, such as multi-bottlenecks.
[0005] For example, a prior art reference, US8745239B2, discloses edge-based resource spin-up for cloud computing. In this prior art, the choice/tagging of requests may be based on both proximity/locality and the current utilization level of that proximal location, with the response to the request being sent to the user based on both.
[0006] Further, a prior art reference, US7359955B2, discloses a metadata-enabled push-pull model for efficient low-latency video-content distribution over a network. This prior art relates broadly to computer networks and streaming media objects delivered over computer networks. It relates to efficient techniques, using metadata associated with content, for making copies of content available at various locations inside multiple computer networks in order to provide better quality of service for delivering streaming media objects.
[0007] Further, a prior art reference "US20140379866A1" discloses a server center for hosting low-latency streaming interactive audio/video (A/V).
[0008] Furthermore, a prior art reference, US20160072669A1, discloses at least one long haul network path carried over at least one network, the long haul network path including a virtual network overlay. The system may include at least one network server component configured to connect to the client site network component using the bonded/aggregated connection, the network server component including at least one concentrator element implemented at a network access point to at least one network, the network server component automatically terminating the bonded/aggregated connection and passing data traffic to the network access point to the at least one network.
[0009] In light of the above discussion and in consideration of the prior art, there exists a need for improved techniques that can reduce the load on the database by effectively managing the routing of received requests.
[0010] Any references to methods, apparatus or documents of the prior art are not to be taken as constituting any evidence or admission that they formed, or form part of the common general knowledge.

OBJECT OF THE DISCLOSURE
[0011] A primary object of the present disclosure is to provide a routing system and method for catering a plurality of services associated with a subscriber.
[0012] Another object of the present disclosure is to provide a technique to sense and split a subscription request automatically based on usage and latency.
[0013] Another object of the present disclosure is to effectively route a request for getting a faster response by reducing database load significantly.
SUMMARY
[0014] The present disclosure discloses a routing method and system for splitting a plurality of services associated with a subscriber based on latency and usage requirements. The services are requests initiated by a subscriber for accessing any offer and package as a subscription request at a service level. An edge PCC (Policy and Charging Control) receives the request and routes the requested services towards an edge compute and a core compute based on latency and usage requirements. The edge PCC is an application that enables operators to dynamically control network resources with real-time policies based on service, subscriber, usage, and latency. The offers are received by the subscriber as per the subscriber's interest, and the subscriber can opt for any of the offers and services. After receiving a request from the subscriber, any offer can be categorized, through the application, into four categories based on parameters such as usage and latency. The categories can be defined as High Usage at High Latency, Low Usage at High Latency, High Usage at Low Latency, and Low Usage at Low Latency. In further consideration, the present disclosure discloses a method that can be used to determine whether an offer feature is latency-sensitive or latency-insensitive. Based on the feature behavior, the edge PCC routes latency-sensitive offers to the edge database and latency-insensitive offers to the core database.
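The categorization and routing described above can be sketched minimally as follows. The numeric thresholds separating "low" from "high" usage and latency are illustrative assumptions; the disclosure does not specify them.

```python
# Minimal sketch: tag an offer into one of the four usage/latency categories,
# then route latency-sensitive (low-latency) offers to the edge database and
# latency-insensitive (high-latency) offers to the core database.
USAGE_THRESHOLD_GB = 100     # assumed boundary between low and high usage
LATENCY_THRESHOLD_MS = 50    # assumed boundary between low and high latency

def categorize(usage_gb, latency_ms):
    """Return the offer's category, e.g. 'High Usage at Low Latency'."""
    usage = "High Usage" if usage_gb >= USAGE_THRESHOLD_GB else "Low Usage"
    latency = "High Latency" if latency_ms >= LATENCY_THRESHOLD_MS else "Low Latency"
    return f"{usage} at {latency}"

def route(usage_gb, latency_ms):
    """Pick the target database: edge for latency-sensitive offers,
    core for latency-insensitive offers."""
    return "edge" if latency_ms < LATENCY_THRESHOLD_MS else "core"
```

Note that in this scheme the latency requirement alone decides the target database; usage only refines the category label, matching the four fulfilment rules stated later in paragraph [0027].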

[0015] Other and further aspects and features of the disclosure will be evident from reading the following detailed description of the embodiments, which are intended to illustrate, not limit, the present disclosure.
BRIEF DESCRIPTION OF FIGURES
[0016] Having thus described the disclosure in general terms, reference will now be made to the accompanying figures, wherein:
[0017] FIG. 1 is a schematic diagram illustrating an outline of an edge PCC in accordance with the present disclosure.
[0018] FIG. 2 is a schematic diagram illustrating the outline of the edge PCC of FIG. 1 in connection with an edge database and a core database forming a routing system.
[0019] FIGs. 3a-3b illustrate signalling associated with a subscription flow and call flow procedure associated with a subscription request.
[0020] FIGs. 4a-4b illustrate signalling associated with a subscription flow and call flow procedure associated with the subscription request of Gold Package.
[0021] FIGs. 5a-5b illustrate signalling associated with a subscription flow and call flow procedure associated with the subscription request of package associated with IoT devices.
[0022] FIG. 6 is a flowchart illustrating a method of catering a plurality of services associated with a subscriber.
[0023] It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION
[0024] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present technology. It will be apparent, however, to one skilled in the art that the present technology can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form only in order to avoid obscuring the present technology.
[0025] Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present technology. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
[0026] Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present technology. Similarly, although many of the features of the present technology are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the present technology is set forth without any loss of generality to, and without imposing limitations upon, the present technology.
[0027] The present disclosure provides a method of catering a plurality of services associated with a subscriber. The method includes receiving a subscription request from the subscriber, wherein the subscription request is associated with the plurality of services, and splitting the plurality of services between an edge compute and a core compute based on the latency requirement and usage requirement of the plurality of services. An offer further comprises a set of encoding profiles, where each encoding profile in the set comprises a plurality of encoding parameters related to end-user device capabilities, and the selected version of the content object is sent to the end-user device using the determined delivery of offers and services. An offer includes services, creatives and versions of the content object, selected by the requesting device. Encoding profiles may include a selection of offers/packages to execute on the edge and core databases based on encoding parameters, where the encoding parameters are usage and latency related to the hardware and software capabilities of the requesting device. Splitting the plurality of services between the edge compute and the core compute further comprises one or more of: fulfilling low usage, low latency services from the edge compute; fulfilling low usage, high latency services from the core compute; fulfilling high usage, high latency services from the core compute; and fulfilling high usage, low latency services from the edge compute. Splitting the plurality of services between the edge compute and the core compute further includes determining whether the plurality of services is latency sensitive or latency insensitive.
[0028] Further, the present disclosure discloses a routing system for catering a plurality of services associated with a subscriber. The routing system includes an edge policy charging control (PCC) deployed at an edge location. The edge PCC includes an interface for receiving a subscription request from the subscriber, wherein the subscription request is associated with the plurality of services. The edge PCC further includes a compute unit for splitting the plurality of services based on latency requirement and usage requirement of the plurality of services. The edge PCC further includes a routing unit for routing the plurality of services between an edge compute and a core compute. The compute unit directs the routing unit to fulfil low usage low latency services from the edge compute, fulfil low usage high latency services from the core compute, fulfil high usage high latency services from the core compute and fulfil high usage low latency services from the edge compute. The compute unit is configured to determine whether the plurality of services is latency sensitive or latency insensitive.
[0029] The offer is associated with the plurality of services and the subscription request is in response to the offer exchanged between the subscriber and a service provider. The core compute is a core database and the edge compute is an edge database. The subscription request is initiated by the subscriber. The subscription request comprises one or more of: an application request, a data request, or a file request.
[0030] The Edge PCC is deployed at an edge server, where the edge database stores data selected from a set of content physical properties, content storage locations, content usage terms, content usage rights, content playback duration, content prefix cache status, content network routing cost information and combinations thereof for controlling a status of the response to the offer (or an offer response status).
[0031] Now, referring to the figures to understand the aforementioned features in detail.
[0032] FIG. 1 is a schematic diagram illustrating an outline of the edge PCC 100 in accordance with the present disclosure. The term "edge" refers to an edge network comprising endpoints and the first hop from the endpoints into the "center" or core of a main network. In an enterprise, the endpoints are PCs, including their associated adapters, modems for connecting to carriers, and various connected devices. The edge network also includes Wi-Fi access points, and desktop and wiring closet switches. The Policy and Charging Control (PCC), also known as an integrated PCC, is a policy management function that enables operators to dynamically control network resources with real-time policies based on service, subscriber or usage context. The edge-based PCC or edge PCC 100 can be an application residing at an edge node or at any electronic device operated by a subscriber/user.
[0033] As shown in FIG. 1, in the edge PCC 100, an Application Function (AF) 110 is an element implementing applications that require dynamic policy and/or charging control of traffic plane resources. A Policy and Charging Enforcement Function (PCEF) 120 provides service data flow detection, charging, and policy enforcement of the user plane traffic. Further, a Policy and Charging Rules Function (PCRF) 130 is a separate logical node that sits between the Application Function (e.g., service offering sources/applications on the subscriber device), where services (for example, streaming a live match, telecommunication-based services, data-file requests, and the like) are initiated and service characteristics are negotiated, and the user plane, where the actual service is being provided. The PCRF 130 provides policy and flow-based charging control functions, using subscriber data stored in a Subscription Profile Repository (SPR) 150 (defining the subscription details that the user has subscribed to, for example, a Gold Package subscription for viewing Direct-to-Home (DTH) television services). The PCRF 130 receives service information (e.g., application identifier, type of media, bandwidth, IP address and port number) from the AF 110 over the Rx interface. The Rx interface is used to exchange flow-based charging control information between the Charging Rules Function (CRF) and the Application Function (AF). The PCRF 130 uses this information to install PCC rules into the PCEF 120, which in turn ensures that only authorized media flows associated with the requested services are allowed, and that the correct bandwidth, charging and priority are applied. Policy and charging control (PCC) rules define the treatment to apply to subscriber traffic based on the application being used by the subscriber (for example, Facebook) or based on the Layer 3 and Layer 4 service data flow (SDF) information for the IP flow (for example, the source and destination IP addresses). The PCEF 120 provides real-time charging information to an Online Charging System (OCS) 140.
[0034] The AF 110 may modify session information at any time, for example due to an AF session modification or an internal AF trigger. Modification is achieved by the AF 110 sending an AA-Request (AAR) command to the PCRF 130 over the Rx reference point containing the Media-Component-Description Attribute-Value Pairs (AVPs) with the updated service information, as defined in communication standards. The AA-Request (AAR), which is indicated by setting the Command-Code field to 265 and the 'R' bit in the Command Flags field, is used to request authentication and/or authorization. The AAR command is sent by an AF to the PCRF in order to provide it with the session information. The Media-Component-Description AVP contains service information for a single media component within an AF session, or the AF signalling information. An application-level session is established by an application-level signalling protocol offered by the AF that requires session set-up with an explicit session description before use of the service. The information may be used by the PCRF to determine authorized QoS and IP flow classifiers for bearer authorization and PCC rule selection. The PCRF 130 processes the received service information according to the operator policy and may decide whether the request is accepted or not. If the request is accepted, the PCRF 130 updates the pre-existing service information with the new information. The updated service information may require the PCRF 130 to create, modify or delete the related PCC rules and provide the updated information towards the PCEF 120 over the Gx reference point. The Gx reference point is located between the Policy and Charging Rules Function (PCRF) and the Policy and Charging Enforcement Function (PCEF), and is used for provisioning and removal of PCC rules from the PCRF to the PCEF and for the transmission of traffic plane events from the PCEF to the PCRF, as specified in communication standards. The procedures used to update the authorized QoS for the affected IP-CAN bearer are also specified in communication standards. Currently specified procedures for modification of the service information for PCC provide for the immediate activation, replacement and removal of filter description information at the PCEF 120.
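The AAR-to-PCC-rule flow above can be illustrated with a highly simplified simulation. Only the Diameter AAR command code (265) comes from the text; the class names, field names and the accept-all policy are assumptions of this sketch, not the real Rx/Gx protocol machinery.

```python
# Simplified simulation of the Rx/Gx interaction: the AF sends session
# information (an AAR) to the PCRF, which derives a PCC rule and installs
# it in the PCEF over the Gx reference point.
AAR_COMMAND_CODE = 265  # Diameter AA-Request command code (per the text)

class PCEF:
    def __init__(self):
        self.rules = {}

    def install_rule(self, rule_name, rule):
        # Gx: provisioning of a PCC rule from the PCRF.
        self.rules[rule_name] = rule

class PCRF:
    def __init__(self, pcef):
        self.pcef = pcef

    def handle_aar(self, command_code, media_component):
        if command_code != AAR_COMMAND_CODE:
            return "DIAMETER_COMMAND_UNSUPPORTED"
        # Derive a PCC rule from the media component description (sketch:
        # a real PCRF would also consult operator policy and the SPR).
        rule = {"flow": media_component["flow"],
                "max_bandwidth": media_component["bandwidth"]}
        self.pcef.install_rule(media_component["app_id"], rule)
        return "DIAMETER_SUCCESS"

pcef = PCEF()
pcrf = PCRF(pcef)
```

In the real architecture the PCRF's decision also depends on subscriber data from the SPR 150 and operator policy; the sketch accepts every well-formed AAR purely to show the message direction.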
[0035] In addition to the aforementioned components and functioning of the edge PCC 100, the present disclosure provides provisions for the edge PCC 100 to split a plurality of services associated with a subscriber based on latency and usage requirements. In order to achieve this, the edge PCC 100 further includes/is associated with a compute unit 160 (also referred to as the "latency/usage requirement compute unit 160") and a routing unit 170.
[0036] As can be derived from the aforementioned description, the PCEF 120 can be an access gateway, and the plurality of services can include a request to access a social media application, a request to stream a live sports match at 5G internet speed, a request to update any online applications, and the like. Also, the service request can include, for example, one or more of: an application request, a data request, or a file request.
[0037] In response to exchanging the offers between the subscriber and the service provider, the latency/usage requirement compute unit 160 can be configured to tag the offers based on prospective usage and quality of service. That is, each offer is categorized based on two parameters, i.e., usage and latency, into one of: High Usage at High Latency, Low Usage at High Latency, High Usage at Low Latency, or Low Usage at Low Latency.
[0038] Further, the latency/usage requirement compute unit 160 can be configured to split the plurality of services associated with the offer(s) based on the latency and usage requirements. The services are requests initiated by the subscriber for accessing any offer and package as the subscription request at a service level. For example, the offer can include "Watch Live Cricket Streaming coverage of International cricket matches, series, tournaments and online recharge your mobile phone directly at a click by accessing X link". The subscriber interested in the offer and/or opting for the offer generates/initiates the subscription request (i.e., the user selecting the offer). Further, each offer can include an encoding profile in a set of encoding profiles, where each encoding profile in the set comprises a plurality of encoding parameters related to end-user device capabilities, and the selected version of the content object is sent to the end-user device using the determined delivery of offers and services. An offer includes services, creatives and versions of the content object, selected by the requesting device. Encoding profiles may include a selection of offers/packages to execute on the edge and core databases based on encoding parameters, where the encoding parameters are usage and latency related to the hardware and software capabilities of the requesting device.
[0039] Once the subscriber, as per the interest, selects the offer, the selected offer can be transmitted to the latency/usage requirement compute unit 160. Further, the latency/usage requirement compute unit 160 can be configured to determine whether the plurality of services is latency sensitive or latency insensitive. For example, the subscriber/user may request services such as accessing a real-time online education class, viewing a live cricket match at a 5G connection rate, and loading e-mails at a 3G connection rate. In response to the received requests, the latency/usage requirement compute unit 160 can be configured to prioritize the request for viewing the live match over the request for loading e-mails. Since the live match is on the 5G connection rate (without interruption), this request is latency sensitive, while the request for loading e-mails is latency insensitive.
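The prioritization in this example can be sketched as follows. The rule that real-time services on a 5G rate are latency sensitive while e-mail loading on a 3G rate is not follows the example above; the request record format and the sensitivity heuristic are assumptions of the sketch.

```python
# Sketch: mark requests latency-sensitive or not, and serve sensitive
# requests first, mirroring the live-match vs. e-mail example.
def is_latency_sensitive(request):
    # Heuristic (assumed): real-time services on a high-rate (5G)
    # connection are treated as latency sensitive.
    return request["real_time"] and request["rate"] == "5G"

def prioritize(requests):
    """Order requests so latency-sensitive ones are served first.
    Python's sort is stable, so ties keep their arrival order."""
    return sorted(requests, key=lambda r: not is_latency_sensitive(r))

queue = [
    {"name": "load-emails", "rate": "3G", "real_time": False},
    {"name": "live-match", "rate": "5G", "real_time": True},
]
```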
[0040] The latency/usage requirement compute unit 160 further communicates with the routing unit 170 in order to route these requests (i.e., latency sensitive and latency insensitive) associated with the services between an edge compute network 210 and a core compute network 220 in a routing system 200 (as shown in FIG. 2). The edge compute network 210 is part of a distributed computing topology that brings computation and data storage closer to the subscriber devices where data is being gathered; edge computing refers to compute power moving closer to the network edge, i.e., closer to the subscriber/user. The core compute network 220 connects the various zones within a data center, and to other data centers, with switches and routers. Each of the edge compute network 210 and the core compute network 220 comprises at least one database, namely an edge database 202 and a core database 204 respectively, responsible for providing data storage and data accessing facilities.
[0041] The edge database 202 and the core database 204 store data selected from the set of content physical properties, content storage locations, content usage terms, content usage rights, content playback duration, content prefix cache status, content network routing cost information, and combinations thereof, for controlling an offer response status.
[0042] Referring to FIG. 2, the latency/usage requirement compute unit 160 can be configured to direct the routing unit 170 to fulfil low usage, low latency services from the edge compute network 210; fulfil low usage, high latency services from the core compute network 220; fulfil high usage, high latency services from the core compute network 220; and fulfil high usage, low latency services from the edge compute network 210. Thus, by virtue of the proposed method, low latency subscriptions can be routed to the edge database 202 and high latency subscriptions can be routed to the core database 204 (voice or SMS can go to the core/common database 204 by default). Hence, a request coming to the edge compute network 210 will get a faster response, as the load at the edge database 202 is reduced.

[0043] Considering the above example, where the user receives the offer "Watch Live Cricket Streaming coverage of International cricket matches, series, tournaments and online recharge your mobile phone directly at a click by accessing X link". As per the interest, the user selects the offer. The selected offer is received as a request at the edge PCC 100, and the edge PCC 100 splits the offer based on the usage and latency requirements. That is, the low latency offer (like streaming of cricket matches) will route towards the edge database 202 and the high latency offer (mobile number recharge) will route to the core database 204. The edge PCC 100 computes the behavior of both offers/services as to which one is latency-sensitive and which is latency-insensitive.
[0044] FIGs. 3a-3b illustrate signalling associated with a subscription flow and call flow procedure associated with the subscription request.
[0045] Referring to FIG. 3a, an Application Programming Interface (API) gateway 302 (residing at the electronic device (not shown) operated by the user) transmits the subscription request initiated by the subscriber, or the access gateway 120 of FIG. 1 does so when the subscriber starts consuming the services. The subscription request indicates to subscribe to the "offer with latency sensitive services (LSO)" only. The subscription request is transmitted for processing to the edge PCC 100. The latency/usage requirement compute unit 160 can be configured to tag the offers that are LSO based on the latency and usage requirements of the user/subscriber. The LSO tagged offers are then routed, by the routing unit 170, to the edge compute network 210 comprising the edge database 202. Once the subscription request is registered, an acknowledgement (success) message is transmitted to the subscriber (see steps 1-3 of FIG. 3a).
[0046] Similarly, the API gateway 302 (residing at the electronic device (not shown) operated by the user) transmits another subscription request indicating to subscribe to the "offer with latency insensitive services (LIO)" only. The subscription request is transmitted for processing to the edge PCC 100. The latency/usage requirement compute unit 160 can be configured to tag the offers that are LIO based on the latency and usage requirements of the user/subscriber. The LIO tagged offers are then routed, by the routing unit 170, to the core compute network 220 comprising the core database 204. Once the subscription request is registered, the acknowledgement (success) message is transmitted to the subscriber (see steps 4-6 of FIG. 3a).
[0047] Both the tagged LSO and LIO are then transmitted by the edge PCC 100 to the edge database 202 as a 2-Zone customer for storage, where "2-Zone customer" describes a customer who subscribes to offers with both latency sensitive and latency insensitive services. Tagging the LSO and LIO of the offers defines the prospective usage and quality of service.
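The LSO/LIO tagging and routing for a 2-Zone customer can be sketched as follows. The offer records and the `latency_sensitive` flag are illustrative assumptions; in the disclosure the tag is derived by the compute unit 160 from the latency and usage requirements.

```python
# Sketch: tag each offer in a 2-Zone subscription as LSO or LIO and
# build a routing plan — LSO offers to the edge database, LIO offers
# to the core database.
def split_subscription(offers):
    """Return a routing plan mapping 'edge'/'core' to (tag, name) pairs."""
    plan = {"edge": [], "core": []}
    for offer in offers:
        tag = "LSO" if offer["latency_sensitive"] else "LIO"
        target = "edge" if tag == "LSO" else "core"
        plan[target].append((tag, offer["name"]))
    return plan

# Hypothetical 2-Zone subscription with one offer of each kind.
subscription = [
    {"name": "live-streaming", "latency_sensitive": True},
    {"name": "email-sync", "latency_sensitive": False},
]
```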
[0048] Referring to FIG. 3b, the call flow of the aforementioned registered subscription request (as in FIG. 3a) is illustrated. The gateway (which may be the API gateway, not shown) can detect an event for LSO, i.e., events such as receiving/detecting the services associated with LSO, to the edge PCC 100. The event can be detected by an edge PCC client (not shown) residing at the electronic device operated by the user. The edge PCC client is capable of communicating with the edge database 202 over the edge compute network 210. The event for LSO is signaled to the edge PCC 100, which can be configured to identify the LSO, fetch details associated with the LSO, process the event and route it to the edge database 202. The LSO event response is transmitted back to the gateway (steps 1-3 of FIG. 3b).
[0049] Similarly, in FIG. 3b, the gateway can detect the event for LIO, i.e., events such as receiving/detecting the services associated with LIO, to the edge PCC 100. The event can be detected by an edge PCC client (not shown) residing at the electronic device operated by the user. The edge PCC client is capable of communicating with the edge database 202 over the edge compute network 210. The event for LIO is signaled to the edge PCC 100, which can be configured to identify the LIO, fetch details associated with the LIO, process the event and route it to the edge database 202. The LIO event response is transmitted back to the gateway (steps 4-6 of FIG. 3b).
[0050] FIGs. 4a-4b illustrate signalling associated with a subscription flow and call flow procedure associated with the subscription request of Gold Package.

[0051] For example, the information associated with the Gold Package comprises: Monetary Balance - 1000$, HD Video Package - 1000GB quota @ 1Gbps, and Connected Cars Package. The package splits on the basis of usage and latency. Low latency and usage services: Monetary Balance - 1000$ and Connected Cars Package; high latency and usage service: HD Video Package - 1000GB quota @ 1Gbps.
With the above subscriber information, the edge PCC 100 provisions the subscriber's 1000$ monetary balance and connected cars package subscription in the edge database 202. Further, the edge PCC 100 provisions the HD video package subscription at the core database 204. As the HD video package is where the subscriber will use a large amount of data, the edge PCC 100 provides a large slice to the subscriber so that network traffic will be less. The connected cars package will be stored in the edge database 202 (steps 1-5 of FIG. 4a).
[0052] Referring to FIG. 4b, the call flow of the aforementioned registered subscription request (as in FIG. 4a) is illustrated. Once the API gateway 302 initiates the request as an initial request, the edge PCC 100 can be configured to look up the edge database 202, as initially all the subscriptions stored in the edge database 202 will be required. Further, the edge PCC 100 can be configured to look up the core database 204 as well to know all the subscriptions, and transmits the lookup success response. Similarly, when the API gateway 302 initiates the request for HD video, the edge PCC 100 fetches information from the core database 204 only, as the request is for a high latency package, and the HD video request success response message is transmitted back to the API gateway. Similarly, when the API gateway initiates the request for connected cars, the edge PCC 100 fetches information from the edge database 202 only, as the request is for a low latency package, and the connected cars request success response message is transmitted back to the API gateway (steps 1-10 of FIG. 4b).
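The Gold Package call flow can be sketched with the database contents mirroring the example above. Representing the two databases as plain dictionaries is an assumption made purely for illustration.

```python
# Sketch of the Gold Package lookup flow: the initial request gathers
# subscriptions from both databases; later requests hit only the database
# that holds the relevant package (edge for low latency, core for high).
EDGE_DB = {"monetary-balance": "1000$", "connected-cars": "car package"}
CORE_DB = {"hd-video": "1000GB quota @ 1Gbps"}

def initial_lookup():
    """Initial request: all subscriptions from edge and core databases."""
    return {**EDGE_DB, **CORE_DB}

def fetch(subscription):
    """Subsequent request: query only the database holding the package."""
    if subscription in EDGE_DB:   # low latency packages live at the edge
        return ("edge", EDGE_DB[subscription])
    if subscription in CORE_DB:   # high latency packages live in the core
        return ("core", CORE_DB[subscription])
    raise KeyError(subscription)
```

This captures the load-reduction argument of the disclosure: after the initial lookup, the HD video request never touches the edge database, and the connected cars request never touches the core database.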
[0053] FIGs. 5a-5b illustrate the signalling associated with a subscription flow and the call flow procedure associated with a subscription request for a package associated with IoT devices.

[0054] In this scenario, an enterprise subscriber opts for a package for Internet-of-Things (IoT) devices, that is, for remote station monitoring. Hence, the edge PCC 100 categorizes the package as low usage and high latency, as the remote stations' information will be sent by the IoT devices at intervals.
[0055] Referring to FIG. 5a, the API gateway 302 transmits a request, i.e., provisioning for an IoT offer, to the edge PCC 100. The edge PCC 100 processes the request, routes the package subscription information to the core database 204, and transmits the request success response to the API gateway (steps 1-3 of FIG. 5a).
[0056] Referring to FIG. 5b, the API gateway initiates the request to the edge PCC 100, where the edge PCC 100 looks up the edge database 202, as initially all the subscriptions will be required. Further, the edge PCC 100 will look up the core database 204 as well to know all the subscriptions, and transmits the request success response message back to the API gateway. Further, the API gateway initiates the update request for the IoT package with the edge PCC 100. The edge PCC 100 will fetch the information from the core database 204 only, as the request is for a high latency package, and transmits the request success response message back to the gateway (steps 1-7 of FIG. 5b).
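The placement decision for the IoT package can be reduced to a one-line rule: in all four usage/latency combinations recited later in claim 5, only the latency requirement determines the tier. The function name below is an illustrative assumption.

```python
# Illustrative sketch: mapping a package's latency requirement to a
# database tier.  Low-latency packages live in edge database 202;
# latency-tolerant packages (such as the IoT remote-monitoring offer,
# which reports only at intervals) live in core database 204.

def classify(latency):
    """Return the database tier for a given latency requirement."""
    return "edge" if latency == "low" else "core"

# The enterprise IoT package is low usage and high latency, so it is
# provisioned at the core, as in steps 1-3 of FIG. 5a.
iot_tier = classify("high")
```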
[0057] As the edge database 202 is deployed closer to the user location, all the requests routed towards the edge network 210 are treated as latency-sensitive requests, and latency-insensitive requests are routed towards the core database 204. As the requests towards the edge database 202 are low latency requests, the response will be quick. Also, splitting the subscriptions based on required latency makes the system performance predictable.
[0058] FIG. 6 is a flowchart 600 illustrating a method of catering the plurality of services associated with the subscriber.
[0059] As discussed with reference to FIG. 1, at step 602, the edge PCC 100, through the interface, can be configured to receive the subscription request from the subscriber, where the subscription request is associated with the plurality of services.

[0060] At step 604, the edge PCC 100 can be configured to split the plurality of services between the edge database 202 and the core database 204 based on latency requirement and usage requirement of the plurality of services.
[0061] The various actions, acts, blocks, steps, or the like in the flow chart 600 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the present disclosure.
[0062] It will be understood that the devices and the databases referred to in the previous sections are not necessarily utilized together in the method or system of the embodiments. Rather, these devices are merely exemplary of the various devices that may be implemented within a computing device or the server device, and can be implemented in another exemplary device, and other devices as appropriate, that can communicate via a network to the exemplary server device.
[0063] It will be appreciated that several of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.
[0064] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. It will be appreciated that several of the above disclosed and other features and functions, or alternatives thereof, may be combined into other systems, methods, or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may subsequently be made by those skilled in the art without departing from the scope of the present disclosure as encompassed by the following claims.
[0065] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes

described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0066] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid-state RAM).
[0067] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0068] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general-purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another

embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0069] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0070] Conditional language used herein, such as, among others, "can," "may," "might," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments

necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list.
[0071] Disjunctive language such as the phrase "at least one of X, Y, Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0072] The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present technology to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstance may suggest or render expedient, but such are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present technology.

CLAIMS

We Claim:

1. A method of catering a plurality of services associated with a subscriber,
the method comprising:
receiving, by an edge policy charging control (PCC) (100), a subscription request from the subscriber, wherein the subscription request is associated with the plurality of services; and
splitting, by the edge-PCC (100), the plurality of services between an edge compute (202) and a core compute (204) based on latency requirement and usage requirement of the plurality of services.
2. The method as claimed in claim 1, wherein an offer is associated with the plurality of services and the subscription request is in response to the offer exchanged between the subscriber and a service provider.
3. The method as claimed in claim 2, wherein each encoding profile in a set of encoding profiles comprises a plurality of encoding parameters related to end-user device capabilities, and sends the selected version of the content object to the end-user device using the determined delivery of offers and services.
4. The method as claimed in claim 1, wherein the subscription request is initiated by the subscriber.
5. The method as claimed in claim 1, wherein splitting the plurality of services between the edge compute and the core compute further comprises one or more of:
fulfilling low usage low latency services from the edge compute;
fulfilling low usage high latency services from the core compute;
fulfilling high usage high latency services from the core compute; and
fulfilling high usage low latency services from the edge compute.

6. The method as claimed in claim 1, wherein splitting the plurality of
services between the edge compute (202) and the core compute (204) further
comprising:
determining whether the plurality of services is latency sensitive or latency insensitive.
7. The method as claimed in claim 1, wherein the core compute is a core database (204) and the edge compute is an edge database (202).
8. A routing system (200) for catering a plurality of services associated with a subscriber, wherein the routing system (200) includes an edge policy charging control (PCC) (100) deployed at an edge location, the edge PCC comprising:
an interface for receiving a subscription request from the subscriber, wherein the subscription request is associated with the plurality of services;
a compute unit (160) for splitting the plurality of services based on latency requirement and usage requirement of the plurality of services; and
a routing unit (170) for routing the plurality of services between an edge compute and a core compute.
9. The routing system (200) as claimed in claim 8, wherein an offer is associated with the plurality of services and the subscription request is in response to the offer exchanged between the subscriber and a service provider.
10. The routing system (200) as claimed in claim 8, wherein the subscription request is initiated by the subscriber.
11. The routing system (200) as claimed in claim 8, wherein the compute unit (160) directs the routing unit (170) to:
fulfil low usage low latency services from the edge compute;
fulfil low usage high latency services from the core compute;
fulfil high usage high latency services from the core compute; and
fulfil high usage low latency services from the edge compute.
12. The routing system (200) as claimed in claim 8, wherein the compute unit (160) is configured to determine whether the plurality of services is latency sensitive or latency insensitive.
13. The routing system (200) as claimed in claim 8, wherein the core compute is a core database (204) and the edge compute is an edge database (202).
14. The routing system (200) as claimed in claim 8, wherein the subscription request comprises one or more of: an application request, a data request, or the like.
15. The routing system (200) as claimed in claim 8, wherein the edge PCC (100) is deployed at an edge server, and wherein the edge database stores data selected from a set of content physical properties, content storage locations, content usage terms, content usage rights, content playback duration, content prefix cache status, content network routing cost information, and combinations thereof, for controlling an offer response status.

Documents

Application Documents

# Name Date
1 202111012129-STATEMENT OF UNDERTAKING (FORM 3) [22-03-2021(online)].pdf 2021-03-22
2 202111012129-POWER OF AUTHORITY [22-03-2021(online)].pdf 2021-03-22
3 202111012129-FORM 1 [22-03-2021(online)].pdf 2021-03-22
4 202111012129-DRAWINGS [22-03-2021(online)].pdf 2021-03-22
5 202111012129-DECLARATION OF INVENTORSHIP (FORM 5) [22-03-2021(online)].pdf 2021-03-22
6 202111012129-COMPLETE SPECIFICATION [22-03-2021(online)].pdf 2021-03-22