
Method And System For Deploying Intelligent Edge Cluster Model

Abstract: [0001] The present disclosure discloses a method for deploying an intelligent edge cluster model. The intelligent edge cluster model includes a plurality of edge nodes (102a-102e) and a master controller (310). Each of the plurality of edge nodes (102a-102e) and the master controller (310) has corresponding one or more resources. Each of the one or more resources corresponding to the plurality of edge nodes (102a-102e) and the master controller (310) combine to form a virtual resource pool. The method includes checking, by the master controller (310), an application requirement and at least one key performance indicator at a first edge node from the plurality of edge nodes (102a-102e). Further, the method includes dynamically assigning, by the master controller (310), a first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node.


Patent Information

Application #
Filing Date
19 August 2020
Publication Number
08/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

STERLITE TECHNOLOGIES LIMITED, IFFCO Tower, 3rd Floor, Plot No. 3, Sector 29, Gurgaon 122002, Haryana, India

Inventors

1. Puneet Kumar Agarwal
3rd Floor, Plot No. 3, IFFCO Tower, Sector 29, Gurugram, Haryana - 122002

Specification

DESC:TECHNICAL FIELD
[0001] The present disclosure relates to a wireless communication system, and more specifically relates to a method and a system for deploying an intelligent edge cluster model in the wireless communication system. The present application is based on and claims priority from an Indian Application Number 202011035654 filed on 19th August 2020, the disclosure of which is hereby incorporated by reference herein.

BACKGROUND
[0002] Due to the increasing demand for latency-sensitive and bandwidth-hungry applications, there is a need to deploy a near-end edge network. The near-end edge network may serve and fulfill the requirements of highly demanding applications in an effective way from its nearest possible coordinates. The demands of users can be served by both wireless and wireline networks, as per their availability, so that a multi-service near-end edge network can be deployed to support fixed and mobile user requirements seamlessly. In order to serve the dynamic behaviour and specific demands of applications, virtualization and cloud computing are very effective, and a number of standards bodies and open communities are working in the same direction to build a framework for edge sites so that multi-access computing can be adopted and served in an effective manner.
[0003] However, the biggest challenge for a service provider is to determine the right and optimum set of physical resources that they can deploy at near-end edge sites as per realized and practical application demands, rather than futuristic and predicted requirements. Furthermore, there is a need to effectively and dynamically build/fulfill edge infrastructure requirements based on business triggers/requirements rather than on technological progression.
[0004] US20200145337A1 discloses various approaches for implementing platform resource management. In an edge computing system deployment, an edge computing device includes processing circuitry coupled to a memory. The processing circuitry is configured to obtain, from an orchestration provider, a Service Level Objective (SLO) (or a Service Level Agreement (SLA)) that defines usage of an accessible feature of the edge computing device by a container executing on a virtual machine within the edge computing system. A computation model is retrieved based on at least one key performance indicator (KPI) specified in the SLO. The defined usage of the accessible feature is mapped to a plurality of feature controls using the retrieved computation model. The plurality of feature controls is associated with platform resources of the edge computing device that are pre-allocated to the container. The usage of the platform resources allocated to the container is monitored using the plurality of feature controls.
[0005] CN111327651A discloses a resource downloading method, a resource downloading device, an edge node, and a storage medium, and relates to the technical field of the Internet of things. According to the method and the device, resources are shared among all edge nodes of the same local area network: when any edge node needs to download a resource, the resource can be downloaded from other edge nodes of the local area network, achieving a near-download function. Compared with downloading the resource from the cloud, the network overhead is greatly reduced, the network time delay is reduced, and the resource downloading efficiency is improved. Meanwhile, in a stably running system, the edge nodes can download resources without maintaining communication with the cloud through the Internet, so that the performance overhead of the edge nodes is greatly reduced.
[0006] Thus, it is desired to address the above mentioned disadvantages or other shortcomings or at least provide a useful alternative.
[0007] Any references to methods, apparatus, or documents of the prior art are not to be taken as constituting any evidence or admission that they formed, or form part of the common general knowledge.

OBJECT OF THE DISCLOSURE
[0008] A principal object of the present disclosure is to provide a method and a system for deploying an intelligent edge cluster model in a wireless communication system.
[0009] Another object of the present disclosure is to provide dynamic sharing and allocation of resources of an edge node by a master edge node to a user application in a local edge cluster based on application requirements and real-time edge node key performance indicators (KPIs).
[0010] Another object of the present disclosure is to effectively and dynamically build/fulfill edge infrastructure requirements based on business triggers/requirements.
[0011] Another object of the present disclosure is to effectively and dynamically build/fulfill edge infrastructure requirements based on power, space, and ambient environmental constraints at edge site locations.
[0012] Another object of the present disclosure is to effectively and dynamically build/fulfill edge infrastructure requirements with limited support of technical equipment.
[0013] Another object of the present disclosure is to effectively and dynamically build/fulfill edge infrastructure requirements without deploying high energy consumption systems/equipment at the edge site locations.
[0014] Another object of the present disclosure is to provide a dynamic and adaptive edge infrastructure. The dynamic and adaptive edge infrastructure can be accessed across an edge network to serve the dynamic and challenging service demands.
[0015] Another object of the present disclosure is to realize and justify the cost per bit per near-end edge node investment by a service provider.

SUMMARY
[0016] Accordingly, the present disclosure provides a method for deploying an intelligent edge cluster model. The intelligent edge cluster model includes a plurality of edge nodes and a master controller. Each of the plurality of edge nodes and the master controller has corresponding one or more resources, and each of the one or more resources corresponding to the plurality of edge nodes and the master controller combine to form a virtual resource pool. The virtual resource pool is capable of fetching the one or more resources from any of the plurality of edge nodes and the master controller. The method includes checking, by the master controller, an application requirement and at least one key performance indicator at a first edge node from the plurality of edge nodes. Further, the method includes dynamically assigning, by the master controller, a first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node.
[0017] Further, the method includes instructing, by the master controller, one or more commands to another edge node in the intelligent edge cluster model for assigning of one or more resources to the first edge node.
[0018] Alternatively, dynamically assigning, by the master controller, the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node includes assigning the first resource corresponding to a second edge node in the edge cluster, wherein the second edge node has more resources than are required by an application executed at the first edge node.
[0019] Alternatively, dynamically assigning, by the master controller, the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node includes assigning the first resource from the nearest edge node to the first edge node when the first edge node has a pre-defined latency requirement, wherein the nearest edge node is identified by the master controller based on the application requirement at the first edge node and one or more KPIs of the nearest edge node.
[0020] Alternatively, dynamically assigning, by the master controller, the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node includes assigning the first resource to the first edge node, wherein the first resource corresponds to the one or more resource associated with the master controller in the intelligent edge cluster model.
[0021] Further, the method includes dynamically assigning a second resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node. The first resource corresponds to the one or more resource associated with a second edge node and the second resource corresponds to the one or more resource associated with a third edge node.
[0022] The KPIs include one or more of power, space, time, and network links associated with each of the plurality of edge nodes.
[0023] The application requirement includes one or more of bandwidth, latency and scalability.
[0024] The one or more resources includes one or more of physical resources, functions, applications, and virtual machines.
[0025] The method further comprises determining, by the master controller, that the application requirement and the key performance indicator at the first edge node from the plurality of edge nodes are not met using the first resource. Further, the method comprises sending, by the master controller, a request to assign one or more resources to a service orchestration entity based on the determination. The request comprises the application requirement and at least one key performance indicator. Further, the method comprises dynamically assigning, by the master controller, the one or more resources from the service orchestration entity based on the request.
[0026] Alternatively, dynamically assigning, by the master controller, the one or more resource from the service orchestration entity comprises reallocating the edge node virtually in a second edge cluster network by the service orchestration entity.
[0027] Alternatively, dynamically assigning, by the master controller, the one or more resource from the service orchestration entity includes identifying another intelligent edge cluster model or a second edge cluster network to meet the application requirement and the one or more key performance indicators at the first edge node, and dynamically assigning, by the master controller, the one or more resource from the second edge cluster network through the service orchestration entity.
[0028] Accordingly, the present disclosure provides a master controller for deploying an intelligent edge cluster model. The intelligent edge cluster model comprises a plurality of edge nodes and a master controller. Each of the plurality of edge nodes and the master controller has corresponding one or more resources. Each of the one or more resources corresponding to the plurality of edge nodes and the master controller combine to form a virtual resource pool. The virtual resource pool is capable of fetching the one or more resources from any of the plurality of edge nodes and the master controller. The master controller includes a processor coupled with a memory. The processor is configured to check an application requirement and at least one key performance indicator at a first edge node from the plurality of edge nodes. The processor is configured to dynamically assign a first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node.
[0029] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.

BRIEF DESCRIPTION OF FIGURES
[0030] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various drawings. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0031] FIG. 1 is an example illustration of multi-service edge cluster connectivity architecture.
[0032] FIG. 2 is an example illustration of a node reassignment framework from one cluster to another cluster.
[0033] FIG. 3 illustrates a block diagram of a cluster master edge node.
[0034] FIG. 4 is a flow chart illustrating a method for deploying an intelligent edge cluster model.
[0035] FIG. 5 is an example flow chart illustrating a method for managing and controlling a dynamic edge node participation and edge cluster infrastructure allocation by the cluster master edge node.
[0036] FIG. 6 is an example flow chart illustrating a method for dynamically selecting an edge node from a plurality of the edge nodes.
[0037] FIG. 7 is an example flow chart illustrating a method for joining a new edge node into a cluster network.
[0038] FIG. 8 is an example flow chart illustrating a method for handling resource requirements in the multi-service edge cluster connectivity architecture.

DETAILED DESCRIPTION
[0039] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiment of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced with or without these specific details. In other instances, well known methods, procedures, and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0040] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0041] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents, and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0042] The present disclosure achieves a method for deploying an intelligent edge cluster model. The intelligent edge cluster model includes a plurality of edge nodes and a master controller. Each of the plurality of edge nodes and the master controller has corresponding one or more resources, and each of the one or more resources corresponding to the plurality of edge nodes and the master controller combine to form a virtual resource pool. The virtual resource pool is capable of fetching the one or more resources from any of the plurality of edge nodes and the master controller. The method includes checking, by the master controller, an application requirement and at least one key performance indicator at a first edge node from the plurality of edge nodes. Further, the method includes dynamically assigning, by the master controller, a first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node.
[0043] Referring now to the drawings, and more particularly to FIGS. 1 through 8.
[0044] FIG. 1 is an example illustration of a multi-service edge cluster connectivity architecture (1000). The multi-service edge cluster connectivity architecture (1000) includes a plurality of edge nodes (102a-102e) and a cluster master edge node (104). The cluster master edge node (104) includes a master controller (310). The cluster master edge node (104) may be selected from any of the plurality of edge nodes (102a-102e). The cluster master edge node (104) may be the one of the plurality of edge nodes (102a-102e) having a user-preferred combination of space, power, and ambient temperature. A user can select an edge node as the cluster master edge node (104) from any of the plurality of edge nodes (102a-102e), based on user preference or computational requirement. The cluster master edge node (104) may comprise a master controller (310) which may provide a plurality of control functions to the cluster master edge node (104). In another example, any of the plurality of edge nodes (102a-102e) may have a master controller, which provides controlling functions when that edge node is selected as the cluster master edge node (104). In another example, the cluster master edge node (104) may be randomly selected from the plurality of edge nodes (102a-102e). Upon selecting one edge node as the cluster master edge node (104), all remaining edge nodes may become host nodes. In another example, the terms cluster master edge node (104) and master controller (310) may be used interchangeably. The operations and functions of the master controller (310) are explained in FIG. 3. The edge node (102a-102e) is a generic way of referring to any edge device, edge server, or edge gateway on which edge computing can be performed. The edge node (102a-102e) is also called an edge computing unit. Further, the edge nodes (102a-102c) communicate with each other to form an edge cluster (106a). The edge cluster (106a) is in a ring arrangement.
In another example, the edge cluster (106a) is in a hub arrangement. In another example, the edge cluster (106a) may form any shape based on user requirements. The edge nodes (102a, 102c, 102d, and 102e) communicate with each other to form another edge cluster (106b). The communication among the edge nodes (102a-102e) is established based on a wired network and/or wireless network. Further, the cluster master edge node (104) communicates with the edge node (102a and 102d). The cluster master edge node (104) acts as a brain of the multi-service edge cluster connectivity architecture (1000) that assists an intelligent and dynamic assignment of resources in the cluster network and takes care of flexible utilization of resources within the cluster of edge nodes (102a-102e) and the cluster master edge node (104).
[0045] The cluster master edge node (104) would be at a customer point of presence (PoP), a central office, or any aggregation site location which would have adequate space, power, and environmental conditions to host the access infrastructure and can also equip the other automation and orchestration functionalities. The edge nodes (102a-102e) would be included at the time of cluster formation, and an edge node (102a-102e) can also participate in the cluster on a run-time basis. This participation would be on a dynamic basis. Upon adding a new edge node to the network cluster, it may be checked whether the newly added edge node is better suited as the cluster master edge node (104), based on the edge node KPIs, user preference, or computational requirements. The newly added edge node may be dynamically selected as the cluster master edge node if found better suited than the existing cluster master edge node (104).
[0046] In the multi-service edge cluster connectivity architecture (1000), each edge node (near edge nodes (102a-102e) and master edge node (104)) is associated with specific physical resources, which together form a virtual resource bank in the edge cluster. The cluster master edge node (104) checks the application requirement (bandwidth, latency and scalability) and real time KPIs at the edge node (e.g., edge node health, physical infrastructure - power, space and temperature, network links), based on which the resources (e.g., physical resources, functions, application, virtual machines) from the edge nodes (102a-102e) are dynamically assigned to the application by utilizing the virtual resource bank in the multi-service edge cluster connectivity architecture (1000). The function may be, for example, but not limited to, a network function, a service virtualization function, a resource management function, a node management function. The application may be, for example, but not limited to, a virtual reality (VR) application, an enterprise application, a content delivery application, a gaming application, and a networking application or the like.
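The KPI-driven assignment described above may be illustrated with a minimal, non-limiting sketch. The node attributes, resource units, and selection rule below are assumptions introduced purely for illustration and are not part of the disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeNode:
    """Hypothetical edge node with KPIs as tracked by the master controller (310)."""
    name: str
    cpu_free: int        # free vCPUs contributed to the virtual resource pool
    latency_ms: float    # latency to the requesting application
    healthy: bool = True # aggregate health KPI (power, space, temperature)

@dataclass
class AppRequirement:
    """Application requirement, here reduced to compute and latency."""
    cpu: int
    max_latency_ms: float

def assign_resource(pool: List[EdgeNode], req: AppRequirement) -> Optional[EdgeNode]:
    """Pick the lowest-latency healthy node in the virtual resource pool that
    meets the requirement; return None if the cluster cannot serve it (in which
    case the master controller would escalate to the service orchestration entity)."""
    candidates = [n for n in pool
                  if n.healthy and n.cpu_free >= req.cpu
                  and n.latency_ms <= req.max_latency_ms]
    if not candidates:
        return None
    best = min(candidates, key=lambda n: n.latency_ms)
    best.cpu_free -= req.cpu  # resource is now allocated out of the pool
    return best
```

On this toy model, a workload needing 4 vCPUs within 10 ms would be placed on the nearest node with spare capacity, and the virtual resource pool would be debited accordingly.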
[0047] Alternatively, the KPIs are determined based on one or more bandwidth associated with the edge node (102a-102e), the latency associated with the edge node (102a-102e), scalability, compute resources and Data Path (DP) performance of the edge node (102a-102e), a quality of service (QoS) associated with the edge node (102a-102e), user quality of experience associated with the edge node (102a-102e), an optimum resource utilization associated with the edge node (102a-102e), a network characteristics degradation associated with the edge node (102a-102e), an underlay or overlay network services, business demands, and overall SLA requirements. The compute resources and DP performance may be, for example, but not limited to, a Kernel data path (DP), a user space DP, a Fast Data Path, Single-root input/output virtualization, and a hardware offloaded DP.
[0048] Alternatively, the application requirement at the edge node may include application specific requirements such as scalability, latency, and bandwidth associated with the application. The application requirement may be corresponding to user application at the edge node which serves the user by providing one or more resources for facilitating the application. The application requirement may be corresponding to application specific key performance indicators such as user quality of experience, quality of service and user required service level agreements (SLAs).
[0049] The operations and functions of the edge cluster (106a-106b) are monitored and controlled by the cluster master edge node (104). The edge cluster (106a-106b) includes a resource pool and a storage policy based on the service provider requirements or third party requirements. In some scenarios, the edge cluster (106a-106b) is created by an administrator of the service provider and configured in the multi-service edge cluster connectivity architecture (1000). The cluster master edge node (104) can balance organization edge services between the edge clusters (106a-106b). The edge clusters (106a-106b) can use a specific storage policy that is originated by the service provider.
[0050] The cluster master edge node (104) can be used for dynamic sharing and allocation of edge node resources to a user application in a local edge cluster based on application requirements and real-time edge node key performance indicator(s) (KPIs).
[0051] Alternatively, the cluster master edge node (104) checks the application requirements or KPIs of the UE application. The KPIs of each edge node in the cluster include the edge node health related information (e.g., power, space and temperature requirements) and physical infrastructure status. The resource allocation and sharing by the cluster master edge node (104) are decided based on the application requirement and edge node details.
[0052] The cluster master edge node (104) is configured to dynamically select the edge nodes (102a-102e). The participation of the edge nodes (102a-102e) is decided on an overall minimum resource requirement. The overall minimum resource requirement of each edge node (102a-102e) is stored in a cluster network (not shown) or the cluster master edge node (104). The cluster network may be a self-adaptive edge cluster-based network. The overall minimum resource requirement of each of the edge nodes (102a-102e) is obtained by using various methods (e.g., past infrastructure usage trends or the like). The past infrastructure usage trends are monitored and learned by a machine learning model. The machine learning model may be, for example, but not limited to, a linear regression model, a logistic regression model, a decision tree model, or a random forest model. The cluster network has to maintain an optimum number of the edge nodes (102a-102e) in the edge clusters (106a and 106b). The optimum number of the edge nodes is determined based on key parameters. The key parameters may include bandwidth, scalability, and latency requirements of one or more users in the edge cluster network. The optimum number of the edge nodes (102a-102e) in the cluster network enables a fast response to any request received from an application (not shown) executed on an electronic device/user equipment (not shown). The electronic device can be, for example, but not limited to, a smart phone, a virtual reality device, an immersive system, a smart watch, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, or an Internet of Things (IoT) device.
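The determination of an optimum node count against the key parameters above can be sketched as follows; the per-node capacity figures are hypothetical values chosen for illustration, not values from the disclosure:

```python
import math

def optimum_node_count(total_bandwidth_gbps: float, total_sessions: int,
                       node_bandwidth_gbps: float = 10.0,
                       node_sessions: int = 500) -> int:
    """Illustrative sizing rule: keep enough edge nodes in the cluster to cover
    the aggregate bandwidth demand and the aggregate session (scalability)
    demand, whichever binds harder. Per-node capacities are assumptions."""
    by_bandwidth = math.ceil(total_bandwidth_gbps / node_bandwidth_gbps)
    by_sessions = math.ceil(total_sessions / node_sessions)
    return max(by_bandwidth, by_sessions, 1)  # at least one node in the cluster
```

For instance, 35 Gbps of aggregate demand across 1200 sessions would, under these assumed per-node capacities, require four edge nodes because bandwidth binds harder than session count.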
[0053] Further, the edge nodes (102a-102e) may be added to the cluster network if there is additional infrastructure available beyond a defined limit (i.e., threshold) of the minimum cluster infrastructure; also, a cluster border edge node may be transferred to other cluster(s) if there is a scarcity of resources (the transfer of the edge node would be decided on a use-case basis, e.g., for less latency-sensitive applications). The threshold of minimum cluster infrastructure is defined by the service provider.
[0054] Further, the participation of the edge nodes (102a-102e) in the cluster network may be dynamic and on run-time basis as well. If a new edge node is installed in the infrastructure, then the new edge node will send a request to the cluster master edge node (104). If the cluster master edge node (104) accepts the request, then the new edge node will be added to the cluster based on the acceptance (as shown in FIG. 7).
[0055] For instance, if the new edge node is installed in the infrastructure, then the new edge node will send requests to a first cluster master edge node and a second cluster master edge node. If the first cluster master edge node accepts the request, then the new edge node joins the cluster based on the acceptance of the first cluster master edge node. In an example, if a new edge node is installed, then the new edge node will send the requests to the nearby master edge cluster nodes. Whenever any edge node joins it will get broadcast addresses of cluster master nodes, which are nearby to that edge node. The edge node joins the cluster of whichever master cluster node responds first.
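The join procedure above (broadcast addresses of nearby master nodes, join whichever responds first) can be sketched as follows. The transport is abstracted into a callback, and modeling "responds first" as the smallest response delay is an assumption of the sketch:

```python
from typing import Callable, Iterable, Optional

def choose_cluster(nearby_masters: Iterable[str],
                   send_join_request: Callable[[str], Optional[float]]) -> Optional[str]:
    """A new edge node sends join requests to the nearby cluster master nodes
    (addresses learned via broadcast) and joins the cluster of the master that
    responds first. `send_join_request` stands in for the real transport: it
    returns the response delay in ms, or None when the master rejects or ignores
    the request."""
    best_master: Optional[str] = None
    best_delay = float("inf")
    for master in nearby_masters:
        delay = send_join_request(master)
        if delay is not None and delay < best_delay:
            best_master, best_delay = master, delay
    return best_master
```

A node that receives no acceptance from any nearby master would remain outside the cluster until a later attempt.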
[0056] Alternatively, the edge cluster-based network performs dynamic sharing and intelligent optimization of the resources of the edge nodes (102a-102e) that assigns a right set of a virtualized infrastructure to a workload using the cluster master edge node (104). The workload is controlled by determining active edge nodes (102a-102e) in a predefined time using the cluster master edge node (104). The predefined time is set by the service provider. Alternatively, the cluster master edge node (104) is the intelligent node, which performs the calculations and comparisons of edge node KPIs. The cluster master edge node (104) analyzes the UE application requirement (based on its KPIs) and allocates resources of edge nodes dynamically such that the QoS is maintained at UE, and simultaneously resources of all the edge nodes are utilized in an optimum manner.
[0057] When one of the edge nodes (102a-102e) is running short of storage capacity, the respective edge node (102a-102e) can send a request to the cluster master edge node (104) to fulfill temporary storage requirements. The cluster master edge node (104) checks a cluster storage bank (not shown) and assigns the best suitable storage infrastructure to the requesting edge node (102a-102e). The cluster storage bank stores the resources. In intelligent content data networking, the edge nodes (102a-102e) maintain caching segments to fulfill demand for popular content with quick response times, which in turn saves backhaul bandwidth by not demanding the content from the regional storage servers and/or core DC storage servers every time. If particular edge nodes (102a-102e) observe that some content is used frequently by their users, those edge nodes (102a-102e) will cache that content at their locations. In case of unavailability of storage, the edge nodes (102a-102e) can demand the storage from the cluster master edge node (104), which, in turn, will provide the necessary storage infrastructure from its nearest possible edge coordinates.
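The temporary storage fulfilment above can be sketched as follows. Representing the cluster storage bank as a mapping of node to free gigabytes, and granting from the donor with the most spare capacity, are assumptions of the sketch rather than rules from the disclosure:

```python
from typing import Dict, Optional

def fulfill_storage_request(storage_bank: Dict[str, int],
                            requester: str, gb_needed: int) -> Optional[str]:
    """Master-controller sketch: check the cluster storage bank (node -> free GB)
    and grant the request from the donor node with the most spare capacity,
    excluding the requester itself. Returns the donor, or None when the cluster
    bank cannot cover the demand (escalate to the service orchestration entity)."""
    donors = {node: free for node, free in storage_bank.items()
              if node != requester and free >= gb_needed}
    if not donors:
        return None
    donor = max(donors, key=donors.get)
    storage_bank[donor] -= gb_needed  # storage temporarily lent to the requester
    return donor
```

A None result corresponds to the scenario of paragraph [0061], where the request is escalated beyond the local cluster.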
[0058] The method can be used to configure a multi-service edge cluster model for dynamic infrastructure management within the self-adaptive edge cluster-based network. The multi-service edge cluster model is deployed in the edge nodes (102a-102e) and the cluster master edge node (104). Further, the method can be used to provide a dynamic framework for an edge node cluster participation and an edge cluster infrastructure allocation by the cluster master edge node (104). The cluster master edge node (104) can be used to manage and control a dynamic edge node cluster participation and edge cluster infrastructure allocation based on a plurality of parameters. The plurality of parameters can be, for example, but not limited to the power usage of the edge node (102a-102e), a space of the edge node (102a-102e) and an ambient environmental conditions of the edge node (102a-102e), bandwidth, latency, scalability, QoS, user quality of experience, optimum resource utilization, network characteristics degradation, underlay network services, overlay network services, business demands, and a service-level agreement (SLA) requirements.
[0059] Consider a scenario where the edge node (102a) is running short of storage capacity; the edge node (102a) can then send a request to the cluster master edge node (104) to fulfill temporary storage requirements. Based on the request, the cluster master edge node (104) checks the cluster storage virtual bank and assigns the best suitable storage infrastructure to the requesting edge node (102a). In an example, in intelligent content data networking (iCaching), the edge node (102a) maintains caching segments to fulfill demand for popular content with quick response times, which in turn saves backhaul bandwidth by not demanding the content from regional/core DC storage servers every time. If a particular edge node observes that some content is used frequently by its users, the edge node will cache that content at its location. In case of unavailability of storage, the particular edge node may demand the storage from the cluster master edge node (104), which in turn will provide the necessary storage infrastructure from its nearest possible edge coordinates. The cluster master edge node (104) will decide the tenancy on the cluster edge based on the defined KPIs.
[0060] Further, one edge node can be a tenant of multiple clusters based on the dynamic user requirements arriving at that particular edge node, which may be due to some unpredicted event. As per the cluster node request, the master edge node can provide the storage from the cluster sites to fulfill temporary and immediate requirements.
[0061] If the cluster network cannot fulfill the augmented demand of the edge node, whether due to a limitation of the capacity of the cluster bank, failure to meet the application KPIs, or failure to meet the dynamic KPI requirements, then the cluster master edge node sends a request to the Global Service Orchestrator (GSO) (explained in FIG. 2) to suggest a cluster that can fulfill the augmented demand/requirement of the particular edge node. In this case, the GSO (210) can check the requirement against the other nearby clusters and, based on availability, provide temporary tenancy to the requesting cluster edge node from another nearby cluster edge node bank.
[0062] In another example, the invention may provide creation of a dynamic framework for participation of edge nodes within the edge cluster. One or more edge nodes may be added to or removed from the edge cluster, and the invention may provide dynamic interaction of all the edge nodes within the edge cluster. One or more resources corresponding to each of the edge nodes, as well as the cluster master edge node, may be shared among the edge nodes within the cluster, based on the application requirements and edge node key performance indicators. In another example, the invention may provide a model for dynamic resource management within the edge cluster, which is self-adaptive in nature. This means that the resource management within the edge cluster is dynamically controlled, based on the combined resources of the edge cluster, the application requirements, and the edge node health (or KPIs).
[0063] FIG. 2 is an example illustration of a node reassignment framework (2000) from one cluster to another cluster. The node reassignment framework (2000) includes a plurality of cluster networks (220a-220c) and a service orchestration entity (e.g., Global Service Orchestrator (GSO)) (210). Each cluster network from the plurality of cluster networks (220a-220c) includes a cluster master edge node (104a-104c), respectively. The operations and functions of the cluster master edge nodes (104a-104c) are already explained in connection with FIG. 1. Further, each cluster network from the plurality of cluster networks (220a-220c) communicates with the GSO (210).
[0064] Consider a scenario in which a cluster network cannot fulfill the augmented demand of an edge node (102a-102e), whether due to a limitation of the capacity of the cluster bank, failure to meet the application KPIs, or failure to meet the dynamic KPI requirements. In these scenarios, the cluster master edge node sends a request to the Global Service Orchestrator (GSO) to suggest a cluster that can fulfill the augmented requirement of the particular edge node. The GSO (210) can check the requirement against the other nearby clusters and, based on availability, provide temporary tenancy to the requesting cluster edge node from another nearby cluster edge node bank. If the cluster master node does not meet the major application KPIs and other KPIs, the master node requests the GSO to reallocate the edge node to another nearby cluster that can fulfill the demands. This request is generated by the cluster master node only if the requesting edge node has no dependency on the other cluster edge nodes; in other words, it should neither be a tenant nor be offering any tenancy.
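The escalation decision in paragraph [0064] can be sketched as follows. The field names, return labels, and exact branch ordering are hypothetical; the sketch only captures the stated rule that reassignment may be requested solely for nodes with no cross-cluster tenancy dependencies.

```python
def can_request_reassignment(node):
    """Per [0064], the cluster master may ask the GSO to move a node to
    a nearby cluster only if the node has no dependency on other cluster
    edge nodes: it is neither a tenant nor offering any tenancy."""
    return not node.get("is_tenant") and not node.get("offers_tenancy")

def escalate(node, local_kpis_met):
    """Decide how a cluster master handles a node whose demand it cannot
    meet locally (labels are illustrative stand-ins, not claimed terms)."""
    if local_kpis_met:
        return "serve_locally"
    if can_request_reassignment(node):
        return "request_gso_reassignment"       # move node to another cluster
    return "request_gso_temporary_tenancy"      # borrow from a nearby bank
```

A node that is a tenant elsewhere cannot be reassigned, so the master instead asks the GSO for temporary tenancy from a nearby cluster's bank.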
[0065] FIG. 3 illustrates a block diagram of the cluster master edge node (104). The cluster master edge node (104) includes a master controller (310), a communicator (320), and a memory (330). The master controller (310) is coupled with the communicator (320) and the memory (330). The master controller (310) is configured to check the application requirement and the at least one key performance indicator at the first edge node (102a) from the plurality of edge nodes (102a-102e).
[0066] After checking the application requirement and the at least one key performance indicator at the first edge node (102a) from the plurality of edge nodes (102a-102e), the master controller (310) is configured to assign the first resource, corresponding to the second edge node (102b) in the edge cluster, to the first edge node. The second edge node (102b) comprises a count of resources greater than the resources required by the application executed at the first edge node (102a).
[0067] Alternatively, after checking the application requirement and the at least one key performance indicator at the first edge node (102a) from the plurality of edge nodes (102a-102e), the master controller (310) is configured to assign the first resource from the nearest edge node (i.e., the second edge node (102b) shown in FIG. 1) to the first edge node (102a) when the first edge node (102a) has a pre-defined latency requirement. The pre-defined latency requirement may include at least one of a latency key performance indicator or latency related service level agreements (SLAs). The pre-defined latency requirement may be defined for each application at the edge node as a minimum latency SLA that the application may accept without compromising the quality of experience or quality of service for the user. The nearest edge node (102b) is identified by the master controller (310) based on the application requirement at the first edge node (102a) and one or more KPIs of the nearest edge node (102b).
[0068] Alternatively, after checking the application requirement and the one or more key performance indicators at the first edge node (102a) from the plurality of edge nodes (102a-102e), the master controller (310) is configured to assign the first resource to the first edge node (102a), where the first resource corresponds to the one or more resources associated with the master controller (310) in the intelligent edge cluster model.
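The three assignment alternatives of paragraphs [0066]-[0068] can be sketched as one selection routine. The dictionary fields, the ordering of the checks, and the "escalate" fallback are illustrative assumptions layered on top of the disclosure, not a definitive implementation.

```python
def pick_source(app, peer_nodes, master_resources):
    """Choose where the first resource comes from: the nearest peer when a
    latency SLA applies ([0067]), a peer with surplus resources ([0066]),
    or the master controller's own pool ([0068]). Hypothetical records:
    app = {"need": units, "latency_sla_ms": float or absent}."""
    if app.get("latency_sla_ms") is not None:
        # Stringent latency: pull from the nearest peer meeting the KPIs.
        eligible = [n for n in peer_nodes if n["free"] >= app["need"]]
        if eligible:
            return min(eligible, key=lambda n: n["distance"])["id"]
    # Otherwise, any peer holding more resources than the app requires.
    surplus = [n for n in peer_nodes if n["free"] > app["need"]]
    if surplus:
        return surplus[0]["id"]
    # Fall back to the master's own resources in the virtual pool.
    if master_resources >= app["need"]:
        return "master"
    return None  # nothing in this cluster: escalate to the GSO

nodes = [{"id": "102b", "free": 8, "distance": 3},
         {"id": "102c", "free": 5, "distance": 1}]
source = pick_source({"need": 4, "latency_sla_ms": 10}, nodes, master_resources=16)
```

With a latency SLA present, the nearest eligible peer (102c) wins even though 102b holds more surplus.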
[0069] Further, the master controller (310) is configured to instruct one or more commands to another edge node (102b-102e) in the intelligent edge cluster model for assigning one or more resources to the first edge node (102a).
[0070] Further, the master controller (310) is configured to dynamically assign a second resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node (102a), where the first resource corresponds to the one or more resources associated with the second edge node (102b), and where the second resource corresponds to the one or more resources associated with a third edge node (102c).
[0071] The master controller (310) is configured to execute instructions stored in the memory (330) and to perform various processes. The communicator (320) is configured for communicating internally between internal hardware components and with external devices via one or more networks. The memory (330) also stores instructions to be executed by the master controller (310). At least one of the plurality of modules may be implemented through an AI (artificial intelligence) model. A function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor.
[0072] The master controller (310) may include one or more processors. The one or more processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
[0073] The one or more processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
[0074] Here, being provided through learning means that, by applying a learning algorithm to a plurality of learning data, the predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device itself, and/or may be implemented through a separate server/system.
[0075] The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation by combining the output of the previous layer with the plurality of weight values. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
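A single layer operation of the kind described can be sketched as follows, assuming a fully connected layer with a ReLU activation; both choices are made purely for illustration and are not specified by the disclosure.

```python
def dense_layer(prev_output, weights, bias):
    """One layer operation: combine the previous layer's output with this
    layer's weight values (matrix-vector product plus bias), then apply a
    ReLU non-linearity (illustrative choice of activation)."""
    pre_activation = [sum(w * x for w, x in zip(row, prev_output)) + b
                      for row, b in zip(weights, bias)]
    return [max(0.0, v) for v in pre_activation]

# Two inputs, two output units: weights is a 2x2 matrix of weight values.
y = dense_layer([1.0, 2.0], [[0.5, 0.25], [1.0, 1.0]], [0.0, 0.5])
```

Stacking such layers, each consuming the previous layer's output, yields the multi-layer networks (CNN, DNN, RNN, etc.) listed above.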
[0076] The learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
[0077] Although FIG. 3 shows various hardware components of the cluster master edge node (104), it is to be understood that other embodiments are not limited thereto. Alternatively, the cluster master edge node (104) may include fewer or more components. Further, the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention. One or more components can be combined together to perform the same or a substantially similar function in the cluster master edge node (104).
[0078] FIG. 4 is a flow chart (S400) illustrating a method for deploying the intelligent edge cluster model. The operations (S402-S408) are performed by the cluster master edge node (104). At S402, the method includes checking the application requirement and the one or more key performance indicators at the first edge node from the plurality of edge nodes (102a-102e). At S404, the method includes dynamically assigning the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node. At S406, the method includes instructing one or more commands to another edge node in the intelligent edge cluster model for assigning one or more resources to the first edge node. At S408, the method includes dynamically assigning a second resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node.
[0079] The method can be used to intelligently assign the resources of all the edge nodes in the cluster to the UE application, based on the UE application KPIs and the edge node KPIs. The proposed method is dynamic: the UE application requirements and the current condition of the selected edge node are verified by checking the KPIs for the UE application and all edge nodes. It provides data on the requirements and the available resources (in the shareable resource pool created by adding the network resources of all the edge nodes) and further provides optimum ways for the master edge node to allocate edge node resources. Further, the method provides intelligent, real-time assignment of resources from one or more edge nodes.
[0080] The method can be used to configure a multi-service edge cluster model for dynamic infrastructure management within the self-adaptive edge cluster-based network. The multi-service edge cluster model is deployed in the edge nodes (102a-102e) and the cluster master edge node (104). Further, the method can be used to provide a dynamic framework for edge node cluster participation and edge cluster infrastructure allocation by the cluster master edge node (104). The cluster master edge node (104) can be used to manage and control dynamic edge node cluster participation and edge cluster infrastructure allocation based on a plurality of parameters. The plurality of parameters can be, for example, but is not limited to, the power usage of the edge nodes (102a-102e), the space of the edge nodes (102a-102e), the ambient environmental conditions of the edge nodes (102a-102e), bandwidth, latency, scalability, QoS, user quality of experience, optimum resource utilization, network characteristics degradation, underlay network services, overlay network services, business demands, and service-level agreement (SLA) requirements.
[0081] The method provides checking of the edge node key performance indicators (KPIs) at the master node and adaptively assigns the resources to the user node by pulling the resources from the shortest-distance nodes (for stringent-KPI applications/low latency requirements) or from the master node (for high bandwidth requirements). This model provides optimum resource usage within the local edge cluster and provides flexibility to the telecom service provider, which can use basic hardware infrastructure at the edge nodes.
[0082] In the proposed method, dynamic resource assignment using the virtual resource bank in the cluster is performed by assigning the resources to the application by the local edge node (if there is no resource scarcity), by the nearest edge nodes (for low latency applications/stringent QoS), or by the resource pool from the master edge node (for high bandwidth applications), based on the edge node KPI requirements.
[0083] FIG. 5 is an example flow chart (S500) illustrating a method for managing and controlling a dynamic edge node participation and the edge cluster infrastructure allocation. The operations (S502 and S504) are performed by the cluster master edge node (104).
[0084] At S502, the method includes acquiring the plurality of parameters of the edge nodes (102a-102e) in real-time and at a regular time interval. The plurality of parameters can be, for example, but is not limited to, the power usage of the edge nodes (102a-102e), the space of the edge nodes (102a-102e), the ambient environmental conditions of the edge nodes (102a-102e), bandwidth, latency, scalability, QoS, user quality of experience, optimum resource utilization, network characteristics degradation, underlay network services, overlay network services, business demands, and the SLA requirements. At S504, the method includes managing and controlling the dynamic edge node cluster participation and edge cluster infrastructure allocation by a dynamic selection of edge host nodes and allocation of their associated network resources to the UE application. The plurality of parameters is acquired over a period of time and used to train a machine learning model.
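The periodic parameter acquisition of step S502 can be sketched as follows. The probe callable `read_metric`, the parameter tuple, and the sampling loop shape are hypothetical; the disclosure only specifies that the parameters are acquired in real time at a regular interval and later fed to a machine learning model.

```python
import time

# Subset of the parameters listed in paragraph [0084] (names illustrative).
PARAMETERS = ("power_usage", "space", "ambient_conditions",
              "bandwidth", "latency", "scalability", "qos")

def poll_edge_nodes(node_ids, read_metric, interval_s, rounds):
    """S502: sample every parameter of every edge node at a regular
    interval; the returned history is the kind of time-series that a
    machine learning model could be trained on (S504)."""
    history = []
    for _ in range(rounds):
        snapshot = {nid: {p: read_metric(nid, p) for p in PARAMETERS}
                    for nid in node_ids}
        history.append(snapshot)
        time.sleep(interval_s)
    return history

# Example with a stub probe that returns a constant reading.
samples = poll_edge_nodes(["102a", "102b"],
                          read_metric=lambda nid, p: 1.0,
                          interval_s=0.0, rounds=2)
```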
[0085] In other words, the cluster master edge node (104) performs comparison and analysis of the KPIs (UE application KPIs as well as edge node KPIs), based on which the participation and allocation of edge nodes and their resources are controlled.
[0086] FIG. 6 is an example flow chart (S600) illustrating a method for dynamically selecting the edge nodes (102a-102e). The operations (S602 and S604) are performed by the cluster master edge node (104). At S602, the method includes determining the minimum resource requirement. At S604, the method includes dynamically selecting the edge nodes (102a-102e) based on the determined minimum resource requirement.
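Steps S602-S604 can be sketched as a simple filter over node records. The CPU/memory fields are hypothetical stand-ins for whatever minimum resource requirement the cluster master computes.

```python
def select_nodes(nodes, min_cpu, min_mem_gb):
    """S602: the minimum resource requirement is given; S604: dynamically
    select the edge nodes that satisfy it (record fields illustrative)."""
    return [n["id"] for n in nodes
            if n["cpu"] >= min_cpu and n["mem_gb"] >= min_mem_gb]

pool = [{"id": "102a", "cpu": 2, "mem_gb": 4},
        {"id": "102b", "cpu": 8, "mem_gb": 16},
        {"id": "102c", "cpu": 4, "mem_gb": 8}]
chosen = select_nodes(pool, min_cpu=4, min_mem_gb=8)
```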
[0087] FIG. 7 is an example flow chart (S700) illustrating a method for joining a new edge host node into the cluster network. At S702, the new edge node sends a request to the cluster master edge node (104). At S704, the new edge node receives an acceptance message from the cluster master edge node (104). At S706, the new edge host node joins the cluster based on the acceptance message.
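The join procedure of steps S702-S706 can be sketched as follows. The admission-policy callable is a hypothetical stand-in for however the cluster master decides to accept a node; the disclosure does not specify its criteria.

```python
def join_cluster(new_node_id, admission_policy, cluster_members):
    """S702: the new node requests membership from the cluster master;
    S704: the master answers with an acceptance message; S706: the node
    joins the cluster on acceptance. Returns the acceptance decision."""
    accepted = admission_policy(new_node_id)   # S702/S704
    if accepted:
        cluster_members.add(new_node_id)       # S706
    return accepted

members = {"102a", "102b"}
ok = join_cluster("102f", lambda nid: True, members)
```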
[0088] FIG. 8 is an example flow chart (S800) illustrating a method for handling resource requirements in the multi-service edge cluster connectivity architecture (1000). At S802, the method includes determining that one or more of the edge nodes (102a-102e) lacks a required resource. At S804, the method includes sending a request including the resource-related information to the cluster master edge node (104) to fulfill temporary storage requirements. At S806, the method includes receiving the resource from the cluster storage bank, created by pooling the resources of all the edge nodes, by assigning the best suitable storage infrastructure or resources to the respective edge node(s) (102a-102e) that requested the resources. In other words, the resource bank is created by pooling of network resources by all the edge nodes (102a-102e). The cluster master edge node (104) may also add its associated resources to the resource bank. Further, when one or more edge nodes (102a-102e) lack the resources to support a UE application, the one or more edge nodes (102a-102e) request the master edge node (104) to allocate some resources from the resource bank. In this case, the requirement of the resources is temporary, as the resources are required only to fulfill the need of the current UE application.
[0089] Further, the edge node (102a-102e) includes a processor (not shown), a communicator (not shown), and a memory (not shown). The processor is configured to execute instructions stored in the memory and to perform various processes. The communicator is configured for communicating internally between internal hardware components and with external devices via one or more networks. The memory also stores instructions to be executed by the processor.
[0090] The various actions, acts, blocks, steps, or the like in the flow diagrams (S400, S500, S600, S700, and S800) may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0091] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0092] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above-described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
[0093] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0094] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0095] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0096] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0097] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0098] Conditional language used herein, such as, among others, "can," "may," "might," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list.
[0099] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[00100] While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.

CLAIMS:
We claim:
1. A method for deploying an intelligent edge cluster model, the intelligent edge cluster model comprising a plurality of edge nodes (102a-102e) and a master controller (310), each of the plurality of edge nodes (102a-102e) and the master controller (310) having corresponding one or more resources, each of the one or more resources corresponding to the plurality of edge nodes (102a-102e) and the master controller (310) combining to form a virtual resource pool, the virtual resource pool being capable of fetching the one or more resources from any of the plurality of edge nodes (102a-102e) and the master controller (310), the method comprising:
checking, by the master controller (310), an application requirement and at least one key performance indicator at a first edge node from the plurality of edge nodes (102a-102e); and
dynamically assigning, by the master controller (310), a first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node, based on the application requirement and the at least one key performance indicator.

2. The method of claim 1, further comprising:
instructing, by the master controller (310), one or more commands to another edge node in the intelligent edge cluster model for assigning one or more resources to the first edge node.

3. The method of claim 1, wherein dynamically assigning, by the master controller (310), the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node comprises:
assigning the first resource corresponding to a second edge node in the intelligent edge cluster model, wherein the second edge node comprises a count of resources greater than the resources required by an application executed at the first edge node.

4. The method of claim 1, wherein dynamically assigning, by the master controller (310), the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node comprises:
assigning the first resource from a nearest edge node to the first edge node when the first edge node has a pre-defined latency requirement, wherein the pre-defined latency requirement includes at least one of a latency key performance indicator or latency related service level agreements (SLAs), and the nearest edge node is identified by the master controller (310) based on the application requirement at the first edge node and one or more KPIs of the nearest edge node.

5. The method of claim 1, wherein dynamically assigning, by the master controller (310), the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node comprises:
assigning the first resource to the first edge node, wherein the first resource corresponds to the one or more resources associated with the master controller in the intelligent edge cluster model.

6. The method of claim 1, further comprising:
dynamically assigning, by the master controller (310), a second resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node,
wherein the first resource corresponds to the one or more resources associated with a second edge node, and
wherein the second resource corresponds to the one or more resources associated with a third edge node.

7. The method of claim 1, wherein the at least one key performance indicator includes one or more of power, space, time, and network links associated with each of the plurality of edge nodes.

8. The method of claim 1, wherein the one or more resources includes one or more of physical resources, functions, applications, and virtual machines.

9. The method of claim 1, further comprising:
determining, by the master controller (310), that the application requirement and the at least one key performance indicator at the first edge node from the plurality of edge nodes is not met using the first resource;
sending, by the master controller (310), a request to assign one or more resources to a service orchestration entity (210) based on the determination, wherein the request comprises the application requirement and the at least one key performance indicator; and
dynamically assigning, by the master controller (310), the one or more resources from the service orchestration entity (210) based on the request.

10. The method of claim 9, wherein dynamically assigning, by the master controller (310), the one or more resources from the service orchestration entity (210) comprises:
reallocating the first edge node virtually in a second cluster network by the service orchestration entity (210).

11. The method of claim 9, wherein dynamically assigning, by the master controller (310), the one or more resources from the service orchestration entity (210) comprises:
identifying a second edge cluster network to meet the application requirement and the at least one key performance indicator at the first edge node; and
dynamically assigning, by the master controller (310), the one or more resources from another intelligent edge cluster model through the service orchestration entity (210).

12. A cluster master edge node (104) for deploying an intelligent edge cluster model, the intelligent edge cluster model comprising a plurality of edge nodes (102a-102e) and a master controller (310), each of the plurality of edge nodes (102a-102e) and the master controller (310) having corresponding one or more resources, each of the one or more resources corresponding to the plurality of edge nodes (102a-102e) and the master controller (310) combining to form a virtual resource pool, the virtual resource pool being capable of fetching the one or more resources from any of the plurality of edge nodes (102a-102e) and the master controller (310), the cluster master edge node (104) comprising:
a memory (330);
the master controller (310), coupled with the memory (330), configured to:
check an application requirement and at least one key performance indicator (KPI) at a first edge node from the plurality of edge nodes (102a-102e); and
dynamically assign a first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node, based on the application requirement and the at least one KPI.

13. The cluster master edge node (104) of claim 12, wherein the master controller (310) is configured to issue one or more commands to another edge node in the intelligent edge cluster model for assigning one or more resources to the first edge node.

14. The cluster master edge node (104) of claim 12, wherein, to dynamically assign the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node, the master controller (310) is configured to:
assign the first resource corresponding to a second edge node in the intelligent edge cluster model, wherein the second edge node comprises a count of resources greater than the resources required by an application executed at the first edge node.

15. The cluster master edge node (104) of claim 12, wherein, to dynamically assign the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node, the master controller (310) is configured to:
assign the first resource from a nearest edge node to the first edge node when the first edge node has a pre-defined latency requirement, wherein the pre-defined latency requirement includes at least a latency key performance indicator, and wherein the nearest edge node is identified by the master controller (310) based on the application requirement at the first edge node and one or more KPIs of the nearest edge node.
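The latency-driven selection of claim 15 amounts to filtering candidate donor nodes by their reported latency KPI and capacity, then taking the nearest eligible one. The function and field names below are hypothetical, chosen only to make the selection rule concrete:

```python
def pick_nearest_node(candidates, required_cpu, max_latency_ms):
    """Illustrative donor selection per claim 15: keep only nodes that can
    satisfy the resource requirement within the latency KPI, then pick the
    one with the lowest reported latency (treated here as 'nearest')."""
    eligible = [n for n in candidates
                if n["free_cpu"] >= required_cpu
                and n["latency_ms"] <= max_latency_ms]
    if not eligible:
        return None
    return min(eligible, key=lambda n: n["latency_ms"])["name"]

nodes = [
    {"name": "edge-b", "free_cpu": 6, "latency_ms": 12},  # too far
    {"name": "edge-c", "free_cpu": 6, "latency_ms": 3},
    {"name": "edge-d", "free_cpu": 2, "latency_ms": 1},   # too small
]
print(pick_nearest_node(nodes, required_cpu=4, max_latency_ms=10))  # edge-c
```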

16. The cluster master edge node (104) of claim 12, wherein, to dynamically assign the first resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node, the master controller (310) is configured to:
assign the first resource to the first edge node, wherein the first resource corresponds to the one or more resources associated with the master controller (310) in the intelligent edge cluster model.

17. The cluster master edge node (104) of claim 12, wherein the master controller (310) is further configured to:
dynamically assign a second resource from the one or more resources in the virtual resource pool of the intelligent edge cluster model to the first edge node,
wherein the first resource corresponds to the one or more resources associated with a second edge node, and
wherein the second resource corresponds to the one or more resources associated with a third edge node.

18. The cluster master edge node (104) of claim 12, wherein the KPIs include one or more of power, space, time, and network links associated with each of the plurality of edge nodes.

19. The cluster master edge node (104) of claim 12, wherein the one or more resources includes one or more of physical resources, functions, applications, and virtual machines.

20. The cluster master edge node (104) of claim 12, wherein the master controller (310) is further configured to:
determine that the application requirement and the at least one key performance indicator at the first edge node from the plurality of edge nodes are not met using the first resource;
send a request to assign one or more resources to a service orchestration entity (210) based on the determination, wherein the request comprises the application requirement and the at least one key performance indicator; and
dynamically assign the one or more resources from the service orchestration entity (210) based on the request.
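The escalation path of claim 20 can be sketched as a two-step decision: serve from the local virtual pool if possible, otherwise forward a request carrying the application requirement and KPI to the service orchestration entity (210), which may satisfy it from a second edge cluster (claims 21-22). The `orchestrator` callable and the request/response dictionaries below are hypothetical stand-ins for that entity's interface:

```python
def assign_with_escalation(pool_free_cpu, required_cpu, orchestrator):
    """If the local virtual pool cannot meet the requirement, forward a
    request (requirement + KPI) to the service orchestration entity (210)."""
    if pool_free_cpu >= required_cpu:
        return ("local", required_cpu)
    request = {"required_cpu": required_cpu, "kpi": {"latency_ms": 10}}
    return ("remote", orchestrator(request))

# Hypothetical orchestration entity that locates a second edge cluster
# able to satisfy the forwarded requirement.
def orchestrator(request):
    return {"cluster": "cluster-2", "granted_cpu": request["required_cpu"]}

print(assign_with_escalation(pool_free_cpu=2, required_cpu=4,
                             orchestrator=orchestrator))
```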

21. The cluster master edge node (104) of claim 20, wherein, to dynamically assign the one or more resources from the service orchestration entity (210), the master controller (310) is configured to:
reallocate the first edge node virtually in a second edge cluster network by the service orchestration entity (210).

22. The cluster master edge node (104) of claim 20, wherein, to dynamically assign the one or more resources from the service orchestration entity (210), the master controller (310) is configured to:
identify a second edge cluster network to meet the application requirement and the at least one key performance indicator at the first edge node; and
dynamically assign the one or more resources from the second edge cluster network through the service orchestration entity (210).

Documents

Application Documents

# Name Date
1 202011035654-FORM 18 [02-08-2024(online)].pdf 2024-08-02
2 202011035654-STATEMENT OF UNDERTAKING (FORM 3) [19-08-2020(online)].pdf 2020-08-19
3 202011035654-AMENDED DOCUMENTS [08-09-2021(online)].pdf 2021-09-08
4 202011035654-PROVISIONAL SPECIFICATION [19-08-2020(online)].pdf 2020-08-19
5 202011035654-POWER OF AUTHORITY [19-08-2020(online)].pdf 2020-08-19
6 202011035654-FORM 13 [08-09-2021(online)].pdf 2021-09-08
7 202011035654-FORM-26 [08-09-2021(online)].pdf 2021-09-08
8 202011035654-FORM 1 [19-08-2020(online)].pdf 2020-08-19
9 202011035654-POA [08-09-2021(online)].pdf 2021-09-08
10 202011035654-DRAWINGS [19-08-2020(online)].pdf 2020-08-19
11 202011035654-DECLARATION OF INVENTORSHIP (FORM 5) [19-08-2020(online)].pdf 2020-08-19
12 202011035654-COMPLETE SPECIFICATION [25-03-2021(online)].pdf 2021-03-25
13 202011035654-Proof of Right [25-03-2021(online)].pdf 2021-03-25
14 202011035654-DRAWING [25-03-2021(online)].pdf 2021-03-25
15 202011035654-ENDORSEMENT BY INVENTORS [25-03-2021(online)].pdf 2021-03-25
16 202011035654-FORM-26 [25-03-2021(online)].pdf 2021-03-25
17 202011035654-FORM 3 [25-03-2021(online)].pdf 2021-03-25