Abstract: [0001] The present disclosure provides a method and an edge orchestrator platform (100) for providing a converged network infrastructure. The edge orchestrator platform (100) is connected to a global service orchestrator (1102a). The edge orchestrator platform (100) includes a multi access controller (110) and an intelligent allocation unit (120). The multi access controller (110) is connected to a plurality of edge nodes (200), a plurality of network controllers (300), and a plurality of user devices (400). The plurality of user devices (400) is connected to the plurality of network controllers (300). The plurality of network controllers (300) corresponds to one or more last mile networks (1132). The intelligent allocation unit (120) dynamically allocates resources from the plurality of edge nodes (200) to one or more applications at the plurality of network controllers (300). FIG. 2
TECHNICAL FIELD
[0001] The present disclosure relates to communication systems, and more specifically relates to a method and an edge orchestrator platform for providing a converged network infrastructure.
BACKGROUND
[0002] In general, edge orchestration is a modern management and orchestration platform that provides enterprise-grade solutions for the management of edge deployments across both enterprise user edges and telecom service provider edges. In existing methods and systems, mesh controllers within an open edge orchestrator provide mesh services and utilization of resources from multiple edge nodes. The existing methods and systems are not flexible enough to switch between multiple network controllers. Further, the existing methods and systems do not provide efficient resource utilization and involve high latency computation.
[0003] For example, a prior art reference "US20170048308A1" discloses a method and apparatus for network-conscious edge-to-cloud data aggregation, connectivity, analytics, and actuation. The method and apparatus operate for the detection and actuation of events based on sensed data, with the assistance of an edge computing software-defined fog engine that interconnects with other network elements via programmable internet exchange points to ensure end-to-end virtualization with cloud data centers and, hence, resource reservations for guaranteed quality of service in event detection.
[0004] Another prior art reference "WO2017035536A1" discloses a method for enabling intelligence at the edge. Features include: triggering by sensor data in a software layer hosted on either a gateway device or an embedded system, where the software layer is connected to a local-area network; a repository of services, applications, and data processing engines made accessible by the software layer; matching the sensor data with semantic descriptions of the occurrence of specific conditions through an expression language made available by the software layer; automatic discovery of pattern events by continuously executing expressions; intelligently composing services and applications across the gateway device and embedded systems across the network managed by the software layer for chaining applications and analytics expressions; optimizing the layout of the applications and analytics based on resource availability; monitoring the health of the software layer; and storing raw sensor data or results of expressions in a local time-series database or cloud storage. Services and components can be containerized to ensure smooth running in any gateway environment.
[0005] In view of the above discussion and prior art references, there exists a need for a dynamic selection of network controllers corresponding to different last mile networks and respective adjustment of workloads on different edge nodes to serve an application.
[0006] Any references to methods, apparatus or documents of the prior art are not to be taken as constituting any evidence or admission that they formed, or form part of the common general knowledge.
OBJECT OF THE DISCLOSURE
[0007] A principal object of the present disclosure is to disclose a method and an edge orchestrator platform for providing a converged network infrastructure.
[0008] Another object of the present disclosure is to provide the edge orchestrator platform that performs dynamic switching of network controllers (corresponding to different last mile networks) and respective adjustment of workloads on different edge nodes, for efficient resource allocation from edge nodes.
[0009] Another object of the present disclosure is to provide a dynamic selection of network controllers corresponding to different last mile networks and respective adjustment of workloads on different edge nodes to serve an application.
[0010] Another object of the present disclosure is to push data and application from a master orchestrator to an open edge orchestrator platform.
[0011] Another object of the present disclosure is to connect with a plurality of edge nodes and a plurality of network controllers, where each network controller corresponds to a different network technology.
[0012] Another object of the present disclosure is to utilize resources from a plurality of edge nodes intelligently by dynamically allocating and freeing resources based on resource demand by a plurality of applications.
[0013] Another object of the present disclosure is to interact with the plurality of network controllers corresponding to the different network technologies for dynamically selecting the required network controller for serving the application.
[0015] Another object of the present disclosure is to adjust workloads on different edge nodes to provide services to an end user via the network controller by enabling one or more vendor-independent Application Programming Interfaces (APIs).
[0015] Another object of the present disclosure is to switch between multiple network controllers in a flexible manner, so as to achieve an efficient resource utilization and a low latency computation in the edge orchestrator platform.
SUMMARY
[0016] In an aspect, an edge orchestrator platform is provided for a converged network infrastructure. The orchestrator platform is connected to a global service orchestrator. The edge orchestrator platform includes a multi access controller and an intelligent allocation unit. The multi access controller is connected to a plurality of edge nodes, a plurality of network controllers, and a plurality of user devices. The plurality of user devices is connected to the plurality of network controllers. The plurality of network controllers corresponds to one or more last mile networks. The intelligent allocation unit dynamically allocates resources from the plurality of edge nodes to one or more applications at the plurality of network controllers.
[0017] The edge orchestrator platform comprises a switching unit configured to switch a network connection from a first network controller to a second network controller in the plurality of network controllers. The edge orchestrator platform comprises a resource unit configured to push one or more network resources from the global service orchestrator to at least one of the plurality of edge nodes. Further, the edge orchestrator platform acts as a local orchestrator to the plurality of edge nodes and the plurality of network controllers.
[0018] The one or more last mile networks include at least one of a passive optical network, a radio access network, and a Wireless Fidelity (Wi-Fi) network.
[0019] In another aspect, a method is provided for a converged network infrastructure using an edge orchestrator platform. The edge orchestrator platform is connected to a global service orchestrator. The method includes connecting with a plurality of edge nodes, a plurality of network controllers, and a plurality of user devices connected to the plurality of network controllers. The plurality of network controllers corresponds to one or more last mile networks.
[0020] The method includes pushing one or more resources from the global service orchestrator to the edge orchestrator platform. The method includes dynamically allocating the one or more resources from the plurality of edge nodes to one or more applications at the plurality of network controllers. The method includes communicating with the plurality of network controllers, where the plurality of network controllers corresponds to at least one of a passive optical network, a radio access network, and a Wi-Fi network. Further, the method includes dynamically selecting a network controller from the plurality of network controllers for facilitating the one or more applications.
[0021] Further, the method includes adjusting one or more workloads on at least one of the plurality of edge nodes by enabling one or more vendor independent APIs.
[0022] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and
numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0023] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various drawings. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0024] FIG. 1 is a block diagram of an edge orchestrator platform for providing a converged network infrastructure.
[0025] FIG. 2 is a flow chart illustrating a method for providing the converged network infrastructure.
[0026] FIG. 3a and FIG. 3b illustrate an overview of an example architecture implemented for the converged network infrastructure.
DETAILED DESCRIPTION
[0027] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced without these specific details. In other instances, well known methods, procedures, and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0028] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0029] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments
presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0030] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term "or" as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0031] The present disclosure provides an edge orchestrator platform to provide a converged network infrastructure. The edge orchestrator platform is connected to a global service orchestrator. The edge orchestrator platform includes a multi access controller and an intelligent allocation unit. The multi access controller is connected to a plurality of edge nodes, a plurality of network controllers, and a plurality of user devices. The plurality of user devices is connected to the plurality of network controllers. The plurality of network controllers corresponds to one or more last mile networks. The intelligent allocation unit dynamically allocates resources from the plurality of edge nodes to one or more applications at the plurality of network controllers.
[0032] Referring now to the drawings, and more particularly to FIGS. 1 through 3b.
[0033] FIG. 1 is a block diagram of an edge orchestrator platform (100) for providing a converged network infrastructure. The converged network infrastructure structures one or more wireless communication systems by grouping multiple components into a single optimized computing package. The edge orchestrator platform (100) includes a multi access controller (110), an intelligent allocation unit (120), a switching unit (130), a resource unit (140), a processor (150), and a memory (160). The processor (150) is coupled with the multi access controller (110), the intelligent allocation unit (120), the switching unit (130), the resource unit (140), and the memory (160).
[0034] The multi access controller (110) is connected to a plurality of edge nodes (200), a plurality of network controllers (300), and a plurality of user devices (400). The plurality of user devices (400) is connected to the plurality of network controllers (300). The plurality of edge nodes (200) may be, for example, but not limited to, on-premise server edges, access server edges, regional server edges, or the like. The plurality of edge nodes (200) may be a generic way of referring to any edge device, edge server, or edge gateway on which edge computing can be performed. The plurality of user devices (400) may be, for example, but not limited to, smart phones, smart watches, smart TVs, smart washing machines, Personal Digital Assistants (PDAs), tablet computers, laptop computers, virtual reality devices, immersive systems, and Internet of Things (IoT) devices. The plurality of network controllers (300) may be, for example, but not limited to, Fiber-to-the-X (FTTx) controllers, Wi-Fi controllers, open RAN controllers, or the like. The FTTx controllers may be, for example, but not limited to, fiber-to-the-home and fiber-to-the-premises controllers. The plurality of network controllers (300) corresponds to one or more last mile networks (1132) (as shown in FIG. 3b). The one or more last mile networks (1132) may be, for example, but not limited to, a passive optical network (1132a), a radio access network (1132c), and a Wireless Fidelity (Wi-Fi) network (1132b).
[0035] The multi access controller (110) may be implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active
electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware.
[0036] The intelligent allocation unit (120) may be configured to dynamically allocate one or more resources from the plurality of edge nodes (200) to one or more applications at the plurality of network controllers (300). The resource may be, for example, but not limited to, a physical resource, a function, a virtual machine, an application programming interface, virtual functions, or the like. The function may be, for example, but not limited to, a network function, a service virtualization function, a resource management function, or a node management function. The application may be, for example, but not limited to, a virtual reality (VR) application, an enterprise application, a content delivery application, a gaming application, a networking application, or the like.
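The allocation behaviour described above can be illustrated with a minimal sketch. This is a hypothetical example only: the node names, the abstract capacity units, and the greedy best-fit policy are illustrative assumptions and not part of the disclosure. Resources are leased from the edge node with the most free capacity and released when the application no longer needs them.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    capacity: int   # abstract resource units (e.g. vCPUs), an assumption
    allocated: int = 0

    @property
    def free(self) -> int:
        return self.capacity - self.allocated


class IntelligentAllocationUnit:
    """Hypothetical allocator: lease demand from the edge node with the
    most free capacity, and free it again when the application is done."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.leases = {}  # app_id -> (node, units)

    def allocate(self, app_id: str, demand: int) -> str:
        # pick the node with the most headroom (greedy best-fit assumption)
        node = max(self.nodes, key=lambda n: n.free)
        if node.free < demand:
            raise RuntimeError("no edge node can satisfy the demand")
        node.allocated += demand
        self.leases[app_id] = (node, demand)
        return node.name

    def release(self, app_id: str) -> None:
        # return the leased units to the node when the application ends
        node, units = self.leases.pop(app_id)
        node.allocated -= units
```

A real implementation would also weigh latency, locality, and policy constraints; the sketch shows only the dynamic allocate-and-free cycle.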
[0037] The intelligent allocation unit (120) may be implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware.
[0038] The switching unit (130) may be configured to switch a network connection, i.e., from a first network controller to a second network controller in the plurality of network controllers (300). The switching of the network connection from the first network controller to the second network controller occurs based on demand. The switching unit (130) may be implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware.
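As a hypothetical sketch of this demand-based switching (the controller names and the bandwidth-headroom criterion are illustrative assumptions, not the disclosed implementation), the switching unit may hand a session over to the first controller that can satisfy the current demand:

```python
class SwitchingUnit:
    """Hypothetical sketch: keep an ordered list of last-mile controllers
    (e.g. Wi-Fi, PON, RAN) and move the session to the first controller
    that reports enough headroom for the current demand."""

    def __init__(self, controllers):
        # controllers: list of (name, available_bandwidth) pairs
        self.controllers = controllers
        self.active = controllers[0][0]

    def switch_on_demand(self, required_bandwidth: float) -> str:
        for name, available in self.controllers:
            if available >= required_bandwidth:
                if name != self.active:
                    self.active = name  # hand the session over
                return self.active
        raise RuntimeError("no network controller meets the demand")
```

For example, a session started on a loaded Wi-Fi controller would be moved to a PON controller when the demanded bandwidth exceeds the Wi-Fi headroom, and back again once demand falls.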
[0039] Further, the resource unit (140) may be configured to push one or more network resources from a global service orchestrator (1102a) to at least one of the plurality of edge nodes (200). The resource unit (140) may be implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic
components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware.
[0040] The edge orchestrator platform (100) acts as a local orchestrator to the plurality of edge nodes (200) and the plurality of network controllers (300). Further, the edge orchestrator platform (100) is connected to a global service orchestrator (GSO) (1102a). The GSO (1102a) receives a service order request from a self-service portal. Based on a model-driven service design concept, the GSO (1102a) implements a rapid conversion of user orders to network resources (e.g., software-defined networking (SDN)/network virtual function (NVF) resources, etc.), and provides entire-process management of automatic service fulfilment and assurance. Further, the GSO (1102a) provides orchestration capabilities across vendors, platforms, and virtual and physical networks. The GSO (1102a) provides full lifecycle management and assurance for services based on closed-loop policy control. The closed-loop policy control is defined by a service provider. Further, the GSO (1102a) provides a unified capability exposure interface to accelerate service innovation and new services onboarding in the edge orchestrator platform (100). The operations and functions of the global service orchestrator (1102a) are explained in FIG. 3a and FIG. 3b.
[0041] The edge orchestrator platform (100) performs dynamic switching of the plurality of network controllers (300) (corresponding to different last mile networks) and respective adjustment of workloads on the plurality of edge nodes (200) for efficient resource allocation from the plurality of edge nodes (200). Further, the edge orchestrator platform (100) adjusts one or more workloads on at least one of the plurality of edge nodes (200) by enabling one or more vendor-independent APIs. A vendor-independent API is a publicly available application programming interface that provides developers with programmatic access to a proprietary software application or web service.
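The vendor-independent API concept can be illustrated with a minimal sketch, assuming a common interface that each vendor's controller adapter implements; the class and method names below are hypothetical and chosen for illustration only:

```python
from abc import ABC, abstractmethod

class EdgeNodeAPI(ABC):
    """Hypothetical vendor-independent interface: every vendor adapter
    exposes the same workload-adjustment call, so the orchestrator does
    not depend on any one vendor's proprietary API."""

    @abstractmethod
    def scale_workload(self, app_id: str, replicas: int) -> int: ...


class VendorAAdapter(EdgeNodeAPI):
    """One vendor's adapter; internally it would translate the call into
    that vendor's proprietary management API."""

    def __init__(self):
        self.workloads = {}

    def scale_workload(self, app_id, replicas):
        self.workloads[app_id] = replicas
        return replicas


def rebalance(adapters, app_id, replicas_per_node):
    # adjust the same application uniformly across heterogeneous edge nodes
    return [a.scale_workload(app_id, replicas_per_node) for a in adapters]
```

Because each adapter satisfies the same interface, workloads can be adjusted on edge nodes from different vendors with a single code path.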
[0042] Further, the edge orchestrator platform (100) acts as a local orchestrator to the plurality of edge nodes (200) and connects with the global service orchestrator (1102a), so as to provide dynamic edge slicing, allocation of resources to last mile connectivity networks, network switching, and other edge services to the end users in an efficient manner. In an example, applications with high bandwidth requirements, such as Augmented Reality (AR), Virtual Reality (VR), or vehicle-to-vehicle (V2V) communication, which are required to be processed with ultra-low latency, may benefit significantly from the proposed edge orchestrator platform (100), as all the intelligence and required application data for facilitating services (e.g., resource allocation, switching of networks) are pushed to the local edge nodes (200) by the GSO (1102a).
[0043] Unlike conventional systems, in the proposed edge orchestrator platform (100), the multi access controller (110) is connected with multiple network controllers (300) and the plurality of edge nodes (200) to enable switching of connection between different networks.
[0044] The proposed edge orchestrator platform (100) supports a large number of edge clouds and manages the network edge, the on-premise edge, and enterprise edges in a consistent manner. The proposed edge orchestrator platform (100) supports various types of applications and services that are required to be supported in a cloud native architecture. Further, the proposed edge orchestrator platform (100) manages dynamic configuration of various edge nodes, creates dynamic network slices, and provides live migration support. In the proposed edge orchestrator platform (100), the applications at the edges can sit in multiple cloud infrastructures, for example, Amazon Web Services (AWS)®, Azure®, on-premises software, the Google Cloud Platform (GCP)®, Telco Cloud®, etc.
[0045] The proposed edge orchestrator platform (100) may support a modular architecture that is highly programmable via network APIs and policy management. The proposed edge orchestrator platform (100) may support real time processing and communication between distributed endpoints, which creates the need for efficient processing at the network edge. The proposed edge orchestrator platform (100) may be implemented in augmented and virtual reality systems, autonomous cars, drones, and IoT deployments in smart cities.
[0046] The edge orchestrator platform (100) may support high degrees of automation and may be able to adapt and perform as traffic volume and characteristics change. The edge orchestrator platform (100) may increase value by reducing cycle times and delivering security, performance, reliability, and cost efficiency.
[0047] The edge orchestrator platform (100) may provision the infrastructure required to set up a day 0 environment. The edge orchestrator platform (100) may operate with Kubernetes (K8s) clusters to deploy workloads, registering each Kubernetes cluster along with its credentials. In the edge orchestrator platform (100), as workloads are placed across different edge nodes, the networks supporting them may also be created and terminated dynamically. As the edge orchestrator platform (100) may support multiple application providers, the edge orchestrator platform (100) may be required to support a multi-tenant environment to keep data and operations separate. The edge orchestrator platform (100) may help in creating composite applications and associating multiple applications. The composite application may be instantiated for different purposes, and this is supported through profiles. The edge orchestrator platform (100) may select the right locations to place a constituent application of the composite application. In order to create additional resources to be deployed, the edge orchestrator platform (100) may modify the resources created so far and delete existing resources. With deployment intent support, the edge orchestrator platform (100) may be able to instantiate and terminate the application and also make upgrades to run the composite application. The edge orchestrator platform (100) may collect various metrics of each service and may provide a way for training and inference to perform closed loop automation.
[0048] The edge orchestrator platform (100) may use fewer resources. The edge orchestrator platform (100) may be operated under cloud native microservices principles. The edge orchestrator platform (100) may use Helm-based Kubernetes deployments. A Helm-based Kubernetes deployment is used to tell Kubernetes how to create or modify instances of the pods that hold a containerized application. The deployments can scale the number of replica pods, enable rollout of updated code in a controlled manner, or roll back to an earlier deployment version if necessary. In the edge orchestrator platform (100), Cloud Native Computing Foundation (CNCF) projects are used for logging, tracing, and metric monitoring, and a stateless design is used for a distributed lock.
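A Helm-driven deployment of this kind might be wrapped as follows. This is a sketch under the assumption that the Helm CLI is installed on the host; the release name, chart path, and kubeconfig path are placeholders, not values from the disclosure:

```python
import subprocess

def helm_upgrade_cmd(release: str, chart: str, kubeconfig: str) -> list:
    # "upgrade --install" creates the release if absent and upgrades it
    # otherwise; "--wait" blocks until the pods report ready
    return ["helm", "upgrade", "--install", release, chart,
            "--kubeconfig", kubeconfig, "--wait"]

def deploy_edge_app(release: str, chart: str, kubeconfig: str) -> None:
    """Install or upgrade a containerized edge application via Helm."""
    subprocess.run(helm_upgrade_cmd(release, chart, kubeconfig), check=True)

def rollback_edge_app(release: str, kubeconfig: str) -> None:
    """Roll the release back to its previous revision if an update misbehaves."""
    subprocess.run(["helm", "rollback", release, "--kubeconfig", kubeconfig],
                   check=True)
```

Building the command in a separate function keeps the deployment logic testable without a live cluster; the rollback path corresponds to the controlled-rollout behaviour described above.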
[0049] The edge orchestrator platform (100) may be able to address a large number of edge clouds and switches at edges, and support various edge controller technologies. The edge orchestrator platform (100) may support infrastructure verification and secure secrets/keys. The edge orchestrator platform (100) has very low latency, high performance, performance determinism, data reduction, and lower resource utilization. The edge orchestrator platform (100) is easy to upgrade and provides quick bring-up of the edge clouds. The edge orchestrator platform (100) provides better traffic redirection and contextual information. Table 1 below indicates the edge orchestrator platform (100) requirements:
Requirement | Edge orchestrator platform (100)
Scalability | Optimization needed to address a large number of edge clouds; Edge Cloud Provider; Parent-Child Open Network Automation Platform (ONAP) (distributed/delegated domain orchestration); fabric control; closed loop
Security | Mutual Transport Layer Security (TLS) with edges; secrets/keys protection; hardware-rooted security; verification of the edge stack; centralized security for Function as a Service (FaaS)
Performance | Containerized Virtual Network Functions (VNFs); Single-Root Input/Output Virtualization (SR-IOV) NIC and Field-Programmable Gate Array (FPGA) NIC support
Edge App provisioning | Create Edge App/Service; instantiate Edge App/Service; provide Edge App/Service status; Edge App/Service analytics
Analytics | Aggregation of statistics and machine learning (ML) analytics for various edge deployments
Container & VNF deployments | Create Cloud-native Network Functions (CNFs)/VNFs; instantiate CNFs/VNFs; CNFs and VNFs analytics
Table 1
[0050] The processor (150) is configured to execute instructions stored in the memory (160) and to perform various processes. The communicator (not shown) is configured for communicating internally between internal hardware components and with external devices via one or more networks. The memory (160) also stores instructions to be executed by the processor (150).
[0051] At least one of the plurality of modules may be implemented through an artificial intelligence (AI) model. A function associated with AI may be performed through a non-volatile memory, a volatile memory, and the processor. The processor (150) may include one or more processors. The one or more processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU) or a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
[0052] The one or more processors control processing of the input data in accordance with a predefined operating rule or AI model stored in the non-volatile memory and the volatile memory. The predefined operating rule or AI model is provided through training or learning; that is, by applying a learning algorithm to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device and/or may be implemented through a separate server/system. The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation on the output of a previous layer using the plurality of weights. Examples of neural networks include, but are not limited to, a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GANs), and deep Q-networks.
[0053] The learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
[0054] Although FIG. 1 shows various hardware components of the edge orchestrator platform (100), it is to be understood that other aspects are not limited thereto. In other implementations, the edge orchestrator platform (100) may include fewer or more components. Further, the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention. One or more components can be combined together to perform the same or a substantially similar function in the edge orchestrator platform (100).
[0055] FIG. 2 is a flow chart (S200) illustrating a method for providing the converged network infrastructure. The operations (S202-S208) are performed by the edge orchestrator platform (100). At S202, the method includes connecting with the plurality of edge nodes (200), the plurality of network controllers (300), and the plurality of user devices (400) connected to the plurality of network controllers (300). At S204, the method includes pushing the one or more resources from the global service orchestrator (1102a) to the edge orchestrator platform (100). At S206, the method includes dynamically allocating the one or more resources from the plurality of edge nodes (200) to one or more applications at the plurality of network controllers (300). At S208, the method includes adjusting the one or more workloads on at least one of the plurality of edge nodes (200) by enabling one or more vendor-independent APIs.
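The four operations S202-S208 can be sketched as a simple control flow. The class and method names below are illustrative assumptions, and the GSO is modelled as any object exposing a push_resources() call:

```python
class EdgeOrchestrator:
    """Hypothetical sketch of the method of FIG. 2 (S202-S208)."""

    def __init__(self, gso):
        self.gso = gso
        self.resources = []
        self.allocations = {}  # app -> edge node

    def connect(self, edge_nodes, controllers, user_devices):  # S202
        self.edge_nodes = list(edge_nodes)
        self.controllers = list(controllers)
        self.user_devices = list(user_devices)

    def pull_from_gso(self):                                   # S204
        # the GSO pushes resources (e.g. images, models) to the platform
        self.resources = self.gso.push_resources()

    def allocate(self, app, node):                             # S206
        self.allocations[app] = node

    def adjust_workloads(self, app, node):                     # S208
        # vendor-independent API call modelled as a plain reassignment
        self.allocations[app] = node


class FakeGSO:
    """Stand-in for the global service orchestrator, for illustration."""

    def push_resources(self):
        return ["vnf-image", "ml-model"]
```

The sketch shows only the sequencing of the four steps; as noted in the following paragraph, the steps may also run in a different order or simultaneously.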
[0056] The various actions, acts, blocks, steps, or the like in the flow chart (S200) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0057] FIG. 3a and FIG. 3b illustrate an overview of an example architecture (1000) implemented for the converged network infrastructure. The architecture (1000) includes an external entity (1102), a self-service and reporting UI portal (1104), an application profiles manager (1106), a policy manager (1108), an application/service market place manager (1110), a workflow manager (1112), a data collection/distribution manager (1114), an application manager (1116), a network function life cycle manager (1118), a machine learning inference manager (1120), a multi access controller manager (1122), an edge management controller (1124), a centralized data center server (1128), the last mile network (1132) and the plurality of user devices (400).
[0058] Further, the external entity (1102) includes a global service orchestrator (1102a), a big data analytics manager (1102b), and a machine learning engine (1102c). The data collection/distribution manager (1114) is coupled and operated with the global service orchestrator (1102a), the big data analytics manager (1102b), and the machine learning engine (1102c). The data collection/distribution manager (1114) may enable an extensive telemetry needed to support logging, monitoring and tracing of edge cloud components. The data collection/distribution manager (1114) may distribute the data to the global service orchestrator (1102a), the big data analytics manager (1102b), and the machine learning engine (1102c). The data collection/distribution manager (1114) may have support for both real time and batch processing of data.
[0059] The self-service and reporting UI portal (1104) may provide the users with an intuitive role based user interface (UI) access to various edge management services. Also, in the self-service and reporting UI portal (1104), a dashboard may facilitate a real time monitoring and reporting of various KPIs running in the edge cloud.
[0060] The global service orchestrator (1102a) may be responsible for maintaining an overall view of a multi-access edge computing (MEC) system based on a deployed MEC host, resources, MEC services, and topology. Further, the global service orchestrator (1102a) may also be responsible for on-boarding of application packages, including checking the integrity and authenticity of the
application packages, validating application rules and requirements and if necessary adjusting them to comply with operator policies, keeping a record of on-boarded application packages, and preparing the virtualization infrastructure manager(s) to handle the applications. Further, the global service orchestrator (1102a) may be responsible for selecting appropriate MEC host(s) for application instantiation based on constraints, such as latency, available resources, and available services. Furthermore, the global service orchestrator (1102a) may be responsible for triggering application instantiation and termination and triggering application relocation as needed when supported.
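The constraint-based MEC host selection described above may be sketched as follows. The host record fields (latency_ms, free_cpu, services) are illustrative assumptions; an actual orchestrator would draw them from its topology and resource inventory.

```python
# Sketch of MEC host selection under latency, resource, and service
# constraints; the host schema is hypothetical.
def select_mec_host(hosts, max_latency_ms, cpu_needed, required_services):
    """Return the lowest-latency host satisfying all constraints, or None."""
    candidates = [
        h for h in hosts
        if h["latency_ms"] <= max_latency_ms
        and h["free_cpu"] >= cpu_needed
        and set(required_services) <= set(h["services"])
    ]
    return min(candidates, key=lambda h: h["latency_ms"]) if candidates else None
```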
[0061] The big data analytics manager (1102b) may handle the large volumes of data received from the plurality of edge nodes (200) and the plurality of user devices (400) in the architecture (1000), so as to reveal trends, hidden patterns, and unseen correlations, and to achieve automated decision making in the architecture (1000). The operations and functions of the machine learning engine (1102c) are already explained in FIG. 1.
[0062] The workflow manager (1112) may be coupled and operated with the application manager (1116), the network function life cycle manager (1118), and the machine learning inference manager (1120). The application manager (1116) manages various application details. The network function life cycle manager (1118) manages a function life cycle of a network. The machine learning inference manager (1120) updates the machine learning models running on the plurality of edge nodes (200) and manages different functions of the plurality of edge nodes (200). Further, the machine learning inference manager (1120) may provide a catalog service for the deployment of edge inference models on both enterprise user and service provider edges. The multi access controller manager (1122) may be coupled and operated with the network function life cycle manager (1118) and the machine learning inference manager (1120). Further, the application manager (1116) may be coupled and operated with the edge management controller (1124).
[0063] The application profiles manager (1106) may handle the application profiles received from the plurality of edge nodes (200) and the
plurality of user devices (400). The policy manager (1108) defines one or more policies for various entities operated in the architecture (1000). Further, the policy manager (1108) may enable business-level, rules-driven support, which is required to apply control policies to manage edge applications/services and infrastructure resources. Also, the policy manager (1108) may enable support for composite applications that may be instantiated for different purposes through application profiles. The one or more policies are defined by the service provider and/or a user. The application/service market place manager (1110) may handle a list of enabled services from which users can provision resources in the plurality of edge nodes (200).
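A rules-driven policy check of the kind the policy manager (1108) applies may be sketched as below. The rule format (per-resource limits keyed by name) is an assumption for illustration only.

```python
# Sketch of a rules-driven policy evaluation; the rule schema is hypothetical.
def evaluate_policies(policies, request):
    """Return (allowed, reasons) for a resource request against all policies."""
    reasons = []
    for policy in policies:
        limit = policy.get("max_" + request["resource"])
        if limit is not None and request["amount"] > limit:
            reasons.append(
                f"{policy['name']}: {request['resource']} over limit {limit}"
            )
    return (not reasons, reasons)
```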
[0064] The multi access controller manager (1122) may include a centralized configuration manager (1122a), a topology manager (1122b), an intent manager (1122c), a service provider registry manager (1122d), a state event manager (1122e) and a high availability manager (1122f). The intent manager (1122c) may enable support for placement of workloads in the right edge locations in a completely intent-driven manner. With the intent derived from the user request, applications can be dynamically deployed or modified in the edge locations.
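Intent-driven placement may be sketched as below. The intent schema (region, allowed tiers) and the tier ordering are illustrative assumptions, not part of the disclosure.

```python
# Sketch of intent-driven workload placement; the intent schema is hypothetical.
def place_by_intent(intent, edge_locations):
    """Rank edge locations matching the declared intent (region and tier)."""
    matches = [
        loc for loc in edge_locations
        if loc["region"] == intent["region"]
        and loc["tier"] in intent.get(
            "allowed_tiers", ["on-premise", "access", "regional"]
        )
    ]
    # Prefer the tier closest to the user: on-premise, then access, then regional.
    order = {"on-premise": 0, "access": 1, "regional": 2}
    return sorted(matches, key=lambda loc: order[loc["tier"]])
```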
[0065] The topology manager (1122b) determines which topology is suitable for an application. The options may include, but are not limited to, mesh networking protocols, such as Zigbee or DigiMesh, as well as point-to-point or point-to-multipoint support. The service provider registry manager (1122d) may provide registry information along with the location, status, and configuration properties of the edge node.
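A topology decision of this kind may be sketched as a simple rule, with the caveat that the decision criteria below (node count, redundancy need) are illustrative assumptions; a real topology manager would weigh many more factors.

```python
# Sketch of a topology decision; the criteria and thresholds are illustrative.
def choose_topology(node_count, needs_redundancy):
    """Pick a network topology for an application's edge deployment."""
    if node_count == 2:
        return "point-to-point"
    if needs_redundancy:
        return "mesh"  # e.g., Zigbee- or DigiMesh-style mesh protocols
    return "point-to-multipoint"
```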
[0066] The edge management controller (1124) may include a multi cloud manager (1124a), a service mesh manager (1124b), and a cluster manager (1124c). The edge management controller (1124) may be operated and coupled with the edge node (200) and the centralized data center server (1128). The centralized data center server (1128) may be, for example, but not limited to, the Amazon Web Service (AWS)®, Azure®, On-premises software, a Google Cloud Platform (GCP)®, and Telco Cloud®. The edge node (200) may be, for example,
but not limited to, the on-premise server edges (1126a), the access server edges (1126b), and the regional server edges (1126c). The regional server edges (1126c) may be implemented using Kubernetes and OpenStack. Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. OpenStack is a standard cloud computing platform, mostly deployed as infrastructure-as-a-service in both public and private clouds, where virtual servers and other resources are made available to users. In an example, the access server edge (1126b) may provide a service that gives users a trusted connection for inbound and outbound traffic.
[0067] The multi access controller manager (1122) is configured to switch the network connection, for example, from a first network controller (e.g., the open RAN controller (1130c)) to a second network controller (e.g., the pWiFi controller (1130b)) in the plurality of network controllers. The plurality of network controllers may be, for example, but not limited to, the open RAN controller (1130c), the pWiFi controller (1130b), the pFTTx controller (1130a), an SDWAN controller (1130d), a network core controller (1130e) and an open transport controller (1130f). The pFTTx controller (1130a) may be connected with the pFTTx network (1132a). The pWiFi controller (1130b) may be coupled with the pWiFi network (1132b). The open RAN controller (1130c) may be coupled with the radio access network (1132c). The SDWAN controller (1130d) may be coupled with the SDWAN network (1134). The open transport controller (1130f) may be coupled with the open transport network (1136).
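The controller-switching behavior may be sketched as follows. The controller names echo the figure labels; the attach/switch API itself is a hypothetical simplification of what the multi access controller manager (1122) would expose.

```python
# Sketch of switching a user device between last-mile network controllers;
# the API is hypothetical, the controller names follow the figure labels.
class MultiAccessController:
    def __init__(self, controllers):
        self.controllers = set(controllers)
        self.active = {}  # device_id -> controller name

    def attach(self, device_id, controller):
        if controller not in self.controllers:
            raise ValueError(f"unknown controller: {controller}")
        self.active[device_id] = controller

    def switch(self, device_id, new_controller):
        # Switch the network connection, e.g., from open RAN to pWiFi.
        old = self.active.get(device_id)
        self.attach(device_id, new_controller)
        return old, new_controller
```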
[0068] Further, the architecture (1000) may include an edge management support component, an edge slicing support component, a multi-tenant support component, a network functions deployment support component and an application management and deployment support component. The edge management support component supports management of various types of edge infrastructure, e.g., on-premise, telco network edge, and cloud provider edges such as AWS, Azure and GCP. The edge management support component supports Day 0 infrastructure provisioning and Day 1 provisioning of Kubernetes clusters in the edge node. This also enables support for dynamic provisioning of large scale clusters, network and security management.
[0069] The edge slicing support component supports configuration of dynamic slicing requirements across edge deployments for various consumer edge services. The multi-tenant support component supports multiple application and service providers with optimized common edge infrastructure resources, which keeps data and infrastructure operations separate across enterprise users.
[0070] The network functions deployment support component enables the deployment and management of network functions, for example, a user plane function (UPF), to enable edge application traffic steering to the core network services. Similarly, this service enables the support for other network service functions required to leverage edge computing environments. The application management and deployment support component enables the support for composite cloud native application deployment and its lifecycle management. This component also accelerates the developer velocity of dynamic and consistent deployment across edge infrastructures.
[0071] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0072] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
[0073] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0074] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0075] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0076] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the
processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0077] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0078] Conditional language used herein, such as, among others, "can," "may," "might," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain alternatives include, while other alternatives do not include,
certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more alternatives or that one or more alternatives necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular alternative. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list.
[0079] Disjunctive language such as the phrase "at least one of X, Y, Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain alternatives require at least one of X, at least one of Y, or at least one of Z to each be present.
[0080] While the detailed description has shown, described, and pointed out novel features as applied to various alternatives, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain alternatives described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
CLAIMS
We Claim:
1. An edge orchestrator platform (100) for providing a converged network
infrastructure, wherein the edge orchestrator platform (100) is connected to a global
service orchestrator (1102a), the edge orchestrator platform (100) comprising:
a multi access controller (110) connected to a plurality of edge nodes (200), a plurality of network controllers (300), and a plurality of user devices (400) connected to the plurality of network controllers (300), wherein the plurality of network controllers (300) corresponds to one or more last mile networks (1132); and
an intelligent allocation unit (120) for dynamic allocation of resources from the plurality of edge nodes (200) to one or more applications at the plurality of network controllers (300).
2. The edge orchestrator platform (100) of claim 1, further comprising:
a switching unit (130) configured to switch a network connection from a first network controller to a second network controller in the plurality of network controllers (300).
3. The edge orchestrator platform (100) of claim 1, further comprising:
a resource unit (140) configured to push one or more network resources from the global service orchestrator (1102a) to at least one of the plurality of edge nodes (200).
4. The edge orchestrator platform (100) of claim 1 acts as a local orchestrator to the plurality of edge nodes (200) and the plurality of network controllers (300).
5. The edge orchestrator platform (100) of claim 1, wherein the one or more last mile networks (1132) include at least one of a passive optical network (1132a), a radio access network (1132c), and a Wireless Fidelity (Wi-Fi) network (1132b).
6. The edge orchestrator platform (100) of claim 1 adjusts one or more workloads on at least one of the plurality of edge nodes (200) by enabling one or more vendor independent application programming interfaces (APIs).
7. A method for providing a converged network infrastructure using an edge orchestrator platform (100), wherein the edge orchestrator platform (100) is connected to a global service orchestrator (1102a), the method comprising:
connecting, by the edge orchestrator platform (100), with a plurality of edge nodes (200), a plurality of network controllers (300), and a plurality of user devices (400) connected to the plurality of network controllers (300), wherein the plurality of network controllers (300) corresponds to one or more last mile networks (1132).
8. The method of claim 7, further comprising:
pushing, by the edge orchestrator platform (100), one or more resources from the global service orchestrator (1102a) to the edge orchestrator platform (100).
9. The method of claim 7, further comprising:
dynamically allocating, by the edge orchestrator platform (100), one or more resources from the plurality of edge nodes (200) to one or more applications at the plurality of network controllers (300).
10. The method of claim 7, further comprising:
communicating, by the edge orchestrator platform (100), with the plurality of network controllers (300), wherein the plurality of network controllers (300) corresponds to at least one of a passive optical network (1132a), a radio access network (1132c), and a Wi-Fi network (1132b); and
dynamically selecting, by the edge orchestrator platform (100), a network controller from the plurality of network controllers (300) for facilitating one or more applications.
11. The method of claim 7, further comprising:
adjusting, by the edge orchestrator platform (100), one or more workloads on at least one of the plurality of edge nodes (200) by enabling one or more vendor independent application programming interfaces (APIs).