Abstract: The present invention discloses a method for connecting a plurality of data centers (200a-200n) in a non-hierarchical network (1000) using a two-cut-not-out network configuration. The method includes identifying, by a network node (100), a plurality of physical connection paths among a plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). Further, the method includes selecting, by the network node (100), at least three paths based on a predefined path selection criterion. Further, the method includes connecting, by the network node (100), the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the at least three paths. Further, the method includes creating, by the network node (100), the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the at least three paths.
FIELD OF INVENTION
[0001] The present disclosure relates to a method and system for determining optimal communication path arrangements for a data center.
BACKGROUND OF INVENTION
[0002] Currently, data center (DC) operators use their existing telecommunications (Telco)/Internet service provider (ISP) network bandwidth and fiber cores across a city for interconnectivity with other DCs or cable landing stations. Telco networks are designed per mobility requirements: they run mostly in a ring topology and suffer from network isolation in case of a dual cut in the network. In addition to the number of midspans in the Telco networks and regular planned outages, such networks face frequent fiber cuts. Over time, most sections have been converted from underground (UG) to aerial due to road and utility expansion activity in city areas.
[0003] Even though Telcos assure exclusive networks and links, in many cases a lot of on-ground elements, such as the Fiber Distribution Management System (FDMS), Dense Wavelength Division Multiplexing (DWDM) sub-rack, Optical Transport Network (OTN) controller shelf, and Multiprotocol Label Switching (MPLS) sub-rack, are shared, which carries a risk of outages due to hardware failure and human error. Immediate bandwidth provisioning is always a constraint for a Telco, as the same network resource is shared by mobility as well as small, medium and large enterprise connectivity. There is always an internal preference with respect to escalation, network criticality and pricing.
[0004] Further, various network planning methods for data centers are available. In an example, network planning is performed using survey data such as Light Detection and Ranging (LiDAR), 360-degree camera and ground penetrating radar (GPR) data. Further, determining an optimal path between two geographical locations based on different models, such as a cost model or a repair rate model, is also available. However, the conventional models cannot ensure 99.99% uptime of the network, which is an essential component of data center design.
[0005] Thus, it is desired to address the above-mentioned disadvantages or other shortcomings, or at least provide a useful alternative.
OBJECT OF INVENTION
[0006] The principal object of the present invention is to provide a method and system for providing a network planning technique for interconnecting data centers by using a minimum of three paths and creating a non-hierarchical network including the data centers to ensure better uptime of the non-hierarchical network.
[0007] Another object of the present invention is to choose a path selection criterion (e.g., a shortest physical path, an end to end distinct physical path, a zero overlapping path or a no crisscross path) during physical path selection.
[0008] Another object of the present invention is to select a path at which the chance of future digging or development work by other authorities is minimal.
SUMMARY
[0009] Accordingly, the present invention herein discloses a method for connecting a plurality of data centers in a non-hierarchical network using a two-cut-not-out network configuration. The method includes identifying, by the network node, a plurality of physical connection paths among the plurality of network nodes associated with the plurality of data centers. Further, the method includes selecting, by the network node, at least three paths based on a predefined path selection criterion. Further, the method includes connecting, by the network node, the plurality of network nodes associated with the plurality of data centers using the at least three paths. Further, the method includes creating, by the network node, the non-hierarchical network among the plurality of network nodes associated with the plurality of data centers using the at least three paths.
[0010] The three paths are physical paths. The three paths are a main line, a restoration line and a standby line, wherein the main line primarily carries data, the restoration line takes over from the main line during a main line disruption, and the standby line carries the network load when the main line and the restoration line are disrupted.
[0011] The two-cut-not-out network configuration ensures greater than 99.99% network uptime in the non-hierarchical network.
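For context only (this arithmetic is illustrative and not part of the claimed subject matter), the uptime benefit of three parallel paths can be sketched as follows; the per-path availability value and the assumption that path failures are independent are hypothetical:

```python
def parallel_availability(per_path, n_paths=3):
    """Availability of n_paths independent parallel paths: the network is
    down only when every path is down simultaneously."""
    return 1.0 - (1.0 - per_path) ** n_paths
```

Under this independence assumption, three paths that are each only 99% available would together yield roughly 99.9999% availability, comfortably above the 99.99% target.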
[0012] The predefined path selection criterion comprises at least one of a shortest physical path, an end to end distinct physical path, a zero overlapping path, and a no crisscross path.
[0013] Further, the method includes validating feasibility of the plurality of physical connection paths, wherein feasibility of the plurality of physical connection paths is validated by preparing a data map of a physical infrastructure between the plurality of network nodes associated with the plurality of data centers based on at least one of a LiDAR, a camera and a GPR, assigning, to each physical connection path, at least one of a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient, computing a cumulative feasibility score of each physical connection path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient, and selecting the at least three paths based on the cumulative feasibility score of each path.
[0014] The deployment feasibility coefficient is determined by determining an available physical path deployment clearance by combining an area available for physical path deployment obtained from at least one of the LiDAR, the camera and the GPR, comparing the obtained physical path deployment area with a standard physical path deployment clearance required for the physical path deployment, and modifying a difference between the obtained physical path deployment area and the standard physical path deployment clearance to determine the deployment feasibility coefficient.
[0015] The operation feasibility coefficient is determined by determining an available operation clearance by combining an area available for operation obtained from at least one of the LiDAR, the camera and the GPR, comparing the combined area available for operation with a standard clearance required for operations to be performed on the physical path, and modifying a difference between the obtained area available for operation and the standard operation clearance to determine the operation feasibility coefficient.
[0016] The management feasibility coefficient is determined by determining an available maintenance clearance by combining an area available for maintenance obtained from at least one of the LiDAR, the camera and the GPR, comparing the obtained area for maintenance with a standard clearance required for maintenance to be performed on the physical path, and modifying a difference between the obtained area available for maintenance and the standard maintenance clearance to determine the maintenance feasibility coefficient.
[0017] A feasibility coefficient for each of the physical connection paths is obtained by combining the deployment feasibility coefficient, the operation feasibility coefficient and the maintenance feasibility coefficient. The feasibility coefficient is used to determine a feasible deployment path.
[0018] Accordingly, the present invention herein discloses a non-hierarchical network for connecting a plurality of data centers using a two-cut-not-out network configuration. The non-hierarchical network includes a plurality of network nodes that connect directly to other network nodes and cooperate with one another to efficiently route data. Each network node comprises a processor, a memory and a communication path determination controller. The communication path determination controller is configured to identify a plurality of physical connection paths among the plurality of network nodes associated with the plurality of data centers. Further, the communication path determination controller is configured to select at least three paths based on a predefined path selection criterion. Further, the communication path determination controller is configured to connect the plurality of network nodes associated with the plurality of data centers using the at least three paths. Further, the communication path determination controller is configured to create the non-hierarchical network among the plurality of network nodes associated with the plurality of data centers using the at least three paths.
[0019] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0020] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0021] FIG. 1a is an overview of a non-hierarchical network for connecting a plurality of data centers;
[0022] FIG. 1b illustrates a block diagram of a network node;
[0023] FIG. 1c is an example illustration of a logical network; and
[0024] FIG. 2 is a flow chart illustrating a method for determining an optimal path arrangement for the data center.
DETAILED DESCRIPTION OF INVENTION
[0025] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced with or without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0026] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0027] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0028] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0029] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0030] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0031] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0032] Conditional language used herein, such as, among others, "can," "may," "might," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list.
[0033] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0034] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0035] Accordingly, the present invention herein achieves a method for connecting a plurality of data centers in a non-hierarchical network using a two-cut-not-out network configuration. The method includes identifying, by a network node, a plurality of physical connection paths among a plurality of network nodes associated with the plurality of data centers. Further, the method includes selecting, by the network node, at least three paths based on a predefined path selection criterion. Further, the method includes connecting, by the network node, the plurality of network nodes associated with the plurality of data centers using the at least three paths. Further, the method includes creating, by the network node, the non-hierarchical network among the plurality of network nodes associated with the plurality of data centers using the at least three paths.
[0036] Unlike conventional methods and systems, the proposed method can be used to determine the optimal path arrangement for the data center with low latency, better bandwidth usage, unconstrained capacity and enhanced network lifetime.
[0037] Referring now to the drawings, and more particularly to FIGS. 1a through 2, there are shown preferred embodiments.
[0038] FIG. 1a is an overview of a non-hierarchical network (1000) for connecting a plurality of data centers (200a-200n). The non-hierarchical network (1000) includes a plurality of network nodes (100a-100n) and the plurality of data centers (200a-200n). Each of the plurality of network nodes (100a-100n) may be a server. The plurality of network nodes (100a-100n) are connected directly to other network nodes and cooperate with one another to efficiently route data.
[0039] A network node (100) from the plurality of network nodes (100a-100n) is configured to identify a plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) and select at least three paths based on a predefined path selection criterion.
[0040] The three paths are physical paths. The three paths can be a main line, a restoration line and a standby line, where the main line primarily carries data, the restoration line takes over from the main line during a main line disruption, and the standby line carries the network load when the main line and the restoration line are disrupted. The predefined path selection criterion can be, for example, but not limited to, a shortest physical path, an end to end distinct physical path, a zero overlapping path, and a no crisscross path.
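As a non-limiting sketch of the zero overlapping and end to end distinct criteria above, candidate paths may be modeled as sequences of physical segment identifiers and selected greedily, shortest first; both the data model and the greedy strategy are assumptions for illustration only:

```python
def select_three_paths(candidates):
    """Greedily pick three mutually segment-disjoint paths, shortest first.

    candidates: list of (segment_ids, length_km) tuples (hypothetical model).
    Segment disjointness stands in for the 'zero overlapping path' criterion;
    sorting by length stands in for the 'shortest physical path' criterion.
    """
    chosen = []
    for segments, length in sorted(candidates, key=lambda c: c[1]):
        # Accept the path only if it shares no segment with paths already chosen.
        if all(not set(segments) & set(p[0]) for p in chosen):
            chosen.append((segments, length))
        if len(chosen) == 3:
            break
    return chosen
```

A real deployment would also fold in the feasibility scores described later, but the disjointness test above is the core of the overlap check.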
[0041] By using the three paths, the network node (100) is configured to connect the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). After connecting the plurality of network nodes, the network node (100) is configured to create the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the three paths.
[0042] Further, the network node (100) is configured to validate feasibility of the plurality of physical connection paths. The feasibility of the plurality of physical connection paths is validated by preparing a data map of a physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) based on a Light Detection and Ranging (LiDAR), a camera and a ground penetrating radar (GPR), assigning a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient to each physical connection path, determining a cumulative feasibility score of each physical connection path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient, and selecting the three paths based on the cumulative feasibility score of each path.
[0043] The deployment feasibility coefficient is determined by determining an available physical path deployment clearance by combining an area available for the physical path deployment obtained from the LiDAR, the camera and the GPR, comparing the obtained physical path deployment area with a standard physical path deployment clearance required for the physical path deployment, and modifying a difference between the obtained physical path deployment area and the standard physical path deployment clearance to determine the deployment feasibility coefficient.
[0044] Further, the operation feasibility coefficient is determined by determining an available operation clearance by combining an area available for operation obtained from the LiDAR, the camera and the GPR, comparing the combined area available for operation with a standard clearance required for operations to be performed on the physical path, and modifying a difference between the obtained area available for operation and the standard operation clearance to determine the operation feasibility coefficient.
[0045] The management feasibility coefficient is determined by determining an available maintenance clearance by combining the area available for maintenance obtained from the LiDAR, the camera and the GPR, comparing the obtained area for maintenance with a standard clearance required for maintenance to be performed on the physical path, and modifying a difference between the obtained area available for maintenance and the standard maintenance clearance to determine the maintenance feasibility coefficient.
[0046] Further, a feasibility coefficient for each of the physical connection paths is obtained by combining the deployment feasibility coefficient, the operation feasibility coefficient and the maintenance feasibility coefficient. The feasibility coefficient is used to determine the feasible deployment path.
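The clearance comparison described in the preceding paragraphs may be sketched as follows. The disclosure only states that the difference between the available and standard clearances is modified, so the normalization of that difference into a 0-1 coefficient and the unweighted averaging of the three coefficients are hypothetical choices:

```python
def clearance_coefficient(available_m2, standard_m2):
    """Map the margin between available and required clearance into [0, 1].

    Exactly meeting the standard clearance yields 0.5; twice the required
    clearance (or more) yields 1.0 (both conventions are assumed)."""
    margin = available_m2 - standard_m2
    return max(0.0, min(1.0, 0.5 + margin / (2.0 * standard_m2)))

def cumulative_feasibility(dep, op, mgmt):
    """Combine the three per-path coefficients; an unweighted mean is assumed."""
    return (dep + op + mgmt) / 3.0
```

Paths would then be ranked by `cumulative_feasibility` and the three highest-scoring distinct paths retained.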
[0047] FIG. 1b illustrates a block diagram of the network node (100). The network node (100) includes a processor (110), a memory (120), a communication path determination controller (130), and a communicator (140). The processor (110) is coupled with the memory (120), the communication path determination controller (130), and the communicator (140).
[0048] The communication path determination controller (130) is configured to identify the plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) and select the three paths based on the predefined path selection criterion.
[0049] Using the three paths, the communication path determination controller (130) is configured to connect the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). After connecting the plurality of network nodes (100a-100n), the communication path determination controller (130) is configured to create the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the three paths. Further, the communication path determination controller (130) is configured to validate feasibility of the plurality of physical connection paths. Further, the processor (110) is configured to execute instructions stored in the memory (120) and to perform various processes. The communicator (140) is configured for communicating internally between internal hardware components and with external devices via one or more networks or the data centers (200a-200n).
[0050] FIG. 1c is an example scenario in which a logical network is depicted. The logical network depicts 8 sites (A-H), 14 links, 265 route kilometers, and a minimum of 3 dedicated fiber paths for each Point of Presence (POP), irrespective of geographical separation.
[0051] The three dedicated paths between the data centers (200a-200n) are the main line (path), the restoration line (path) and the standby line (path). The main line or path is used as the primary data carrier. The restoration line or path takes over from the main line during a main line disruption. The standby line or path takes the network load when the main line and the restoration line are disrupted. The three paths combine to form a network that satisfies the 2-CNO (two cut not out) configuration. The 2-CNO configuration ensures that the network is up 99.99% of the time, with the standby link always up even if the main and restoration paths are down. The 2-CNO configuration is used for creating the non-hierarchical network (1000) between the plurality of network nodes (100a-100n) and the plurality of data centers (200a-200n), to efficiently route data and obtain maximum network uptime.
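The 2-CNO property can be verified exhaustively on a small topology such as the one in FIG. 1c by removing every pair of links and checking that the network stays connected. The brute-force sketch below is illustrative only and is not the claimed method:

```python
from itertools import combinations

def _connected(nodes, edges, removed):
    """Reachability over the surviving links (simple depth-first search)."""
    adjacency = {n: set() for n in nodes}
    for u, v in edges:
        if (u, v) not in removed:
            adjacency[u].add(v)
            adjacency[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for neighbour in adjacency[stack.pop()] - seen:
            seen.add(neighbour)
            stack.append(neighbour)
    return seen == set(nodes)

def two_cut_not_out(nodes, edges):
    """True if the network stays connected after ANY two simultaneous cuts."""
    return all(_connected(nodes, edges, set(pair))
               for pair in combinations(edges, 2))
```

A network passes this check exactly when it is at least 3-edge-connected, which is why a minimum of three disjoint paths per node pair is required.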
[0052] The non-hierarchical network (1000) is configured to identify a path selection criterion during a physical path selection of at least two data centers from the plurality of data centers (200a-200n). In general, the plurality of data centers (200a-200n) comprises a plurality of servers arranged in the non-hierarchical network (1000). The plurality of servers is also known as a server farm. The plurality of data centers (200a-200n) with the server farms is essential to the functioning of information handling systems in different applications and industrial sectors. The plurality of data centers (200a-200n) commonly comes in various structures or architectures and is commonly set up in multi-tier architectures. In the plurality of data centers (200a-200n), nodes or servers are arranged in various topologies. Data integrity and data processing speed are essential requirements for today's applications. Therefore, it is becoming increasingly necessary to be able to detect data congestion in the plurality of data centers (200a-200n) and select data paths or information paths through the network (1000) to increase the speed of processing a request.
[0053] The path selection criterion is determined based on at least one of the shortest physical path, the end to end distinct physical path, the zero overlapping path, and the no crisscross path. Further, the non-hierarchical network (1000) is configured to connect the at least two data centers from the plurality of data centers (200a-200n) with each other using a minimum of three distinct paths. The minimum three distinct paths originate at the plurality of data centers (200a-200n) or terminate at the plurality of data centers (200a-200n).
[0054] The at least two data centers from the plurality of data centers (200a-200n) are connected with each other by creating a map using LiDAR, camera and GPR data, selecting roads on the map based on the feasibility of deploying an underground link using the data captured from the different sensors, selecting locations of the at least two data centers from the plurality of data centers (200a-200n) on the map, and connecting the at least two data centers (200a-200n) with each other using the selected locations.
[0055] Alternatively, the at least two data centers from the plurality of data centers (200a-200n) are connected with each other by determining the roads between the at least two data centers using the LiDAR, determining the roads between the at least two data centers using the camera, and determining the roads between the at least two data centers using the GPR; obtaining data on a map by superimposing the roads determined using the LiDAR, the camera and the GPR; determining a feasibility of network deployment, operations and management using the obtained data; and connecting the at least two data centers (200a-200n) with each other based on the determined feasibility.
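A minimal sketch of filtering roads on the superimposed survey map might look as follows; the record fields and the clearance thresholds are hypothetical, as the disclosure does not fix concrete values:

```python
def feasible_roads(surveys, min_free_depth_m=1.2, min_structure_gap_m=0.5):
    """Keep roads whose superimposed LiDAR/camera/GPR data meets the
    assumed clearances for underground deployment.

    surveys: {road_id: {"gpr_free_depth_m": float, "structure_gap_m": float}}
    where gpr_free_depth_m is the obstruction-free depth found by GPR and
    structure_gap_m is the distance to the nearest above-ground structure.
    """
    return sorted(
        road for road, data in surveys.items()
        if data["gpr_free_depth_m"] >= min_free_depth_m
        and data["structure_gap_m"] >= min_structure_gap_m
    )
```

The surviving roads would then be scored with the feasibility coefficients described above before the final three paths are chosen.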
[0056] The feasibility of deployment of the network (1000), operations and management is determined by determining a route selection procedure, identifying at least one route for connecting the at least two data centers (200a-200n) using the determined route selection procedure, and determining the feasibility of deployment of network, operations and management based on at least one of the identified routes. For each path, a deployment feasibility coefficient is determined. The cumulative feasibility score is determined based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient. The deployment, operation and management feasibility coefficients are determined by determining available physical paths using a plurality of sensors, comparing the obtained data with the standard physical clearances required for deployment, operation and management, and then scaling down the difference between the two quantities. The three coefficients are combined to find the feasibility score and hence the feasible deployment path.
[0057] Further, the non-hierarchical network (1000) is configured to create a mesh type network between the at least two connected data centers (200a-200n).
[0058] According to an exemplary embodiment, as shown in FIG. 1c, a logical network is determined among eight data center locations. Every path/road between the data centers is surveyed using a 360-degree camera, LiDAR and ground penetrating radar. All the surveyed data are superimposed on a map. The routes are then checked for feasibility of underground fiber deployment. The presence of structures near the deployment path, the distance of nearby structures from the deployment path and the presence of underground structures in the deployment path are used to check the feasibility of a route. The data obtained from the three surveying systems are checked against predetermined data to determine feasibility. The predetermined data may include, but are not limited to, the distance of a nearby structure from the deployment path and the depth of an underground structure from the deployment path, among other data. After obtaining the feasible paths, three or more paths originating or terminating at the data center are chosen based on predetermined logic, which includes, but is not limited to, distinct paths with zero overlap or crisscross, the shortest path, and future digging and development work. Information on future digging and development work is obtained from municipal and highway authorities and is mapped onto the already created map to refine the feasibility of network deployment. The paths for every data center (200a-200n) to be connected in the network (1000) are determined using the above-mentioned logic. Hence, a mesh network is obtained among the data centers (200a-200n) that ensures a network uptime of 99.99%.
[0059] FIG. 2 is a flow chart (200) illustrating a method for determining the optimal path arrangement for the data centers (200a-200n), according to an embodiment of the present invention. The operations (S202-S210) are performed by the communication path determination controller (130). At S202, the method includes identifying the plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). At S204, the method includes selecting the at least three paths based on the predefined path selection criteria. At S206, the method includes connecting the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the at least three paths. At S208, the method includes creating the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the at least three paths. At S210, the method includes validating feasibility of the plurality of physical connection paths.
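The S202-S210 sequence can be sketched end to end as follows. All helper functions here are hypothetical stand-ins for the controller (130) operations; the pairwise-path enumeration and dictionary-based mesh representation are illustrative assumptions.

```python
def identify_physical_paths(nodes):                     # S202
    # Placeholder: in practice, paths come from the survey data;
    # here we enumerate one candidate path per node pair.
    return [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]


def select_paths(paths, k=3):                           # S204
    # Placeholder for the predefined path selection criteria.
    return paths[:k]


def build_network(nodes, selected):                     # S206 / S208
    # Connect the nodes over the selected paths into a
    # non-hierarchical (mesh) adjacency structure.
    network = {n: set() for n in nodes}
    for a, b in selected:
        network[a].add(b)
        network[b].add(a)
    return network


def validate_feasibility(paths):                        # S210
    # Placeholder check; real validation uses the survey-based
    # feasibility coefficients described in paragraph [0056].
    return all(a != b for a, b in paths)


nodes = ["DC1", "DC2", "DC3"]
paths = identify_physical_paths(nodes)
selected = select_paths(paths)
mesh = build_network(nodes, selected)
assert validate_feasibility(paths)
```

With three data centers, the resulting adjacency gives every node a direct connection to every other node, i.e. a full mesh.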
[0060] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0061] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
CLAIMS
We claim:
1. A method for connecting a plurality of data centers (200a-200n) in a non-hierarchical network (1000) using a two-cut-not-out network configuration, wherein the non-hierarchical network (1000) has a plurality of network nodes (100a-100n) that connects directly to other network nodes and cooperate with one another to efficiently route data, the method comprising:
identifying, by a network node (100) from the plurality of network nodes (100a-100n), a plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n);
selecting, by the network node (100), at least three paths based on a predefined path selection criterion;
connecting, by the network node (100), the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the at least three paths; and
creating, by the network node (100), the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the at least three paths.
2. The method as claimed in claim 1, wherein the at least three paths are physical paths.
3. The method as claimed in claim 1, wherein the two-cut-not-out network configuration ensures greater than 99.99% network uptime in the non-hierarchical network (1000).
4. The method as claimed in claim 1, wherein the at least three paths are a main line, a restoration line and a standby line, wherein the main line primarily carries data, the restoration line takes over from the main line during a main line disruption, and the standby line controls the network load when the main line and the restoration line are disrupted.
5. The method as claimed in claim 1, wherein the predefined path selection criterion comprises at least one of a shortest physical path, an end to end distinct physical path, a zero overlapping path, and a no crisscross path.
6. The method as claimed in claim 1, further comprising validating feasibility of the plurality of physical connection paths, wherein feasibility of the plurality of physical connection paths is validated by:
preparing a data map of a physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) based on at least one of Light Detection and Ranging (LiDAR), a camera and a ground penetrating radar (GPR);
assigning, to each physical connection path, at least one of a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient;
determining a cumulative feasibility score of each physical connection path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient; and selecting the at least three paths based on the cumulative feasibility score of each path.
7. The method as claimed in claim 6, wherein the deployment feasibility coefficient is determined by:
determining available physical path deployment clearance by combining an area available for a physical path deployment obtained from at least one of the LIDAR, the camera and the GPR;
comparing the obtained physical path deployment area with a standard physical path deployment clearance required for the physical path deployment; and
modifying a difference between the obtained physical path deployment area and the standard physical path deployment area to determine the deployment feasibility coefficient.
8. The method as claimed in claim 6, wherein the operation feasibility coefficient is determined by:
determining available operation clearance by combining an area available for operation obtained from at least one of the LIDAR, the camera and the GPR;
comparing the combined area available for operation with a standard clearance required for operation to be performed on the physical path; and
modifying a difference between the obtained area available for operation and the standard operation clearance to determine the operation feasibility coefficient.
9. The method as claimed in claim 6, wherein the management feasibility coefficient is determined by:
determining an available maintenance clearance by combining an area available for maintenance obtained from at least one of the LIDAR, the camera and the GPR;
comparing the obtained area available for maintenance with a standard clearance required for maintenance to be performed on a physical path; and
modifying a difference between the obtained area available for maintenance and the standard maintenance clearance to determine the management feasibility coefficient.
10. The method as claimed in claim 6, wherein a feasibility coefficient for each of the physical connection paths is obtained by combining the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient, wherein the feasibility coefficient is used to determine a feasible deployment path.
| # | Name | Date |
|---|---|---|
| 1 | 202011039200-FORM 18 [03-09-2024(online)].pdf | 2024-09-03 |
| 2 | 202011039200-STATEMENT OF UNDERTAKING (FORM 3) [10-09-2020(online)].pdf | 2020-09-10 |
| 3 | 202011039200-POWER OF AUTHORITY [10-09-2020(online)].pdf | 2020-09-10 |
| 4 | 202011039200-Proof of Right [12-01-2021(online)].pdf | 2021-01-12 |
| 5 | 202011039200-COMPLETE SPECIFICATION [10-09-2020(online)].pdf | 2020-09-10 |
| 6 | 202011039200-FORM 1 [10-09-2020(online)].pdf | 2020-09-10 |
| 7 | 202011039200-DECLARATION OF INVENTORSHIP (FORM 5) [10-09-2020(online)].pdf | 2020-09-10 |
| 8 | 202011039200-DRAWINGS [10-09-2020(online)].pdf | 2020-09-10 |
| 9 | 202011039200-FER.pdf | 2025-10-03 |
| 10 | 202011039200_SearchStrategyNew_E_SearchHistory(1)E_25-09-2025.pdf | |