Abstract: The present invention discloses a method and system for connecting a plurality of groups of data centers (3000a, 3000b, ….3000n) in a two-cut-not-out and non-hierarchical network (3000). A plurality of data centers (200a-200n) in the plurality of groups of data centers is connected non-hierarchically, and the non-hierarchical network connects the plurality of groups of data centers in a high uptime and low latency network configuration. The method includes identifying a plurality of paths among a plurality of network nodes (100a-100n) associated with the plurality of groups of data centers, selecting at least three paths using a path selection criterion, identifying a location of a plurality of amplifiers for data loss compensation in the at least three paths using an amplifier location identifier criterion, connecting the plurality of network nodes using the at least three paths, and creating the non-hierarchical network among the plurality of network nodes using the at least three paths.
[0001] The present disclosure relates to a method and system for optimal communication path arrangement for interconnecting a plurality of groups of data centers. The present application is a PATENT OF ADDITION based on, and claims priority from, Indian Application Number 202011039200 filed on September 10, 2020, by Sterlite Technologies Limited, entitled “METHOD AND SYSTEM FOR DETERMINING OPTIMAL COMMUNICATION PATH ARRANGEMENT FOR DATA CENTER”, the disclosure of which is incorporated by reference herein in its entirety.
BACKGROUND OF INVENTION
[0002] Currently, data center (DC) operators use their existing telecommunications (Telco)/Internet service provider (ISP) network bandwidth/fiber cores across a city for interconnectivity with other DCs/cable landing stations. Telco networks are designed as per mobility requirements. The Telco networks run mostly in a ring topology and suffer from network isolation in case of a dual cut in the network. Apart from the number of midspans in the Telco networks and regular planned outages, the Telco networks face frequent fiber cuts on a regular basis. Over a period of time, most sections of the Telco networks have been converted from underground (UG) to aerial due to road and utility expansion activity in city areas.
[0003] Even though Telcos assure exclusive networks and links, in many cases, on the ground, many elements such as the Fiber Distribution Management System (FDMS), Dense Wavelength Division Multiplexing (DWDM) sub-rack, Optical Transport Network (OTN) controller shelf and Multiprotocol Label Switching (MPLS) sub-rack are shared, which carries a risk of outages due to hardware failure and human error. Immediate bandwidth provisioning is always a constraint for a Telco, as the same network resources are shared by mobility as well as small, medium and big enterprise connectivity. There is always an internal preference with respect to escalation, network criticality and pricing.
[0004] To solve the aforesaid issues, various network planning methods for the data center are available. In an example, network planning is performed using survey data, such as Light Detection and Ranging (LiDAR), 360 degree camera and ground penetrating radar (GPR) data. Further, during network planning, determining an optimal path between two geographical locations based on different models, such as a cost model or a repair rate model, is also available. However, the conventional models cannot ensure 99.99% uptime of the network, which is an essential requirement of data center design. Also, the conventional models fail to provide a low latency network configuration.
[0005] Thus, it is desired to address the above-mentioned disadvantages or other shortcomings, or at least to provide a useful alternative.
OBJECTIVE OF INVENTION
[0006] The principal objective of the present invention is to provide a method and system for a network planning technique that interconnects a plurality of groups of data centers using a minimum of three paths and creates a non-hierarchical network associated with the plurality of groups of data centers, to ensure better uptime of the non-hierarchical network in a low latency network configuration.
[0007] Another objective of the present invention is to choose a path selection criterion (e.g., a shortest physical path, an end to end distinct physical path, a zero overlapping path or a no crisscross path) during physical path selection.
[0008] Another objective of the present invention is to select three paths based on the path selection criterion and supplement the path selection criterion with an amplifier location identifier criterion.
[0009] Another objective of the present invention is to select a path at which the chances of future digging or development work by other authorities are minimal.
SUMMARY
[0010] Accordingly, the present invention herein discloses a method and a system for connecting a plurality of groups of data centers, placed at a long distance from each other, in a two-cut-not-out and a non-hierarchical network, wherein a plurality of data centers in the plurality of groups of data centers is connected non-hierarchically and the non-hierarchical network connects the plurality of groups of data centers in a high uptime and a low latency network configuration. The method includes identifying, by a processing unit, a plurality of paths among a plurality of network nodes associated with the plurality of groups of data centers and selecting, by the processing unit, at least three paths using a predefined path selection criterion. Further, the method includes identifying, by a sub-processing unit, a location of a plurality of amplifiers for data loss compensation in the at least three paths using a predefined amplifier location identifier criterion. Furthermore, the method includes connecting, by the sub-processing unit, the plurality of network nodes associated with the plurality of groups of data centers using the at least three paths, wherein each path from the at least three paths comprises the plurality of amplifiers that are periodically placed, and creating, by the sub-processing unit, the non-hierarchical network among the plurality of network nodes associated with the plurality of groups of data centers using the at least three paths.
[0011] The at least three paths used for connecting the plurality of groups of non-hierarchically connected data centers are physical paths. The plurality of amplifiers used for enabling signal amplification in the non-hierarchical network are laser amplifiers, such as solid state amplifiers and doped fiber amplifiers. In an embodiment, the plurality of amplifiers used for the signal amplification are doped fiber in-line amplifiers, such as an erbium doped fiber amplifier.
[0012] The at least three paths are a main line, a restoration line and a standby line, wherein the main line primarily carries data, the restoration line takes over the main line during a main line disruption and the standby line controls the network load when the main line and the restoration line are disrupted.
[0013] The predefined path selection criterion comprises at least one of a shortest physical path, an end to end distinct physical path, a zero overlapping path, and a no crisscross path.
[0014] The method further includes validating, by the processing unit, feasibility of the plurality of paths by preparing a data map of a physical infrastructure between the plurality of network nodes associated with the plurality of groups of data centers using soil strata data, population density data, weather data, electricity availability data, topographical data and road network data; assigning, to each path, at least one of a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient; determining a cumulative feasibility score of each path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient and selecting the at least three paths based on the cumulative feasibility score of each path.
[0015] The deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient are determined by assigning a value score for each of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data to each of the road networks between the plurality of groups of data centers that satisfy the predefined path selection criterion, and combining the feasibility scores for each path to determine an overall feasibility coefficient of the path. The value score for each process is assigned to each road network for each type of data, wherein each type of process is one of deployment, operations and management, and each type of data is one of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data. For each type of data and for each type of process, the value score is scaled with numeric values, wherein a low numeric value denotes a low feasibility and a high numeric value denotes a high feasibility.
[0016] The location of the plurality of amplifiers is repeatedly determined throughout the at least three paths after at least 50 kms and at most 70 kms from a previous amplifier location or a network node.
[0017] The location of the plurality of amplifiers in the at least three paths is determined by using the population density data, the topographical data, the electricity availability data and the road network data, wherein the feasibility score for deployment, operation and management of an amplifier from the plurality of amplifiers at a specific location in the at least three paths is determined by assigning a value score to each type of data and adding the value score of each type of data. A weighted value of the feasibility score for deployment, operation and management of the amplifier is added to the deployment feasibility coefficient to validate the feasibility of the plurality of paths.
[0018] For low latency and signal worthiness, the long distance spanned by the non-hierarchical network is set between 70 kms and 1100 kms.
[0019] The system comprises an input unit configured to take input regarding a location of a plurality of network nodes associated with the plurality of groups of data centers and send the input to a processing unit, the processing unit configured to identify a plurality of paths between the plurality of network nodes using a road map between the plurality of network nodes, wherein out of the plurality of identified paths, the processing unit selects at least three paths using a predefined path selection criterion; and a sub-processing unit configured to receive processed data from the processing unit to identify a location of a plurality of amplifiers on the at least three paths and create the non-hierarchical network among the plurality of network nodes associated with the plurality of groups of data centers using the at least three paths, wherein each path comprises the plurality of amplifiers that are periodically placed.
[0020] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0021] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0022] FIG. 1a is an overview of a non-hierarchical network for connecting a plurality of data centers.
[0023] FIG. 1b illustrates a block diagram of a data center.
[0024] FIG. 1c is an example illustration of a logical network.
[0025] FIG. 2 is a flow chart illustrating a method for determining an optimal path arrangement for the data center.
[0026] FIG. 3a is an overview of a non-hierarchical network for interconnecting a plurality of groups of data centers.
[0027] FIG. 3b is an example illustration of intracity and intercity data center connectivity in accordance with the present invention.
[0028] FIG. 4 illustrates a block diagram of a system for interconnecting the plurality of groups of data centers.
[0029] FIG. 5 is an example illustration of a logical network interconnecting the plurality of groups of data centers.
[0030] FIG. 6 is a flow chart illustrating a method for interconnecting the plurality of groups of data centers.
DETAILED DESCRIPTION OF INVENTION
[0031] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced without these specific details. In other instances, well known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0032] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0033] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0034] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0035] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0036] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0037] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0038] Conditional language used herein, such as, among others, "can," "may," "might," “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
[0039] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0040] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0041] Accordingly, the present invention herein provides a method for connecting a plurality of data centers in a non-hierarchical network using a two-cut-not-out network configuration. The method includes identifying, by a network node, a plurality of physical connection paths among a plurality of network nodes associated with the plurality of data centers. Further, the method includes selecting, by the network node, at least three paths based on a predefined path selection criterion. Further, the method includes connecting, by the network node, the plurality of network nodes associated with the plurality of data centers using the at least three paths. Further, the method includes creating, by the network node, the non-hierarchical network among the plurality of network nodes associated with the plurality of data centers using the at least three paths.
[0042] Unlike conventional methods and systems, the proposed method can be used to determine the optimal path arrangement for the data center with low latency, better bandwidth usage, unconstrained capacity and enhanced network lifetime.
[0043] Referring now to the drawings, and more particularly to FIGS. 1a through 6, there are shown preferred embodiments.
[0044] FIG. 1a is an overview of a non-hierarchical network (1000) for connecting a plurality of data centers (200a-200n). The non-hierarchical network (1000) includes a plurality of network nodes (100a-100n) and the plurality of data centers (200a-200n). Each of the plurality of network nodes (100a-100n) may be a server. The plurality of network nodes (100a-100n) are connected directly to other network nodes and cooperate with one another to efficiently route data.
[0045] A network node (100) from the plurality of network nodes (100a-100n) is configured to identify a plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) and select at least three paths based on a predefined path selection criterion.
[0046] The three paths are physical paths. The three paths can be a main line, a restoration line and a standby line, where the main line primarily carries data, the restoration line takes over the main line during a main line disruption and the standby line controls the network load when the main line and the restoration line are disrupted. The predefined path selection criterion can be, for example, but is not limited to, a shortest physical path, an end to end distinct physical path, a zero overlapping path, and a no crisscross path.
[0047] By using the three paths, the network node (100) is configured to connect the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). After connecting the plurality of network nodes, the network node (100) is configured to create the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the three paths.
[0048] Further, the network node (100) is configured to validate the feasibility of the plurality of physical connection paths. The feasibility of the plurality of physical connection paths is validated by preparing a data map of a physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) based on a Light Detection and Ranging (LIDAR), a camera and a ground penetrating radar (GPR), assigning a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient to each physical connection path, determining a cumulative feasibility score of each physical connection path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient, and selecting the three paths based on the cumulative feasibility score of each path.
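For illustration, the following is a minimal sketch, not taken from the present disclosure, of the cumulative feasibility scoring just described. The path names, the coefficient values and the assumption that the cumulative score is a simple sum of the three coefficients are illustrative; the disclosure does not fix the combining function.

```python
# Minimal sketch of cumulative feasibility scoring. The simple-sum combining
# rule and the sample coefficient values are illustrative assumptions.

def cumulative_feasibility(deployment: float, operation: float, management: float) -> float:
    """Combine the three per-path feasibility coefficients into one score."""
    return deployment + operation + management

# candidate path -> (deployment, operation, management) feasibility coefficients
candidate_paths = {
    "A-B via ring road": (8.0, 7.0, 6.0),
    "A-B via highway":   (6.0, 8.0, 7.0),
    "A-B via old town":  (4.0, 5.0, 5.0),
    "A-B via bypass":    (7.0, 6.0, 8.0),
}

scores = {path: cumulative_feasibility(*coeffs)
          for path, coeffs in candidate_paths.items()}

# Pick the three best-scoring candidates as main, restoration and standby lines.
main, restoration, standby = sorted(scores, key=scores.get, reverse=True)[:3]
print(main, restoration, standby)
```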
[0049] The deployment feasibility coefficient is determined by determining the available physical path deployment clearance by combining the area available for the physical path deployment obtained from the LIDAR, the camera and the GPR, comparing the obtained physical path deployment area with the standard physical path deployment clearance required for the physical path deployment, and modifying the difference between the obtained physical path deployment area and the standard physical path deployment clearance to determine the deployment feasibility coefficient.
[0050] Further, the operation feasibility coefficient is determined by determining the available operation clearance by combining the area available for operation obtained from the LIDAR, the camera and the GPR, comparing the combined area available for operation with the standard clearance required for operations to be performed on the physical path, and modifying the difference between the obtained area available for operation and the standard operation clearance to determine the operation feasibility coefficient.
[0051] The management feasibility coefficient is determined by determining the available maintenance clearance by combining the area available for maintenance obtained from the LIDAR, the camera and the GPR, comparing the obtained area for maintenance with the standard clearance required for maintenance to be performed on the physical path, and modifying the difference between the obtained area available for maintenance and the standard maintenance clearance to determine the management feasibility coefficient.
[0052] Further, a feasibility coefficient for each physical connection path is obtained by combining the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient. The feasibility coefficient is used to determine the feasible deployment path.
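As an illustration of paragraphs [0049] to [0051], the sketch below maps a surveyed clearance against the required standard clearance. The normalization of the difference onto a bounded 0-10 coefficient is an assumption; the disclosure states only that the difference is modified into a coefficient.

```python
# Hedged sketch of the clearance-based coefficients: the same function is
# applied three times, once each for deployment, operation and management,
# with the corresponding surveyed area (LIDAR + camera + GPR) and standard
# clearance. The 0-10 normalization is an illustrative assumption.

def clearance_coefficient(obtained_clearance: float,
                          standard_clearance: float,
                          scale: float = 10.0) -> float:
    """Map the surplus of surveyed clearance over the required standard
    clearance onto a bounded feasibility coefficient."""
    surplus = obtained_clearance - standard_clearance
    if surplus <= 0:
        return 0.0  # insufficient room along this physical connection path
    return min(scale, scale * surplus / standard_clearance)

deployment = clearance_coefficient(obtained_clearance=3.2, standard_clearance=2.0)
operation = clearance_coefficient(obtained_clearance=2.4, standard_clearance=2.0)
management = clearance_coefficient(obtained_clearance=2.1, standard_clearance=2.0)
feasibility = deployment + operation + management  # combined as per paragraph [0052]
```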
[0053] FIG. 1b illustrates a block diagram of the network node (100). The network node (100) includes a processor (110), a memory (120), a communication path determination controller (130), and a communicator (140). The processor (110) is coupled with the memory (120), the communication path determination controller (130), and the communicator (140).
[0054] The communication path determination controller (130) is configured to identify the plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) and select the three paths based on the predefined path selection criterion.
[0055] Using the three paths, the communication path determination controller (130) is configured to connect the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). After connecting the plurality of network nodes (100a-100n), the communication path determination controller (130) is configured to create the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the three paths. Further, the communication path determination controller (130) is configured to validate feasibility of the plurality of physical connection paths. Further, the processor (110) is configured to execute instructions stored in the memory (120) and to perform various processes. The communicator (140) is configured for communicating internally between internal hardware components and with external devices via one or more networks or the data center (200a-200n).
[0056] FIG. 1c is an example scenario in which a logical network is depicted. The logical network depicts 8 sites (A-H), 14 links, 265 route kilometers, and a minimum of 3 dedicated fiber paths for each Point of Presence (POP), irrespective of geographical separation.
[0057] The three dedicated paths between the data centers (200a-200n) are the main line (path), the restoration line (path) and the standby line (path). The main line or path is used as the primary data carrier. The restoration line or path takes over the main line during a main line disruption. The standby line or path takes the network load when the main line and the restoration line are disrupted. The three paths combine to form a network that satisfies the 2-CNO (two cut not out) configuration. The 2-CNO configuration ensures that the network is up 99.99% of the time, with the restoration link always up even if the main and protection paths are down. The 2-CNO configuration is used for creating the non-hierarchical network (1000) between the plurality of network nodes (100a-100n) and the plurality of data centers (200a-200n), to efficiently route data and obtain maximum uptime of the network.
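The 2-CNO property can be checked exhaustively on a planned topology: the network must remain connected after every possible pair of simultaneous fiber cuts. The following sketch shows one way to verify this on a small illustrative topology, not the 8-site network of FIG. 1c.

```python
# Hedged sketch of verifying the 2-CNO ("two cut not out") property: all
# sites must remain mutually reachable after any two simultaneous link cuts.
from itertools import combinations

links = frozenset({
    ("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"),
    ("A", "C"), ("B", "D"),
})

def connected(edges, nodes):
    """Depth-first search connectivity over an undirected edge set."""
    adjacency = {n: set() for n in nodes}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adjacency[node] - seen)
    return seen == nodes

def is_two_cut_not_out(edges):
    """True if no pair of simultaneous cuts isolates any site."""
    nodes = {n for edge in edges for n in edge}
    return all(connected(edges - set(cut), nodes)
               for cut in combinations(edges, 2))

print(is_two_cut_not_out(links))  # the sample 4-site mesh survives any 2 cuts
```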
[0058] The non-hierarchical network (1000) is configured to identify a path selection criterion during a physical path selection between at least two data centers from the plurality of data centers (200a-200n). In general, the plurality of data centers (200a-200n) comprises a plurality of servers arranged in the non-hierarchical network (1000). The plurality of servers is also known as a server farm. The plurality of data centers (200a-200n) with the server farms is essential to the functioning of information handling systems in different applications and industrial sectors. The plurality of data centers (200a-200n) commonly comes in various structures or architectures and is commonly set up in multi-tier architectures. In the plurality of data centers (200a-200n), nodes or servers are arranged in various topologies. Data integrity and data processing speed are essential requirements for today's applications. Therefore, it is becoming increasingly necessary to be able to detect data congestion in the plurality of data centers (200a-200n) and select data paths or information paths through the network (1000) to increase the speed of processing a request.
[0059] The path selection criterion is determined based on the shortest physical path, the end to end distinct physical path, the zero overlapping path, and the no crisscross path. Further, the non-hierarchical network (1000) is configured to connect the at least two data centers from the plurality of data centers (200a-200n) with each other using a minimum of three distinct paths. The minimum three distinct paths originate or terminate at the plurality of data centers (200a-200n).
[0060] The at least two data centers from the plurality of data centers (200a-200n) are connected with each other by creating a map using one or more of the LIDAR, camera and GPR data, selecting roads on the map based on the feasibility of deploying an underground link using the captured data from the different sensors, selecting a location of the at least two data centers from the plurality of data centers (200a-200n) on the map, and connecting the at least two data centers (200a-200n) with each other using the selected locations.
[0061] Alternatively, the at least two data centers from the plurality of data centers (200a-200n) are connected with each other by determining the roads between the at least two data centers (200a-200n) using each of the LIDAR, the camera and the ground penetrating radar, obtaining data on a map by superimposing the roads determined from the LIDAR, the camera and the ground penetrating radar, determining a feasibility of deployment of the network and of operations and management using the obtained data, and connecting the at least two data centers (200a-200n) with each other based on the determined feasibility of deployment of the network and of operations and management.
[0062] The feasibility of deployment of the network (1000) and of operations and management is determined by determining a route selection procedure, identifying at least one route for connecting the at least two data centers (200a-200n) using the determined route selection procedure, and determining the feasibility of deployment of the network and of operations and management based on one of the identified routes. For each path, a deployment feasibility coefficient is determined. The cumulative feasibility score is determined based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient. The deployment, operation and management feasibility coefficients are determined by determining the available physical paths using a plurality of sensors, comparing the obtained data with the standard physical clearances required for deployment, operation and management, and then scaling down the difference between the two quantities. The three coefficients are combined to find the feasibility score and hence the feasible deployment path.
[0063] Further, the non-hierarchical network (1000) is configured to create a mesh type network between the at least two connected data centers (200a-200n).
[0064] According to an exemplary embodiment, as shown in FIG. 1c, a logical network is determined between 8 data center locations. Every path/road between the data centers is surveyed using a 360 degree camera, LIDAR and ground penetrating radar. All the surveyed data are superimposed on a map. The routes are then checked for feasibility of underground fiber deployment. The presence of structures near the deployment path, the distance of nearby structures from the deployment path and the presence of underground structures in the deployment path are used to check the feasibility of a route. The data obtained from the three surveying systems are checked against predetermined data to check for feasibility. The predetermined data may include, but are not limited to, the distance of a nearby structure from the deployment path and the depth of an underground structure from the deployment path, among other data. After obtaining the feasible paths, three or more paths (originating or ending paths) are chosen for the data center based on predetermined logic, which includes, but is not limited to, distinct paths with zero overlap or crisscross, the shortest path, and future digging and development work. Future digging and development work information is obtained from municipal and highway authorities, and the information is mapped on the already created map to modify the feasibility of network deployment. The paths for every data center (200a-200n) to be connected in the network (1000) are determined using the above-mentioned logic. Hence, a mesh network is obtained between the data centers (200a-200n) that ensures that the network uptime is 99.99%.
[0065] FIG. 2 is a flow chart (S200) illustrating a method for determining the optimal path arrangement for the data center (200a-200n), according to the present invention. The operations (S202-S210) are performed by the communication path determination controller (130). At S202, the method includes identifying the plurality of physical connection paths among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n). At S204, the method includes selecting the three paths based on the predefined path selection criterion. At S206, the method includes connecting the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the three paths. At S208, the method includes creating the non-hierarchical network (1000) among the plurality of network nodes (100a-100n) associated with the plurality of data centers (200a-200n) using the three paths. At S210, the method includes validating the feasibility of the plurality of physical connection paths.
[0066] FIG. 3a is an overview of a non-hierarchical network (3000) for interconnecting a plurality of groups of data centers (Group A, Group B, ….Group N) (3000a, 3000b, ….3000n). The non-hierarchical network (3000) is a two-cut-not-out network configuration and includes the plurality of groups of data centers (3000a, 3000b, ….3000n). Each group of the plurality of groups of data centers (3000a, 3000b, ….3000n) includes the plurality of network nodes (100a-100n) and the plurality of data centers (200a-200n). The working and functionality of a single group have already been explained in conjunction with FIGS. 1a to 2.
[0067] The plurality of data centers (200a-200n), and thus the plurality of network nodes (100a-100n), in each group of the plurality of groups of data centers (3000a, 3000b, ….3000n) are connected in a non-hierarchical manner. The plurality of data centers (200a-200n) may be placed at a long distance from each other. Further, the plurality of groups of data centers (3000a, 3000b, ….3000n) may be situated at a long distance from each other. In an example, for low latency and signal worthiness, the long distance is set between 70 kms and 1100 kms, based on a specific channel type and corresponding modulation technique. The non-hierarchical network (3000) connects the plurality of groups of data centers (3000a, 3000b, ….3000n) (or the plurality of groups of non-hierarchically connected data centers or network nodes) in a high uptime and low latency network configuration.
[0068] The plurality of network nodes (100a-100n) are connected directly to other network nodes within the plurality of groups of data centers (3000a, 3000b, ….3000n) and cooperate with one another to efficiently route data. The plurality of network nodes (100a-100n) may be configured to identify a plurality of physical connection paths (or plurality of network paths or plurality of physical paths or plurality of paths) among the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) and select at least three paths (or at least three physical paths) based on the predefined path selection criterion.
[0069] The at least three paths may be used for connecting the plurality of groups of data centers (3000a, 3000b, ….3000n). The term “plurality of groups of data centers” may be used interchangeably with the term “plurality of groups of non-hierarchically connected data centers”. The at least three paths may be physical and feasible paths, such as optical fiber cables. The at least three paths may be a main line, a restoration line and a standby line, where the main line primarily carries data, the restoration line takes over the main line during a main line disruption and the standby line controls the network load when the main line and the restoration line are disrupted. The at least three paths thus form the two-cut-not-out network configuration. The predefined path selection criterion can be, for example, but is not limited to, a shortest physical path, an end to end distinct physical path, a zero overlapping path, and a no crisscross path.
[0070] Once the at least three paths are selected, locations of a plurality of amplifiers are identified, using a predefined amplifier location identifier criterion, for data loss compensation in the at least three physical paths. The plurality of amplifiers may be periodically pre-installed (or placed) in each of the at least three paths. In an example, an amplifier location is repeatedly determined after at least 50 kms and at most 70 kms from a previous amplifier location or a network node. The locations of the plurality of amplifiers are determined by using population density data, topographical data, electricity availability data and road network data. The plurality of amplifiers is used for enabling signal amplification in the non-hierarchical network and may be laser amplifiers, such as solid state amplifiers and doped fiber amplifiers. In an embodiment, the laser amplifiers used for the signal amplification are doped fiber in-line amplifiers, such as an erbium doped fiber amplifier.
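A minimal sketch of this spacing rule follows. The greedy strategy of picking the highest-scoring candidate site inside each 50-70 km window, and the candidate-site data themselves, are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of the amplifier placement rule: walking along a physical
# path, the next amplifier is placed 50-70 km after the previous amplifier
# (or the originating network node), at the candidate site whose summed value
# score (population density + topography + electricity + road network) is
# highest. Greedy selection and the sample data are illustrative assumptions.

def place_amplifiers(candidate_sites, path_length_km,
                     min_gap_km=50.0, max_gap_km=70.0):
    """candidate_sites: (chainage_km, value_score) pairs sorted by chainage."""
    placed, last_km = [], 0.0  # chainage 0 = originating network node
    while path_length_km - last_km > max_gap_km:
        window = [(km, score) for km, score in candidate_sites
                  if min_gap_km <= km - last_km <= max_gap_km]
        if not window:
            raise ValueError(f"no feasible amplifier site 50-70 km after km {last_km}")
        best_km, _ = max(window, key=lambda site: site[1])
        placed.append(best_km)
        last_km = best_km
    return placed

sites = [(55, 24), (62, 31), (118, 27), (124, 22), (176, 29)]
print(place_amplifiers(sites, path_length_km=230))  # -> [62, 118, 176]
```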
[0071] After identification of the plurality of amplifiers, the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) are interconnected using the at least three paths and the non-hierarchical network (3000) among the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) is formed (created) using the at least three paths.
[0072] The plurality of network nodes (100a-100n) is further configured to validate feasibility of the plurality of physical connection paths (plurality of paths). The feasibility of the plurality of paths is validated by preparing a data map of a physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using soil strata data, population density data, weather data, electricity availability data, topographical data, road network data or the like, assigning at least one of a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient to each path from the plurality of physical connection paths, determining a cumulative feasibility score of each path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient, and selecting the at least three paths based on the cumulative feasibility score of each path.
[0073] The deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient are determined by assigning a value (or feasibility) score for each of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data to each of the road networks between the plurality of groups of data centers (3000a, 3000b, ….3000n) that satisfy the path selection criterion, and combining the feasibility scores of each physical connection path to determine an overall feasibility coefficient of the physical connection path. In an implementation, the value score for each process, i.e., deployment, operations and management, is assigned to each of the road networks for each type of data. Each type of data may be the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data, wherein for each type of data and for each type of process, the value score is scaled with numeric values. A low numeric value denotes a low feasibility and a high numeric value denotes a high feasibility.
[0074] Further, a feasibility score for deployment, operation and management of an amplifier from the plurality of amplifiers at a specific location in the feasible paths, i.e., in the at least three paths, is determined by assigning a value score to each type of data, such as the population density data, the topographical data, the electricity availability data and the road network data, and adding the value scores of each type of data. The added value score with respect to the amplifier is a weighted value of an amplifier location feasibility, which is further added to the deployment feasibility coefficient to validate the feasibility of the plurality of physical connection paths.
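The value-score bookkeeping of paragraphs [0073] and [0074] can be sketched as follows. The 1-10 scale, the sample scores and the 0.1 weight applied to the amplifier location feasibility are illustrative assumptions; the disclosure does not fix the scale or the weight.

```python
# Hedged sketch of paragraphs [0073]-[0074]: a value score per data type and
# per process yields the three coefficients of a road network; a weighted
# amplifier-site score is then added to the deployment coefficient. The 1-10
# scale, the sample scores and the 0.1 weight are illustrative assumptions.

DATA_TYPES = ("soil_strata", "population_density", "weather",
              "electricity", "topography")

# value_scores[process][data_type] for one candidate road network
value_scores = {
    "deployment": {"soil_strata": 7, "population_density": 5, "weather": 8,
                   "electricity": 6, "topography": 7},
    "operations": {"soil_strata": 6, "population_density": 4, "weather": 7,
                   "electricity": 8, "topography": 6},
    "management": {"soil_strata": 8, "population_density": 6, "weather": 7,
                   "electricity": 7, "topography": 5},
}

coefficients = {process: sum(scores[d] for d in DATA_TYPES)
                for process, scores in value_scores.items()}
overall = sum(coefficients.values())  # overall feasibility coefficient of the road

# Amplifier location feasibility at a candidate site on this road network.
amplifier_site = {"population_density": 6, "topography": 7,
                  "electricity": 9, "road_network": 8}
amplifier_score = sum(amplifier_site.values())

# The weighted amplifier-site score tops up the deployment coefficient.
coefficients["deployment"] += 0.1 * amplifier_score
```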
[0075] FIG. 3b is an example illustration of intracity and intercity data center connectivity (4000) in accordance with the present invention. FIG. 3b is explained in conjunction with FIG. 3a. An intercity data center connectivity (4000b) among cities A, B, C, D and E is represented, wherein all the cities are interconnected to each other, directly or indirectly, via the handover nodes of the respective cities. In other words, the plurality of groups of non-hierarchically connected data centers (network nodes) is interconnected via the handover nodes. For example, an intracity network (4000a) of city B comprises the plurality of data centers (or network nodes) that are connected to each other non-hierarchically. Similarly, an intracity network (4000c) of city D comprises the plurality of data centers (or network nodes) that are connected to each other non-hierarchically. The other cities A, C and E may have corresponding intracity networks (not shown). For interconnecting the cities, i.e., the plurality of groups of data centers, with each other, the handover nodes of the respective cities that are nearby and feasible for interconnection are identified. Accordingly, city A is interconnected to cities B, C and D located at exemplary distances of 460 kilometers (kms), 380 kms and 510 kms from city A, respectively. Further, city A is connected to city E via cities B, C and D. Similarly, city C is interconnected to cities B, D and E. Thus, a mesh network is formed among the cities, i.e., the plurality of groups of data centers, and the requisite interconnection is established.
[0076] FIG. 4 illustrates a block diagram of a system (400) for interconnecting the plurality of groups of data centers. The system (400) is used for connecting the plurality of groups of non-hierarchically connected data centers (network nodes), i.e., the plurality of groups of data centers (3000a, 3000b, ….3000n), wherein the plurality of data centers (200a-200n) may be placed at the long distance from each other. Further, the plurality of groups of data centers (3000a, 3000b, ….3000n) may be situated at the long distance from each other in the two-cut-not-out network configuration, wherein the two-cut-not-out network configuration is the non-hierarchical network that connects the plurality of groups of data centers (3000a, 3000b, ….3000n) in a high uptime and low latency network configuration. The system (400) may be configured within the plurality of network nodes (100a-100n) and includes an input unit (410), a memory (420), a processing unit (430) and a sub-processing unit (440). The system (400) is explained in conjunction with FIG. 3a. The memory (420) may be coupled with the input unit (410), the processing unit (430) and the sub-processing unit (440) to store information. The input unit (410) takes an input regarding the locations of the network nodes (the plurality of groups of data centers (3000a, 3000b, ….3000n)) and sends it to the processing unit (430). The processing unit (430) identifies the plurality of physical connection paths (or network paths or paths) among the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using a road map between the plurality of network nodes (100a-100n). Further, the processing unit (430) selects the at least three physical paths out of the plurality of physical connection paths using the predefined path selection criterion, such as at least one of the shortest physical path, the end to end distinct physical path, the zero overlapping path and the no crisscross path.
[0077] The at least three physical paths selected by the processing unit (430) based on the predefined path selection criterion are the main line, the restoration line and the standby line, as described in conjunction with FIG. 3a. Additionally, the processing unit (430) validates the feasibility of the plurality of physical connection paths. To validate the feasibility of the plurality of physical connection paths, the processing unit (430) prepares the data map of the physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using the soil strata data, the population density data, the weather data, the electricity availability data, the topographical data, the road network data or the like. Further, the processing unit (430) assigns at least one of the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient to each physical connection path from the plurality of physical connection paths. Further, the processing unit (430) determines the cumulative feasibility score of each physical connection path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient, and selects the at least three paths based on the cumulative feasibility score of each path.
[0078] The processing unit (430) determines the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient as explained in FIG. 3a, and repeatedly determines an amplifier location throughout the at least three physical paths after at least 50 kms and at most 70 kms from the previous amplifier location or network node, by using the population density data, the topographical data, the electricity availability data and the road network data, wherein the feasibility score for deployment, operation and management of the amplifier at a location in the feasible paths is determined by assigning a value score to each type of data and adding the value scores. The processing unit (430) adds a weighted value of the amplifier location feasibility to the deployment feasibility coefficient to validate the feasibility of the plurality of physical connection paths.
[0079] The sub-processing unit (440) takes in the processed data from the processing unit (430) to identify the locations of the plurality of amplifiers on the at least three physical paths, and, by using the at least three physical paths, the sub-processing unit (440) connects the plurality of network nodes (100a-100n) and creates the non-hierarchical network among the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n). Each path out of the at least three physical paths comprises the periodically placed plurality of amplifiers. Due to the optimal placement of the plurality of amplifiers in the plurality of physical connection paths, the shortest and most feasible paths based on the path selection criterion are identified, and thus the latency is reduced and 99.99% uptime is achieved.
[0080] FIG. 5 is an example illustration of a logical network (500) interconnecting the plurality of groups of data centers. The logical network depicts interconnectivity among 17 cities, with 27 routes, 15,889 route kilometers, and a minimum of 3 dedicated fiber paths for each Point of Presence (POP), irrespective of geographical separation. The three dedicated paths are the main line (path), the restoration line (path) and the standby line (path). At least two groups of data centers from the plurality of groups of data centers (3000a, 3000b, ….3000n) are connected with each other.
[0081] Every path/road between the plurality of groups of data centers (3000a, 3000b, ….3000n) may be surveyed using a 360 degree camera, LIDAR and ground penetrating radar, and the surveyed data are superimposed on a map. The routes are then checked for feasibility of fiber deployment based on the soil strata data, the population density data, the weather data, the electricity availability data, the topographical data, the road network data or the like, and by utilizing information from the plurality of amplifiers. Further, the presence of structures near the deployment path, the distance of nearby structures from the deployment path and the presence of underground structures in the deployment path may also be used to check the feasibility of a route. The above-mentioned data may be checked against predetermined data to check for feasibility. The predetermined data may include, but are not limited to, the distance of a nearby structure from the deployment path and the depth of an underground structure from the deployment path, among other data. After obtaining the feasible paths, three or more paths (originating or ending paths) are chosen for the plurality of groups of data centers (3000a, 3000b, ….3000n) based on predetermined logic, which includes, but is not limited to, distinct paths with zero overlap or crisscross, the shortest path, and future digging and development work. Future digging and development work information may be obtained from municipal and highway authorities, and the information is mapped on the already created map to modify the feasibility of network deployment. The paths for every group of the plurality of groups of data centers (3000a, 3000b, ….3000n) to be connected in the network (3000) are determined using the above-mentioned logic. Hence, a mesh network is formed between the groups (3000a, 3000b, ….3000n) that ensures that the network uptime is 99.99%.
[0082] FIG. 6 is a flow chart (600) illustrating a method for interconnecting the plurality of groups of data centers. The operations (S602-S612) are performed by the system (400).
[0083] At S602, the method includes identifying the plurality of physical paths among the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) and at S604, selecting at least three paths (or at least three physical paths) based on the predefined path selection criterion.
[0084] At S606, the method includes identifying location(s) of the plurality of amplifiers for data loss compensation in the at least three physical paths using the predefined amplifier location identifier criterion.
[0085] At S608, the method includes connecting the plurality of network nodes associated with the plurality of groups of data centers using the at least three physical paths.
[0086] At S610, the method includes creating a non-hierarchical network among the plurality of network nodes associated with the plurality of groups of data centers using the at least three physical paths.
[0087] At S612, the method includes validating feasibility of the plurality of physical paths.
[0088] The various actions, acts, blocks, steps, or the like in the flow charts (S200) and (600) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0089] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0090] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
CLAIMS
We claim:
1. A method for connecting a plurality of groups of data centers (3000a, 3000b, ….3000n), placed at a long distance from each other, in a two-cut-not-out and a non-hierarchical network (3000), wherein a plurality of data centers (200a-200n) in the plurality of groups of data centers (3000a, 3000b, ….3000n) is connected non-hierarchically and the non-hierarchical network (3000) connects the plurality of groups of data centers (3000a, 3000b, ….3000n), in a high uptime and a low latency network configuration, the method comprising:
identifying, by a processing unit (430), a plurality of paths among a plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n);
selecting, by the processing unit (430), at least three paths using a predefined path selection criterion;
identifying, by a sub-processing unit (440), a location of a plurality of amplifiers for data loss compensation in the at least three paths using a predefined amplifier location identifier criterion;
connecting, by the sub-processing unit (440), the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using the at least three paths, wherein each path from the at least three paths comprises the plurality of amplifiers that are periodically placed; and
creating, by the sub-processing unit (440), the non-hierarchical network (3000) among the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using the at least three paths.
2. The method as claimed in claim 1, wherein the at least three paths used for connecting the plurality of groups of non-hierarchically connected data centers (3000a, 3000b, ….3000n) are physical paths.
3. The method as claimed in claim 1, wherein the plurality of amplifiers used for enabling signal amplification in the non-hierarchical network (3000) comprises laser amplifiers, such as solid state amplifiers and doped fiber amplifiers.
4. The method as claimed in claim 1, wherein the plurality of amplifiers used for the signal amplification comprises a doped fiber in-line amplifier, such as an erbium doped fiber amplifier.
5. The method as claimed in claim 1, wherein the at least three paths are a main line, a restoration line and a standby line, wherein the main line primarily carries data, the restoration line takes over from the main line during a main line disruption, and the standby line controls a network load when both the main line and the restoration line are disrupted.
6. The method as claimed in claim 1, wherein the predefined path selection criterion comprises at least one of a shortest physical path, an end-to-end distinct physical path, a zero-overlapping path, and a no-crisscross path.
7. The method as claimed in claim 1, further comprising validating, by the processing unit (430), feasibility of the plurality of paths, wherein the feasibility of the plurality of paths is validated by:
preparing a data map of a physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using soil strata data, population density data, weather data, electricity availability data, topographical data and road network data;
assigning, to each path, at least one of a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient;
determining a cumulative feasibility score of each path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient; and
selecting the at least three paths based on the cumulative feasibility score of each path.
8. The method as claimed in claim 7, wherein the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient are determined by:
assigning a value score for each of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data to each of the road networks between the plurality of groups of data centers (3000a, 3000b, ….3000n) that satisfy the predefined path selection criterion; and
combining the feasibility scores for each path to determine an overall feasibility coefficient of the path, wherein the value score for each process is assigned to each road network for each type of data, wherein each type of process is one of deployment, operations and management, each type of data is one of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data, and wherein, for each type of data and for each type of process, the value score is scaled with numeric values, wherein a lowest numeric value denotes a low feasibility and a highest numeric value denotes a high feasibility.
9. The method as claimed in claim 1, wherein the location of the plurality of amplifiers is repeatedly determined throughout the at least three paths, at a distance of at least 50 kms and at most 70 kms from a previous amplifier location or a network node.
10. The method as claimed in claim 1, wherein the location of the plurality of amplifiers in the at least three paths is determined by using the population density data, the topographical data, the electricity availability data and the road network data, wherein the feasibility score for deployment, operation and management of an amplifier from the plurality of amplifiers at a specific location in the at least three paths is determined by assigning a value score to each type of data and adding the value score of each type of data.
11. The method as claimed in claim 10, wherein a weighted value of the feasibility score for deployment, operation and management of the amplifier is added to the deployment feasibility coefficient to validate the feasibility of the plurality of paths.
12. The method as claimed in claim 1, wherein, for low latency and signal worthiness, the long distance using the non-hierarchical network (3000) is set between 70 kms and 1100 kms.
13. A system (400) for connecting a plurality of groups of data centers (3000a, 3000b, ….3000n), placed at a long distance from each other, in a two-cut-not-out and a non-hierarchical network (3000), wherein a plurality of data centers (200a-200n) in the plurality of groups of data centers (3000a, 3000b, ….3000n) is connected non-hierarchically and the non-hierarchical network (3000) connects the plurality of groups of data centers (3000a, 3000b, ….3000n), in a high uptime and a low latency network configuration, the system comprises:
an input unit (410) configured to receive input regarding a location of a plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) and to send the input to a processing unit (430);
the processing unit (430) configured to identify a plurality of paths between the plurality of network nodes (100a-100n) using a road map between the plurality of network nodes (100a-100n), wherein out of the plurality of identified paths, the processing unit (430) selects at least three paths using a predefined path selection criterion; and
a sub-processing unit (440) configured to receive processed data from the processing unit (430) to identify a location of a plurality of amplifiers on the at least three paths and create the non-hierarchical network (3000) among the plurality of network nodes (100a-100n), associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using the at least three paths, wherein each path comprises the plurality of amplifiers that are periodically placed.
14. The system as claimed in claim 13, wherein the at least three paths used for connecting the plurality of groups of non-hierarchically connected data centers (3000a, 3000b, ….3000n) are physical paths.
15. The system as claimed in claim 13, wherein the plurality of amplifiers used for enabling signal amplification in the non-hierarchical network (3000) comprises laser amplifiers, such as solid state amplifiers and doped fiber amplifiers.
16. The system as claimed in claim 13, wherein the plurality of amplifiers used for the signal amplification comprises a doped fiber in-line amplifier, such as an erbium doped fiber amplifier.
17. The system as claimed in claim 13, wherein the at least three paths are a main line, a restoration line and a standby line, wherein the main line primarily carries data, the restoration line takes over from the main line during a main line disruption, and the standby line controls a network load when both the main line and the restoration line are disrupted.
18. The system as claimed in claim 13, wherein the predefined path selection criterion used by the processing unit (430) for selecting the at least three paths comprises at least one of a shortest physical path, an end-to-end distinct physical path, a zero-overlapping path, and a no-crisscross path.
19. The system as claimed in claim 13, wherein the processing unit (430) validates feasibility of the plurality of paths, and wherein the processing unit (430):
prepares a data map of a physical infrastructure between the plurality of network nodes (100a-100n) associated with the plurality of groups of data centers (3000a, 3000b, ….3000n) using soil strata data, population density data, weather data, electricity availability data, topographical data and road network data;
assigns, to each path, at least one of a deployment feasibility coefficient, an operation feasibility coefficient and a management feasibility coefficient;
determines a cumulative feasibility score of each path based on the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient; and
selects the at least three paths based on the cumulative feasibility score of each path.
20. The system as claimed in claim 19, wherein the processing unit (430) determines the deployment feasibility coefficient, the operation feasibility coefficient and the management feasibility coefficient by: assigning a value score for each of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data to each of the road networks between the plurality of groups of data centers (3000a, 3000b, ….3000n) that satisfy the predefined path selection criterion; and combining the feasibility scores for each path to determine an overall feasibility coefficient of the path, wherein the value score for each process is assigned to each road network for each type of data, wherein each type of process is one of deployment, operations and management, each type of data is one of the soil strata data, the population density data, the weather data, the electricity availability data and the topographical data, and wherein, for each type of data and for each type of process, the value score is scaled with numeric values, wherein a lowest numeric value denotes a low feasibility and a highest numeric value denotes a high feasibility.
21. The system as claimed in claim 13, wherein the processing unit (430) repeatedly determines the location of the plurality of amplifiers throughout the at least three paths, at a distance of at least 50 kms and at most 70 kms from a previous amplifier location or a network node.
22. The system as claimed in claim 13, wherein the processing unit (430) determines the location of the plurality of amplifiers in the at least three paths by using the population density data, the topographical data, the electricity availability data and the road network data, wherein the feasibility score for deployment, operation and management of an amplifier from the plurality of amplifiers at a specific location in the at least three paths is determined by assigning a value score to each type of data and adding the value score of each type of data.
23. The system as claimed in claim 22, wherein the processing unit (430) adds a weighted value of the feasibility score for deployment, operation and management of the amplifier to the deployment feasibility coefficient to validate the feasibility of the plurality of paths.
24. The system as claimed in claim 13, wherein, for low latency and signal worthiness, the long distance using the non-hierarchical network (3000) is set between 70 kms and 1100 kms.
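By way of illustration only, a minimal Python sketch of the feasibility scoring recited in claims 7, 8, 19 and 20 follows. All names and numeric scores are hypothetical placeholders: each candidate path carries one value score per process (deployment, operations, management) and per data type, the scores are combined into per-process feasibility coefficients, the coefficients are combined into a cumulative feasibility score, and the at least three paths with the highest cumulative scores are selected.

```python
# Illustrative sketch only; all names and scores are hypothetical placeholders.

DATA_TYPES = ("soil_strata", "population_density", "weather",
              "electricity_availability", "topography")
PROCESSES = ("deployment", "operations", "management")

def process_coefficients(scores: dict) -> dict:
    """Combine the per-data-type value scores of one path into one feasibility
    coefficient per process; a higher numeric value denotes higher feasibility."""
    return {p: sum(scores[p][d] for d in DATA_TYPES) for p in PROCESSES}

def cumulative_feasibility(scores: dict) -> float:
    """Cumulative feasibility score of a path: the combination of its
    deployment, operations and management feasibility coefficients."""
    return sum(process_coefficients(scores).values())

def select_paths(candidates: dict, k: int = 3) -> list:
    """Pick the k (at least three) paths with the highest cumulative
    feasibility score among candidates that already satisfy the predefined
    path selection criterion."""
    return sorted(candidates,
                  key=lambda p: cumulative_feasibility(candidates[p]),
                  reverse=True)[:k]

# Example with uniform placeholder scores for one candidate path:
path_a = {p: {d: 3 for d in DATA_TYPES} for p in PROCESSES}
print(cumulative_feasibility(path_a))  # 45
```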