Abstract: The invention relates to an optical qxq switch for fault tolerant routing of data communication, comprising a plurality of nodes and a plurality of interconnects selectively coupling the nodes in an interconnect structure having qxq input/output links, in particular a 4x4 optical data vortex switch, thereby increasing the bandwidth of the interconnection network, in addition to the spreader, for sending data between computers of a high performance parallel system.
Field of Invention:
The invention relates to an interconnection network for distributing data across networked computers. More specifically, the present invention relates to a qxq optical switch for fault tolerant routing of data communication and an associated hardware implementation that minimizes the control and logic circuits.
Background of Invention:
High performance computing requires parallel processing of data by a large number of computers. These computers must be interconnected so that they can share data and computed results. This requires an interconnection network connecting a large number of computers (which can go up to several thousand processors). If the number of links available between the various computers is small, the bandwidth available for the interconnection will be low. This slows down the computational speed of the computers, as data and results are shared at low speed. Thus, any method that increases the bandwidth of the interconnection network improves the latency (viz. the mean number of hops propagated by the packets) of the network. In addition, an increase in the number of redundant links leads to higher fault tolerance, since the failure of a few links can be tolerated because alternate links are available for the failed links.
The bandwidth of internet traffic is increasing steadily at exponential rates. Large scale high performance routing systems used for this internet traffic require high bandwidth and low latency interconnection networks. To satisfy this, researchers are working to design high performance switching fabrics. The performance of the switching fabric in routers depends on the efficiency of information exchange between a large number of users. It is therefore essential to design an interconnect structure with high scalability, high bandwidth and low latency. Packet switching technology is used to build high performance switching fabrics. In recent years, fiber optic communication with optical packet switching has promised to meet features such as parallelism, large bandwidth, and low power requirements.
The data vortex architecture is specifically designed as a packet switched interconnection network for optical implementation. This interconnect architecture supports scalability and eliminates buffers. In the data vortex architecture, routing decisions are carried out by high speed digital electronic circuitry while data packets remain in the optical domain. The optical payload is encoded on multiple wavelengths in order to maximize the transmission capacity.
The data vortex is a multiple-level minimum logic architecture [1] proposed by Yang et al. as a scalable, ultra high capacity, high throughput, and low latency switch. This system employs 2X2 nodes with synchronous timing and distributed control. Later, Neha Sharma et al. proposed a new Augmented Data Vortex (ADV) architecture which employs 3X3 nodes. The additional input/output links improve the latency. Moreover, the augmented switch fabric improves fault tolerance and reliability, and performs far more reliably than the regular network. A multiplexing scheme at the input ports and output ports further enhances fault tolerance and reliability. The main objective of this invention is to develop a qxq optical data vortex switch fabric so as to improve fault tolerance. The architecture and performance of a 4x4 switch is discussed as a representative example of the qxq switch.
Modern high-performance computing systems require networks with high capacity, extremely high throughput and low latency in order to pass messages between thousands of processor and memory elements. Optical Interconnection Networks (OIN) offer a potentially viable solution to this requirement. An all optical packet switched interconnection network called the Data Vortex (DV) switch has already been proposed by Yang et al. for the purpose of large scale photonic interconnections. For any interconnection network, fault tolerance and reliability are crucial issues, the evaluation of which lacked attention in the case of the DV switch. Based on the results of the fault tolerance and reliability analysis of the primary DV switch, an augmented data vortex (ADV) switch fabric was further proposed to improve the fault tolerance of the primary DV switch. The fault tolerance performance of the ADV switch was computed and detailed results were obtained. The performance of the ADV is investigated with reference to parameters such as latency and injection ratio (throughput) by means of numerical simulations. A uniform random traffic model has been used for the performance evaluation. The results obtained are compared with the results reported for the DV switch. The results show that the ADV switch, with its enhanced fault tolerance, also improves the performance regarding latency. For the same switch sizes (i.e. the same number of angles A and height H) the injection ratios (throughputs) of the DV and ADV switches are comparable. Hence it can serve as a suitable candidate for high performance computing.
The ADV switch network comprises 3X3 switching nodes, as compared to the 2X2 nodes of the primary DV switch. This increases the number of multiple paths between each source and destination pair. Moreover, multiplexing at the inputs and outputs increases fault tolerance.
The ADV consists of routing nodes that lie on a collection of cylinders. The switch fabric is characterized by two parameters 'A' and 'H' representing the number of nodes along the angle and height dimensions, respectively. 'A' is typically set to a small odd number (<10) and is independent of 'H'. The available number of input-output (I/O) ports is given by AxH. The number of cylinder levels is C = log2H + 1, numbered from 0 to log2H, where 0 is the input level and log2H is the output level. Each routing node is labeled uniquely by the co-ordinates (a,c,h), where 0≤a<A, 0≤c<C and 0≤h<H.

The data vortex architecture scales to very large port counts (>10 k) while maintaining low latencies, a narrow latency distribution, and high throughput. To facilitate optical implementation, the data-vortex architecture employs a novel hierarchical topology, traffic control, and synchronous timing that act to reduce the necessary routing logic operations and buffering. As a result of this architecture, all routing decisions of the data packets are based on a single logic operation at each node. The routing is further simplified by the employment of wavelength division multiplexing (WDM)-encoded header bits, which enable packet-header processing by simple wavelength filtering. The packet payload remains in the optical domain as it propagates through the data-vortex switch fabric, exploiting the transparency and high bandwidths achievable in fiber optic transmission. The authors discussed numerical simulations of the data-vortex performance and reported results from an experimental investigation of multihop WDM packet routing in a re-circulating test bed.
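By way of illustration, the cylinder count and port count follow directly from the formulas above. The short sketch below evaluates them for the example parameters A=3, H=4 used later in the description (the parameter values are assumptions for illustration only):

```python
import math

A, H = 3, 4                     # assumed example: 3 angles, 4 heights (as in Fig. 2)
C = int(math.log2(H)) + 1       # number of cylinder levels, C = log2(H) + 1
ports = A * H                   # available input/output ports, A x H

print(C, ports)                 # -> 3 cylinder levels (c = 0, 1, 2) and 12 I/O ports
```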
Later, Q. Yang and K. Bergman, "TRAFFIC CONTROL AND WDM ROUTING IN THE DATA VORTEX PACKET SWITCH", IEEE Photon. Technol. Lett., vol. 14, pp. 236-238, Feb. 2002, demonstrated the traffic control mechanism between two nodes of the data vortex switch and studied the effective elimination of packet conflict within the architecture.
W. Lu, B.A. Small, O. Liboiron-Ladouceur, J.N. Kutz, and K. Bergman, "OPTICAL PACKET SWITCHING THROUGH MULTIPLE NODES IN THE DATA VORTEX ARCHITECTURE", Proc. Ann. Meeting IEEE Lasers and Electro-Optics Soc. (LEOS '03), vol. 1, MF2, pp. 53-54, Oct. 2003, demonstrated control signaling among four nodes in the data vortex switch.
B.A. Small, A. Shacham, K. Bergman, K. Athikulwongse, C. Hawkins, and D. S. Wills, "EMULATION OF REALISTIC NETWORK TRAFFIC PATTERNS ON AN EIGHT-NODE DATA VORTEX INTERCONNECT NETWORK SUBSYSTEM", J. Opt. Netw., vol. 3, no. 11, pp. 802-809, Nov. 2004, demonstrated that the data vortex architecture is a feasible candidate as an optical interconnection network for realistic supercomputing traffic.
Assaf Shacham, Benjamin A. Small, et al., "A FULLY IMPLEMENTED 12X12 DATA VORTEX OPTICAL PACKET SWITCHING INTERCONNECTION NETWORK", vol. 23, no. 10, pp. 3066-3075, Oct. 2005, analyzed the data vortex architecture and demonstrated experimental studies with twelve input/output nodes.
Q. Yang in "BACK PRESSURE REDUCTION WITH BI-DIRECTIONAL DATA VORTEX NETWORK", International conference on photonics switching 2006. PS '06, pp. 1-3 modified the data vortex architecture so as to improve the latency and throughput.
Rajat Kumar Singh, Rajiv Srivastava and Yatindra Nath Singh, "A NEW APPROACH TO THE DATA VORTEX SWITCH ARCHITECTURE", Proc. of the Conference, Photonics 2006, modified the data vortex switch with extra delay lines and discussed the performance improvement.
A. Shacham and Keren Bergman, "ON CONTENTION RESOLUTION IN THE DATA VORTEX OPTICAL INTERCONNECTION NETWORK", Journal of Optical Networking, vol. 6, no. 6, June 2007, presented alternative contention resolution techniques for the data vortex interconnection network and evaluated their performance.
Neha Sharma, D. Chadha and V. Chandra, in "THE AUGMENTED DATA VORTEX SWITCH FABRIC: AN ALL OPTICAL PACKET SWITCHED INTERCONNECTION NETWORK WITH ENHANCED FAULT TOLERANCE", J. of Optical Switching and Networking, vol. 4, pp. 92-105 (2007), proposed a new augmented data vortex switch fabric which incorporates all the advantageous features of the data vortex switch. This network utilizes augmented links, in addition to the already existing links, to route packets when faults occur.
Later on, Coke S. Reed, in his patent US 7068671 dated June 27, 2006 on "MULTIPLE LEVEL MINIMUM LOGIC NETWORK", focused on a network or interconnect structure which utilizes a data flow technique that is based on the timing and positioning of message communication through the interconnect structure. Switching control is distributed through multiple nodes in the structure so that a supervisory controller providing a global control function and a complex logic structure are avoided. The interconnect structure operates as a "deflection" or "hot potato" system in which processing and storage overhead at each node is minimized. Elimination of a global controller and of buffering at the nodes greatly reduces the amount of control and logic structures in the interconnect structure, simplifying the overall control components and network interconnect components and improving the speed performance of message communication in a multiple level interconnection structure in which control and logic circuits are minimized.
Coke S. Reed, Austin, TX (US), in the United States patent application US 2009/0070487 dated March 12, 2009 on "METHOD AND DEVICE FOR DISTRIBUTING DATA ACROSS NETWORK COMPONENTS", illustrated a network device and associated operating methods that interface to a network. A network interface comprises a plurality of registers that receive data from a plurality of data sending devices and arrange the received data into at least a target address field and a data field, and a plurality of spreader units coupled to the plurality of registers that forward the data based on logic internal to the spreader units and spread the data such that structure characteristic to the data is removed. A plurality of switches is coupled to the plurality of spreader units and forwards the data based on the target address field.
Advances in technology result in improvement in computer system performance. However, the manner in which these technological advances are implemented greatly determines the extent of improvement in performance. The advancement of technology resulted in the development of the 4X4 optical data vortex switch, which has been reported by R. G. Sangeetha, Neha Sharma, D. Chadha and Vinod Chandra, in "4x4 Optical Data Vortex Switch Fabric: A Fault Tolerance Study", Proc. of the Conference, Photonics 2008, Dec. 15-17, 2008, New Delhi, India.
A new 4x4 optical data vortex switch (ODV) is developed to increase the fault tolerance and reliability. For this, the number of paths between the source and destination nodes is increased. The number of paths between a source and destination node in the data vortex switch is 18, and that of the augmented data vortex switch is 72. In the 4x4 optical data vortex switch the number of paths is 162. The increase in the number of paths increases the reliability of the interconnection network.
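As a rough consistency check (an assumption of this illustration, not a derivation given in the description), the quoted path counts are consistent with scaling the DV figure by the square of the number of next-cylinder links per node:

```python
DV_PATHS = 18  # source-to-destination paths reported for the primary DV switch

# forward_links = number of links from a node towards the next cylinder level
for name, forward_links in [("primary DV", 1), ("ADV", 2), ("4x4 ODV", 3)]:
    print(name, DV_PATHS * forward_links ** 2)
# prints 18, 72 and 162, matching the counts quoted above
```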
The paths provided in the 4x4 ODV give alternate paths to all the sub-networks of the subsequent stages of the cylinders. In the data vortex switch the paths on the same cylinder act as a temporary buffer; however, when there is a heavy traffic load, backpressure is created in the routing nodes. This backpressure mechanism creates longer delay and lower throughput.
In 4x4 ODV, the alternate paths reduce backpressure. This new design improves the fault tolerance and reliability.
Objects of Invention:
The primary objective of the invention is to design a qxq optical switch for fault tolerant routing of data communication.
One of the principal objectives of this invention is to achieve better performance by using nodes in the interconnection network which have (qxq) input/output links.
Another objective of this invention is to propose a novel method for increasing the bandwidth of the interconnection network.
Still another objective is to use the new method of increasing bandwidth in addition to the spreader.
An additional objective of this invention is to develop an associated hardware implementation scheme for implementing the qxq switching node of the interconnection network.
Still another additional objective of this invention is to design a priority scheme required to minimize the congestion of the interconnection network.
A further objective of this invention is to increase the number of links in each of the nodes of the interconnection network from the minimum requirement of (2x2) to (qxq).
A still further objective of this invention is to obtain the advantages of higher bandwidth, greater fault tolerance and lower latency by increasing the number of links at each node in the interconnection network.
One of the allied objectives of this invention is to propose one additional path in the existing architecture to enhance fault tolerance in the Data Vortex Switch.
These and other objectives and advantages of the invention will be apparent from the ensuing description.
Brief Description of Accompanying Drawings:
Further objects and advantages of this invention will be more apparent from the ensuing description when read in conjunction with the accompanying drawings wherein:
Fig 1: Optical switch for fault tolerant routing of data communication
Fig 2: Cylindrical representation of primary DV, ADV, paths in 4x4 DV switch
Fig 3: Fault Model
Fig 4: Multiplexing 2 inputs
Fig 5: Multiplexing 3 inputs
Fig 6: Multiplexing 4 inputs
Fig 7: Multiplexing 6 inputs
Fig 8: Routing paths of 4x4 ODV between the cylinders
Fig 9: Equivalent Planar Diagram of 4x4 ODV switch
Fig 10: Hardware model of 4x4 node
Fig 11: Hardware model of qxq node
Fig 12: Priority Scheme of 4x4 ODV switch
While the invention is described in conjunction with the illustrated embodiment, it is understood that it is not intended to limit the invention to such embodiment. On the contrary, it is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention disclosure as defined by the claims.
Statement of the Invention
Accordingly, there is provided an optical qxq switch for fault tolerant routing of data communication comprising a plurality of nodes and a plurality of interconnects selectively coupling the nodes in an interconnect structure having qxq input/output links, in particular a 4x4 optical data vortex switch, thereby increasing the bandwidth of the interconnection network, in addition to the spreader, for sending data between computers of a high performance parallel system.
Detailed Description of the Invention:
At the outset of the description, which follows, it is to be understood that the ensuing description only illustrates a particular form of invention. However, such a particular form is only an exemplary embodiment and the teachings of the invention are not intended to be taken restrictively. The qxq node is illustrated with an example of 4x4 node.
In the 4x4 optical data vortex switch, the packets are routed through nodes interconnected with fibers. In the DV, each node is connected to two input ports and two output ports. There are two different paths: one path progresses towards the next cylinder level, towards its egress node, and the other path is deflected towards the same cylinder level to create a buffer, so that a packet is routed on the same cylinder level until it gets a desired path to reach the destination. The ADV employs a 3x3 node with two paths proceeding to the next cylinder level and one path for buffering. Similarly, in the 4x4 ODV, three paths proceed to the next cylinder level and one path remains in the same level. The additional path to the next level increases the accessibility of the output terminal. Also, additional input and output ports are provided through multiplexing, which in turn increases the number of paths between the input and output terminals.
When routing on the same cylinder level, a packet at node (a,c,h) moves to node (a+1, c, hn). The nodes in the next cylinder levels are interconnected to route the packet towards the destination. In the 4x4 ODV, three interconnections are provided between the adjacent cylinders at each node to the next angle at three different heights. For example, in Fig. 2 node '0' (0,0,0) has three links in the next cylinder level (c=1) connected to the next angle (a=1) at nodes 16 (h=0), 17 (h=1) and 18 (h=2). In the 4x4 switch, the interconnection link provided to node 16 is the same as for the DV, the link to node 18 is the same as for the ADV, and the link to node 17 is the extra connection provided. The routing paths on the adjacent cylinder level are shown in Fig. 8. This has been shown for the case of H=4, A=3, and the same routing procedure can be extended to higher parameters as well.
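The node numbers used in the example above are consistent with a simple flattened numbering of the (a, c, h) co-ordinates. The sketch below assumes the convention index = c*A*H + a*H + h (an assumption made for illustration; the drawings may use a different convention), which reproduces node 0 = (0,0,0) and nodes 16, 17, 18 = (1,1,0), (1,1,1), (1,1,2) for A=3, H=4:

```python
A, H = 3, 4  # angles and heights of the example network

def node_index(a: int, c: int, h: int) -> int:
    """Flatten the (angle, cylinder, height) co-ordinates into a single node number."""
    return c * A * H + a * H + h

print(node_index(0, 0, 0))                      # 0 -> source node '0'
print([node_index(1, 1, h) for h in range(3)])  # [16, 17, 18] -> next-cylinder targets
```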
In the DV architecture, the routing of packets in the same level results in back pressure. Such a back pressure mechanism can result in accumulation of packets in the current cylinder level rather than forwarding them to the next cylinder level. The deflected packet remains on the same cylinder level for two hops before it traces the desired output link. In the ADV, the deflected packet traces its desired output link from the next node itself. In the 4x4 ODV, every node provides an alternate path, thereby reducing the back pressure. As the number of input and output ports is increased, a new hardware implementation scheme and a new priority scheme need to be provided.
The hardware implementation scheme:
The hardware implementation and the priority scheme are explained in this section. Similar to the DV and ADV, the 4x4 ODV is an all optical switch fabric, but the control of the switching operation is performed electronically. As shown in Fig. 10, after the packet enters the input node from any of the three input ports, the optical power tap extracts the frame and header bits. The frame bit indicates the presence of the packet, whereas the header bits carry the routing decision. The control wavelengths, one for the header and another for the frame, are filtered with optical pass band filters. For the present case of three cylinders (C=3), for cylinders c=0 and c=1, the frame wavelength and one header bit are extracted. The frame is matched against logical '1' and the header bit is matched with the most significant bit of the node height. But in cylinder c=2, two header bits are extracted and matched with the node angle. The control wavelengths are detected using suitable optical detectors and directed to electronic decision circuitry. The detected signals are matched with the node height. If they match, any one of the output links can be chosen according to the priority scheme. The payload is routed through combiner #1, which in turn is directed to semiconductor optical amplifiers (SOAs). The deflection signal from the neighboring output node is sent to the electronic decision circuitry. Thus, the decision is carried out using the header signal and the deflection signal. If the header signal is not matched, then the packet is routed to the chain link and the deflection signal is transmitted appropriately. For proper scheduling, the control signal is sent with an appropriate delay, according to the priority scheme.
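The electronic decision described above can be summarized, for cylinders c=0 and c=1, by the following simplified sketch (a functional illustration only, not the patented circuit; the function name, argument names and the representation of the priority-ordered free outputs are assumptions):

```python
def route_packet(frame_bit, header_bit, node_height_msb, free_outputs):
    """Decide the fate of a packet at one node on cylinder c=0 or c=1.

    free_outputs: next-cylinder output links not blocked by deflection signals,
    listed in the order given by the priority scheme (an assumption of this sketch).
    """
    if frame_bit != 1:
        return "no packet present"                    # frame bit absent
    if header_bit == node_height_msb and free_outputs:
        return f"forward to next cylinder via {free_outputs[0]}"
    return "deflect on the same cylinder level (chain link)"

# Example: the header bit matches the node-height MSB and one output link is free
print(route_packet(1, 0, 0, ["link to node 17"]))
```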
New priority scheme:
To ensure contention resolution, the priority schemes proposed earlier for the DV and the ADV are modified. The system is synchronous; consequently, each packet is moved per time slot, and control messages are exchanged between the nodes. The control signals are moved before the packet arrives at a given node. The routing priority of the packet is: first priority is given to the packet moving on the same cylinder; second priority is given to the packet moving on the adjacent cylinder at the same height; third priority is given to the packet moving on the adjacent cylinder at the next height; the fourth priority is given to the packet moving on the adjacent cylinder at the next higher height.
In Fig. 12, for example, when any node A on an inner cylinder (c=1) has to pass a packet to node N on the same cylinder level, A sends the control signal to B on the outer cylinder level (c=0), which in turn sends the signal to C and D on the same cylinder; this blocks the data at B, C, and D from progressing to N. If there is no packet to pass from A to N, then A sends the control signal to B to send its packet to N. If B has a packet to send to N, it sends the control signal to C and D to deflect their packets on the same level. If there is no packet from A and B to N, then A sends the control signal to B, which in turn sends the signal to C to pass its packet, and C sends the control signal to D to deflect its packet on the same level. If there is no packet from A, B and C to N, then the control signal is sent to D to pass its packet. In this way, the control signals allow only one packet at each time slot and eliminate contention. For the sake of clarity in Fig. 12, optical and electrical control signals are shown only for node N.
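The control-signal cascade of the preceding paragraph can be pictured as a simple priority scan over the contending nodes A, B, C and D. The sketch below is an illustrative model under that assumption (the node names and the dictionary interface are illustrative, not part of the invention):

```python
def resolve_contention(has_packet_for_N):
    """has_packet_for_N maps 'A'..'D' (in priority order) to True/False."""
    winner = None
    for node in ("A", "B", "C", "D"):          # A has the first priority, D the last
        if winner is None and has_packet_for_N[node]:
            winner = node                      # this node's packet is passed to N
        elif winner is not None and has_packet_for_N[node]:
            print(f"{node}: deflected on the same cylinder level")
    return winner or "no packet reaches N in this slot"

# Example slot: B and C both hold packets for N; B wins the slot, C is deflected
print(resolve_contention({"A": False, "B": True, "C": True, "D": False}))
```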
To summarize, the primary DV provides a path from every source node to every destination node, the ADV provides alternate paths for all the source nodes, and the 4x4 ODV provides a still larger number of alternate paths between every source and destination pair.
It is to be further understood that the present invention is susceptible to modifications, adaptations, and changes by those skilled in the art. Such variant embodiments employing the concepts and features of this invention are intended to be within the scope of the present invention, which is further set forth under the following claims:
Performance Analysis:
The analysis of performance of 4x4 data vortex switch fabric is given below relating to fault tolerance and reliability:
a) Multiplexing of Input and Output Elements for Improved Fault Tolerance:
The 4X4 switch has 4 input and 4 output elements, with multiplexing of the input elements and multiplexing of the output elements. Among the 4 input elements, one element is used as the chain-in element, and among the 4 output elements, one element is used as the chain-out element, to maintain the data vortex architecture.
Multiplexing at the input stage provides improved accessibility to the network. Multiplexing the output elements provides improved accessibility to the external devices. Each node has 3 input and 3 output elements other than the chain-in element and chain-out element. If a fault occurs in any one input link, another link can be used for the connection. If all 3 links fail, then that node becomes useless and the path through that node is also terminated. To avoid this, 2 input nodes are multiplexed. The 2 nodes then have 6 links between them and can therefore tolerate up to 5 link failures. Similarly, the switch fabric can tolerate 8, 11, 17 and 35 link failures when 3, 4, 6 and 12 input nodes are multiplexed, respectively. Thus the multiplexing at the input improves fault tolerance.
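The tolerable-fault figures quoted above follow from simple counting: k multiplexed input nodes share 3k data input links (the chain-in link is excluded, which is the assumption of this sketch), so up to 3k-1 of those links may fail while one working link remains:

```python
for k in (2, 3, 4, 6, 12):     # numbers of multiplexed input nodes
    links = 3 * k              # 3 data input links per node
    print(f"{k} inputs multiplexed: {links} links, up to {links - 1} failures tolerated")
# -> 5, 8, 11, 17 and 35 tolerable failures, matching the values in the text
```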
Similarly, at the output stage also, each node has 3 outputs, excluding the chain-out element. Here all three outputs and the chain-out element are multiplexed using extra switches, so as to tolerate 3 faults instead of two. Thus the multiplexing at the output stage also improves fault tolerance.
b) Fault Model
A fault model is required for the analysis of fault tolerance and reliability. The model used here is similar to that adopted for the ADV switch, with suitable modifications for the 4X4 switch.
Similar to the primary DV and ADV, the 4X4 switch uses distributed control signaling. In addition to the data input/output elements, each node has control input and control output links so as to provide distributed control signaling and to avoid contention. For the fault tolerance analysis, however, only the data links are considered.
The fault model (Fig. 3) shows how the input/output links and the chain-in and chain-out elements are provided in each switching node. It also shows that a fault disables only that link and not the entire switching node.
A network is fault tolerant if and only if it can still provide connectivity between any input/output pair of the network with faulty elements in the network. In this analysis the number of faulty elements which cause the network to fail is assumed to be a random variable. Here, the expected number of faulty elements that cause the switch to fail is found.
An output element of one stage is the input element of the next stage. When the total number of elements is counted, only the output elements and chain-out elements of each stage are considered, along with the input elements of the first stage. For this analysis, the elements of the entire network are sub-divided as
1) Input elements of level 0 (stage 1).
2) The output elements and chain-out elements of the intermediate stages (log2H - 1 intermediate stages).
3) The output and chain-out elements of the output level.
c) Upper Bound on the Tolerable Faulty Elements in 4X4 Switch.
Input level: For the input level, the number of tolerable faulty elements depends on the number of input nodes multiplexed, as explained in the accompanying figures.
Case (1): Multiplexing 2 Inputs (as shown in Fig 4):
For the input level, with multiplexing 2 input nodes, each input device can access 6 input elements in level 0.
Case (2): Multiplexing 3 Inputs (as shown in Fig 5):
For the input level, with multiplexing of 3 input nodes, the number of elements that can be accessed in level 0 is equal to 9.
Case (3): Multiplexing 4 Inputs (as shown in Fig 6):
For the input level, with multiplexing of 4 input nodes, the number of elements that can be accessed in level 0 is equal to 12.
Case (4): Multiplexing 6 Inputs (as shown in Fig 7):
For the input level, with multiplexing of 6 input nodes, the number of elements that can be accessed in level 0 is equal to 18.
Case (5): Multiplexing 12 Inputs (as given in Table A):
For the input level, with multiplexing of 12 input nodes, the number of elements that can be accessed in level 0 is equal to 36.
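Under the same assumption of 3 data input elements per node, the level-0 accessibility figures of Cases (1) to (5) are simply 3k for k multiplexed inputs:

```python
for k in (2, 3, 4, 6, 12):
    print(f"multiplexing {k} inputs -> {3 * k} accessible elements in level 0")
# -> 6, 9, 12, 18 and 36, as stated in Cases (1)-(5)
```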
Output level: For the output level, the chain-out and output elements of each node are multiplexed. Thus, in the output stage, the number of tolerable faults is i3,max = 3N elements. Similar figures can be drawn for the output levels as well.
Intermediate level: The analysis for the intermediate level and the output level is carried out similarly to that of the ADV switch, with the required modifications for the 4X4 switch.
The maximum number of tolerable faulty elements is calculated for higher values of N and is tabulated in Table A.
Table A: Maximum Number of Tolerable Faulty Elements
(Table Removed)
d) Hardware Complexity
The hardware cost of the network is "the number of nodes X number of elements within the node".
Number of nodes: N = A X H
Number of elements in a primary DV node: 2X2
Number of elements in an ADV node: 3X3
Number of elements in a 4X4 node: 4X4
The hardware cost per node of the primary DV is 4 units, that of the ADV is 9 units, and that of the 4X4 switch is 16 units.
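The cost figures above are just q*q elements per node, multiplied by the N = A x H nodes of the fabric for the total network cost. A minimal sketch (the function name is illustrative):

```python
def hardware_cost(A: int, H: int, q: int) -> int:
    """Total element count: (number of nodes, A x H) x (elements per q x q node)."""
    return A * H * q * q

for name, q in [("primary DV", 2), ("ADV", 3), ("4x4 ODV", 4)]:
    print(name, "per-node cost:", q * q, "units,",
          "total for A=3, H=4:", hardware_cost(3, 4, q), "units")
# per-node costs of 4, 9 and 16 units, as in Table B
```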
The number of tolerable faulty elements for the 4X4 switch is higher than for the ADV and DV. It is desirable to compare the hardware complexity of these three switches to estimate the hardware cost of the increased fault tolerance. The hardware cost for the primary DV, ADV and 4X4 switches has been tabulated in Table B.
Table B: Hardware Cost for Primary DV, ADV and 4X4 switch.
(Table Removed)
Advantages of the invention:
The biggest advantage of the invention is that it achieves better performance by using (qxq) nodes in the interconnection network, which have (q x q) input/output links, thereby increasing the bandwidth of the interconnection network, which could be used in addition to the spreader proposed in the earlier patent application (US 2009/0070487).
The instant method further increases the bandwidth of the interconnection network by providing a large number of redundant links (in addition to the primary link) for sending data between computers of the high performance parallel computing system. The interconnection network with (qxq) input/output links cannot be obtained by cascading (2x2) nodes in series/parallel; instead, the node itself has to be upgraded to (q x q) (e.g., 4x4) links with the corresponding changes in hardware and priority scheme. This method is quite distinctive on account of the following innovations:
Routing Path of 4x4 ODV between the cylinders:
The number of links in each of the nodes of the interconnection network is increased from the minimum requirement of (2x2) to (qxq), i.e., at each interconnection network node there will be 'q' inputs and 'q' outputs. In the example (Fig. 8) the interconnections for 4x4 links are shown. Among the 4 input links, one link is used as the chain-in link and among the 4 output links, one link is used as the chain-out link, to maintain the data vortex architecture. Each node has 3 input and 3 output links other than the chain-in link and chain-out link. If a fault occurs in any one input link, another link can be used for the connection. If all 3 links fail, then that node becomes useless and the path through that node is also terminated.
In order to identify the intermediate links between the cylinders and to analyze the architecture with respect to fault tolerance, the 3-dimensional architecture is converted to a 2-dimensional planar diagram indicating all the nodes and the paths between the source and the destination. In Fig. 9, the planar diagram of the 4x4 ODV with A=3, H=4 is presented with the additional links. In Fig. 9, the outermost cylinder level, the intermediate cylinder level and the innermost cylinder level are represented as c=0, c=1 and c=2 respectively. The total number of interconnection links between the input and output for the example network given in Fig. 2 can be calculated as 162 interconnection links. This compares to the case of the prior art with 2x2 input/output links, where the total number of interconnection links is 18.
Hardware model of qxq node:
The hardware implementation for the 4x4 switching node is shown in Fig. 10. This implementation can be extended to the (qxq) switching node (Fig. 11). The packets are routed through nodes interconnected with fibers. In the DV, each node is connected to two input links and two output links. There are two different links: one link progresses towards the next cylinder level, towards its egress node, and the other link is deflected towards the same cylinder level to create a buffer, so that a packet is routed on the same cylinder level until it gets the desired link to reach the destination. Similarly, in the 4x4 ODV, three links proceed to the next cylinder level and one link remains in the same level. The additional link to the next level increases the accessibility of the output terminal.
Similar to the DV and ADV, the 4x4 ODV is an all optical switch fabric, but the control of the switching operation is performed electronically. As shown in Fig. 10, after the packet enters the input node from any of the three input links, the optical power tap extracts the frame and header bits. The frame bit indicates the presence of the packet, whereas the header bits carry the routing decision. The control wavelengths, one for the header and another for the frame, are filtered with optical pass band filters. For the present case of three cylinders, for cylinders c=0 and c=1, the frame wavelength and one header bit are extracted. The frame is matched against logical "1" and the header bit is matched with the most significant bit of the node height. But in cylinder c=2, two header bits are extracted and matched with the node angle. The control wavelengths are detected using suitable optical detectors and directed to electronic decision circuitry. The detected signals are matched with the node height. If they match, any one of the output links can be chosen according to the priority scheme. The payload is routed through the combiner (207), which in turn is directed to semiconductor optical amplifiers (SOAs). The deflection signal from the neighboring output node is sent to the electronic decision circuitry. Thus, the routing decision is carried out using the header signal and the deflection signal. If the header signal is not matched, then the packet is routed to the chain-out link and the deflection signal is transmitted appropriately. For proper scheduling, the control signal is sent with an appropriate delay, according to the priority scheme.
Priority scheme of 4x4 ODV switch:
A priority scheme is required to minimize the congestion in the interconnection network. The priority scheme for the 4x4 node interconnection network is shown in Fig. 12. This can be extended to the qxq node. The control signals are moved before the packet arrives at a given node. The routing priority of the packet is: first priority is given to the packet moving on the same cylinder (as in U.S. Patent 5996020, Nov. 30, 1999); second priority is given to the packet moving on the adjacent cylinder at the same height (as in U.S. Patent 5996020, Nov. 30, 1999); third priority is given to the packet moving on the adjacent cylinder at the next height; the fourth priority is given to the packet moving on the adjacent cylinder at the next higher height.
In Fig. 12, for example, when any node A on an inner cylinder (c=1) has to pass a packet to node N on the same cylinder level, A sends the control signal to B on the outer cylinder level (c=0), which in turn sends the signal to C and D on the same cylinder; this blocks the data at B, C, and D from progressing to N. If there is no packet to pass from A to N, then A sends the control signal to B to send its packet to N. If B has a packet to send to N, it sends the control signal to C and D to deflect their packets on the same level. If there is no packet from A and B to N, then A sends the control signal to B, which in turn sends the signal to C to pass its packet, and C sends the control signal to D to deflect its packet on the same level. If there is no packet from A, B and C to N, then the control signal is sent to D to pass its packet. In this way, the control signals allow only one packet at each time slot and eliminate contention. For the sake of clarity in Fig. 12, optical and electrical control signals are shown only for node N.
Additional Advantages:
In the interconnection network, the number of links at each node is increased and therefore more links are available. This gives the advantage of higher bandwidth, greater fault tolerance and lower latency, as shown in Table C. The increased fault tolerance is shown in Table A.
Table C: Comparison of prior art with the claim of 4x4 (q x q switch)
Table D: Maximum number of tolerable faulty links
(Table Removed)
Table C and Table D compare the prior art with the 4x4 interconnection switch. The results of the simulation studies with respect to throughput (bandwidth), fault tolerance and latency are summarized in Table C and Table D. From Table D, it is observed that in the primary DV the number of tolerable faulty links increases from 33 to 44 in going from inputs without multiplexing to full multiplexing. But with 4x4 interconnection nodes, the number of tolerable faulty links increases substantially, from 44 to 140. Similarly, for N=20 and N=56, the tolerable faulty links increase from 76 to 236 and from 272 to 832, respectively.
Legends Used:
(Table Removed)
WE CLAIM
1. An optical qxq switch for fault tolerant routing of data communication comprising a plurality of nodes and a plurality of interconnects selectively coupling the nodes in an interconnect structure having qxq input/output links, in particular a 4x4 optical data vortex switch, thereby increasing the bandwidth of the interconnection network, in addition to the spreader, for sending data between computers of the high performance parallel system.
2. An optical qxq switch as claimed in Claim 1 wherein the packets are routed through the nodes interconnected with the optical fibers.
3. An optical qxq switch as claimed in claim 1 wherein, out of q links, one link is connected on the same cylinder level and the remaining (q-1) links are connected to nodes in the next cylinder level, depending upon the interconnection pattern using optical fibers, thereby increasing the accessibility of the output terminal and providing additional input and output ports through multiplexing, thereby increasing the number of paths between the input and output terminals.
4. An optical qxq switch as claimed in claim 1 wherein every node provides for an alternate path, thereby reducing the back pressure, by providing a novel hardware implementation scheme.
5. A novel hardware implementation scheme for the optical qxq switch as claimed in claim 1 and claim 4 wherein the optical data vortex is an optical switch fabric but the control of the switching operation is performed electronically after the packet enters the input node from any of the (q-1) input ports, the optical power tap extracting the frame and header bits carrying the routing decision.
6. A novel hardware implementation scheme as claimed in claim 4 and claim 5 wherein the control wavelengths, one for the header and another for the frame, are filtered through optical pass band filters.
7. A novel hardware scheme as claimed in claims 4, 5 and 6 wherein the control wavelengths are detected using suitable optical detectors and directed to electronic decision circuitry, the detected signals being matched with the node height and any one of the output links being selected, depending upon such match, according to the priority scheme.
8. A novel hardware scheme as claimed in the preceding claims wherein the decision in the said electronic decision circuitry is carried out using the header signal and the deflection signal, enabling the packet to be routed to the chain link and the deflection signal to be transmitted appropriately when all the (q-1) paths are not available.
9. A novel hardware scheme as claimed in the preceding claims wherein the deflection signal from the neighboring output node is sent to the electronic decision circuitry and the payload is directed to semiconductor optical amplifiers by means of the combiner.
10. The priority scheme as claimed in claim 7 wherein the system is synchronous, consequent to which each packet is moved per time slot and control messages are exchanged between the nodes.
11. The priority scheme for the said novel hardware scheme for an optical qxq switch as claimed in the preceding claims wherein the control signal is sent to the next node before the arrival of the packet at a given node.
12. The priority scheme as claimed in the preceding claims wherein the routing priority of the packet is given to the packet moving either on the same cylinder or on the adjacent cylinder at different heights, sequentially in the order of their increasing height.
13. An optical qxq switch as claimed in the preceding claims providing alternate paths for all the source nodes.
14. An optical qxq switch as claimed in the preceding claims, substantially as described herein with reference to the accompanying drawings, for fault tolerant routing of data communication.