Abstract: A data center includes a plurality of computing units that communicate with each other using wireless communication, such as high-frequency RF wireless communication. The data center may organize the computing units into groups (e.g., racks). In one implementation, each group may form a three-dimensional structure, such as a column having a free-space region for accommodating intra-group communication among computing units. The data center can include a number of features to facilitate communication, including dual-use memory for handling computing and buffering tasks, failsafe routing mechanisms, provisions to address permanent interference and hidden terminal scenarios, etc.
DATACENTER USING WIRELESS COMMUNICATION
BACKGROUND
[0001] Data centers traditionally use a hierarchical organization of computing units to
handle computing tasks. In this organization, the data center may include a plurality of
racks. Each rack includes a plurality of computing units (such as a plurality of servers for
implementing a network-accessible service). Each rack may also include a rack-level
switching mechanism for routing data to and from computing units within the rack. One
or more higher-level switching mechanisms may couple the racks together. Hence,
communication between computing units in a data center may involve sending data "up"
and "down" through a hierarchical switching structure. Data centers physically implement
these communication paths using hardwired links.
[0002] The hierarchical organization of computing units has proven effective for many
data center applications. However, it is not without its shortcomings. Among other
potential problems, the hierarchical nature of the switching structure can lead to
bottlenecks in data flow for certain applications, particularly those applications that
involve communication between computing units in different racks.
SUMMARY
[0003] A data center is described herein that includes plural computing units that
interact with each other via wireless communication. Without limitation, for instance, the
data center can implement the wireless communication using high frequency RF signals,
optical signals, etc.
[0004] In one implementation, the data center can include three or more computing
units. Each computing unit may include processing resources, general-purpose memory
resources, and switching resources. Further each computing unit may include two or more
wireless communication elements for wirelessly communicating with at least one other
computing unit. These communication elements implement wireless communication by
providing respective directionally-focused beams, e.g., in one implementation, by using
high-attenuation signals in the range of 57GHz-64GHz.
[0005] According to another illustrative aspect, the data center can include at least one
group of computing units that forms a structure. For example, the structure may form a
column (e.g., a cylinder) having an inner free-space region for accommodating intra-group
communication among computing units within the group.
[0006] According to another illustrative aspect, the computing units can be placed with
respect to each other to avoid permanent interference. Permanent interference exists when
a first computing unit can communicate with a second computing unit, but the second
computing unit cannot directly communicate with the first computing unit.
[0007] According to another illustrative aspect, the computing units form a wireless
switching fabric for transmitting payload data from a source computing unit to a
destination computing unit via (in some cases) at least one intermediary computing unit.
The switching fabric can implement these functions using any type of routing technique or
any combination of routing techniques.
[0008] According to another illustrative aspect, a computing unit that is involved in
transmission of payload data may use at least a portion of its memory resources (if
available) as a buffer for temporarily storing the payload data being transmitted. Thus, the
memory resources of a computing unit can serve both a traditional role in performing
computation and a buffering role.
[0009] According to another illustrative aspect, the computing units are configured to
communicate with each other using a media access protocol that addresses various hidden
terminal scenarios.
[0010] The data center may offer various advantages in different environments.
According to one advantage, the data center more readily and flexibly accommodates
communication among computing units (compared to a fixed hierarchical approach). The
data center can therefore offer improved throughput for many applications. According to
another advantage, the data center can reduce the amount of hardwired links and
specialized routing infrastructure. This feature may lower the cost of the data center, as
well as simplify installation, reconfiguration, and maintenance of the data center.
According to another advantage, the computing units use a relatively low amount of power
in performing wireless communication. This reduces the cost of running the data center.
[0011] The above approach can be manifested in various types of systems, components,
methods, computer readable media, data centers, articles of manufacture, and so on.
[0012] This Summary is provided to introduce a non-exhaustive selection of features
and attendant benefits in a simplified form; these features are further described below in
the Detailed Description. This Summary is not intended to identify key features or
essential features of the claimed subject matter, nor is it intended to be used to limit the
scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Fig. 1 shows an illustrative computing unit having one or more wireless
communication elements.
[0014] Fig. 2 is a graphical illustration of duplex communication between two
communication elements.
[0015] Fig. 3 shows one implementation of a computing unit that uses a wedge-shaped
housing.
[0016] Fig. 4 shows a collection of components that can be used to implement the
computing unit of Fig. 3.
[0017] Fig. 5 shows one implementation of a computing unit that uses a cube-shaped
housing.
[0018] Fig. 6 shows a collection of components that can be used to implement the
computing unit of Fig. 5.
[0019] Fig. 7 is a three-dimensional view of plural groups of computing units, each
computing unit of the type shown in Figs. 3 and 4.
[0020] Fig. 8 is a cross-section view of two of the groups shown in Fig. 7.
[0021] Fig. 9 shows a data center formed using the type of computing unit shown in
Figs. 5 and 6.
[0022] Fig. 10 is a graphical illustration of permanent interference that affects two
communication elements.
[0023] Fig. 11 is a graphical illustration of a method for deploying a computing unit
within a data center to avoid permanent interference.
[0024] Fig. 12 is a flowchart which complements the graphical illustration of Fig. 11.
[0025] Fig. 13 is a frequency vs. time graph that shows one way of partitioning
communication spectrum into a plurality of slots.
[0026] Fig. 14 is a frequency vs. time graph that shows one way of transmitting control
data and payload data within a data center that uses wireless communication.
[0027] Fig. 15 provides an overview of a signaling protocol that can be used to handle
communication among computing units in a data center, and, in particular, can be used to
address various hidden terminal scenarios.
[0028] Fig. 16 shows a first interaction scenario in which there is no conflict among
communication participants.
[0029] Fig. 17 shows a second interaction scenario in which there is signal overlap, but
still no conflict among communication participants.
[0030] Fig. 18 shows a third interaction scenario for addressing a first type of conflict
(e.g., an "occupied conflict") among communication participants.
[0031] Fig. 19 shows a fourth interaction scenario for addressing a second type of
conflict (e.g., a "covered conflict") among communication participants.
[0032] Fig. 20 is a cross-sectional view of two groups of computing units, indicating
how data can be routed using these computing units.
[0033] Fig. 21 shows a switching fabric that is collectively provided by switching
resources provided by individual computing units in a data center.
[0034] Fig. 22 shows computing units in a group, a first subset of which are assigned for
handling communication in a first direction and a second subset of which are assigned for
handling communication in a second direction.
[0035] Fig. 23 shows a collection of groups of computing units, indicating how a
switching fabric formed thereby can be used to circumvent computing units having
suboptimal performance.
[0036] The same numbers are used throughout the disclosure and figures to reference
like components and features. Series 100 numbers refer to features originally found in
Fig. 1, series 200 numbers refer to features originally found in Fig. 2, series 300 numbers
refer to features originally found in Fig. 3, and so on.
DETAILED DESCRIPTION
[0037] This disclosure is organized as follows. Section A describes different types of
computing units that provide wireless communication within a data center. Section B
describes illustrative data centers that can be built using the computing units of Section A.
Section C describes functionality for addressing the issue of permanent interference.
Section D describes functionality for implementing signaling among computing units.
Section E provides functionality for routing data within a data center that uses wireless
communication.
[0038] As a preliminary matter, some of the figures describe concepts in the context of
one or more structural components, variously referred to as functionality, modules,
features, elements, etc. The various components shown in the figures can be implemented
in any manner. In one case, the illustrated separation of various components in the figures
into distinct units may reflect the use of corresponding distinct components in an actual
implementation. Alternatively, or in addition, any single component illustrated in the
figures may be implemented by plural actual components. Alternatively, or in addition,
the depiction of any two or more separate components in the figures may reflect different
functions performed by a single actual component.
[0039] Other figures describe the concepts in flowchart form. In this form, certain
operations are described as constituting distinct blocks performed in a certain order. Such
implementations are illustrative and non-limiting. Certain blocks described herein can be
grouped together and performed in a single operation, certain blocks can be broken apart
into plural component blocks, and certain blocks can be performed in an order that differs
from that which is illustrated herein (including a parallel manner of performing the
blocks). The blocks shown in the flowcharts can be implemented in any manner.
[0040] The following explanation may identify one or more features as "optional." This
type of statement is not to be interpreted as an exhaustive indication of features that may
be considered optional; that is, other features can be considered as optional, although not
expressly identified in the text. Similarly, the explanation may indicate that one or more
features can be implemented in the plural (that is, by providing more than one of the
features). This statement is not to be interpreted as an exhaustive indication of features that
can be duplicated. Finally, the terms "exemplary" or "illustrative" refer to one
implementation among potentially many implementations.
A. Illustrative Computing Units
[0041] Fig. 1 shows a computing unit 102 for use within a data center. The computing
unit 102 includes processing resources 104 and memory resources 106 for together
performing a processing task of any type. For example, the processing resources 104 and
the memory resources 106 may implement one or more applications that can be accessed
by users and other entities via a wide area network (e.g., the Internet) or through any other
coupling mechanism. The processing resources 104 can be implemented by one or more
processing devices (e.g., CPUs). The memory resources 106 (also referred to as
general-purpose memory resources) can be implemented by any combination of dynamic and/or
static memory devices (such as DRAM memory devices). The computing unit 102 can
also include data storage resources 108, such as magnetic and/or optical discs, along with
associated drive mechanisms.
[0042] Other implementations of the computing unit 102 can omit one or more of the
features described above. In addition, other implementations of the computing unit 102
can provide additional resources (e.g., "other resources" 110).
[0043] The computing unit 102 can be provided in a housing 112 having any shape. In
general, the housing 112 is configured such that the computing unit 102 can be efficiently
combined with other computing units of like design to form a group (e.g., a rack). By way
of overview, this section sets forth a first example in which the housing 112 has a
wedge-type shape, and a second example in which the housing 112 has a cube shape. These
implementations are not exhaustive.
[0044] The computing unit 102 can include any number K of wireless communication
elements 114. For example, the wireless communication elements 114 can communicate
within the radio frequency (RF) spectrum. More specifically, the communication elements
114 can communicate within any portion of the extremely high frequency (EHF) part of
the spectrum (e.g., 30 GHz to 300 GHz). For example, without limitation, the wireless
communication elements 114 can provide communication within the 57-64 GHz portion of
the spectrum. In another case, the communication elements 114 can communicate within
an optical or infrared portion of the electromagnetic spectrum. These examples are
representative rather than exhaustive; no limitation is placed on the physical nature of the
signals emitted by the K wireless communication elements 114.
[0045] Each wireless communication element can emit a directionally focused beam of
energy. The "shape" of such a beam can be defined with respect to those points in space
at which the energy of the beam decreases to a prescribed level. For instance, note Fig. 2,
which shows an illustrative communication element 202 that functions as a transceiver,
having a transmitting module (TX) for emitting a signal and a receiving module (RX) for
receiving a signal transmitted by another communication element (e.g., by communication
element 204). The communication element 202 emits a beam 206 of electromagnetic
energy that is defined with respect to a first angle (α), which determines the lateral spread
of the beam, and a second angle (β, not shown), which determines the vertical spread of the
beam. The beam extends a distance L. Finally, the communication element 202 expends
an amount of power P. The values of α, β, L, and P will vary for different
implementations. Without limitation, in one implementation, α and β are each less than or
equal to 30 degrees, L is less than two meters, and P is less than one Watt.
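By way of a non-limiting illustration only, the following sketch (in Python; the coordinate convention, helper name, and default values are assumptions of this description, not features recited above) shows how a deployment tool might test whether a candidate receiver position falls inside a transmitter's directionally focused beam, given the lateral spread α, the vertical spread β, and the reach L:

    import math

    def within_beam(rel_x, rel_y, rel_z, alpha_deg=30.0, beta_deg=30.0, reach_m=2.0):
        # Point coordinates are expressed in the transmitter's frame: the beam
        # points along +x, y is the lateral offset, z is the vertical offset.
        if rel_x <= 0:
            return False                                  # behind the transmitter
        if math.sqrt(rel_x**2 + rel_y**2 + rel_z**2) > reach_m:
            return False                                  # beyond the usable reach L
        lateral = math.degrees(math.atan2(abs(rel_y), rel_x))
        vertical = math.degrees(math.atan2(abs(rel_z), rel_x))
        # alpha and beta are full spread angles, so compare against half of each.
        return lateral <= alpha_deg / 2 and vertical <= beta_deg / 2

    # Example: a receiver 1.5 m straight ahead and 0.2 m to the side is covered.
    print(within_beam(1.5, 0.2, 0.0))                     # True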
[0046] Generally, the beam 206 is relatively narrow and well-defined, particularly in the
example in which communication takes place within the 57GHz-64GHz portion of the
spectrum. In this range, the beam 206 is subject to dramatic attenuation in air. The use of
a narrow beam allows a communication element to selectively communicate with one or
more other communication elements without causing interference with respect to other
communication elements. For example, the communication element 202 can successfully
interact with the communication element 204. But the beam 206 is well defined enough
such that a close-by point 208 will not receive a signal with sufficient strength to cause
interference (at the point 208).
[0047] In one implementation, each communication element provides a static beam that
points in a fixed direction and has fixed α, β, and L. During setup, a user can orient a
beam in a desired direction by "pointing" the computing unit housing 112 in the desired
direction. Alternatively, or in addition, the user can orient the beam in the desired
direction by adjusting the orientation of a communication element itself (relative to the
computing unit 102 as a whole).
[0048] The wireless communication element itself can include any combination of
components for transmitting and receiving signals. Without limitation, the components
can include one or more antennas, one or more lenses or other focusing devices (in the
case of optical communication), power amplifier functionality, modulation and
demodulation functionality, error correction functionality (and any type of filtering
functionality), and so on. In one case, each wireless communication element can be
implemented as a collection of components formed on a common substrate, which is
attached to (or monolithically integrated with) a motherboard associated with the
computing unit 102 itself.
[0049] Returning to the explanation of Fig. 1, the K wireless communication elements
114 are illustrated as including two sets of communication elements. A first set points in a
first direction and the other set points in the opposite direction. This is merely
representative of one option. In one particular implementation (described below with
respect to Figs. 3 and 4), the computing unit 102 includes a first single communication
element pointing in a first direction and a second single communication element pointing
in a second direction. In another particular implementation (described below with respect
to Figs. 5 and 6), the computing unit 102 includes four communication elements pointed in
four respective directions.
[0050] In certain implementations, the computing unit 102 may be a member of a group
(e.g., a rack) of computing units. And the data center as a whole may include plural such
groups. In this setting, a computing unit in a group can include at least one
communication element that is used for interacting with one or more other computing
units within the same group. This type of communication element is referred to as an
intra-group communication element. A computing unit can also include at least one
communication element that is used for interacting with one or more computing units in
one or more spatially neighboring groups. This type of communication element is referred
to as an inter-group communication element. Other computing units may include only one
or more intra-group communication elements, or one or more inter-group communication
elements. In general, each communication element can be said to communicate with one
or more other computing units; the relationship among these communication participants
will vary for different data center topologies.
[0051] The computing unit 102 may also include one or more wired communication
elements 116. The wired communication elements 116 can provide a hardwired
connection between the computing unit 102 and any entity, such as another
communication element, a routing mechanism, etc. For example, a subset of computing
units within a data center can use respective wired communication elements 116 to interact
with a network of any type, and through the network, with any remote entity. However,
the implementations shown in Figs. 4 and 6 have no wired communication elements. To
facilitate discussion, the term "communication element" will henceforth refer to a wireless
communication element, unless otherwise expressly qualified as a "wired" communication
element. Although not shown, the computing unit 102 can also include one or more
omni-directional communication elements.
[0052] The computing unit 102 can also include switching resources 118. Generally, the
switching resources 118 can include any type of connection mechanism that
dynamically connects together the various components within the computing unit 102.
For example, the switching resources 118 can control the manner in which data is routed
within the computing unit 102. At one point in time, the switching resources 118 may
route data received through a communication element to the processing resources 104 and
memory resources 106, so that this functionality can perform computation on the data. In
another case, the switching resources 118 can route output data to a desired
communication element, to be transmitted by this communication element. In another
case, the switching resources 118 can configure the computing unit 102 so that it acts
primarily as an intermediary agent that forwards data that is fed to it, and so on.
[0053] Collectively, the switching resources 118 provided by a plurality of computing
units within a data center comprise a wireless switching fabric. As will be described in
Section D, the switching fabric enables a source computing unit to transmit data to a
destination computing unit (or any other destination entity), optionally via one or more
intermediary computing units, e.g., in one or more hops. To accomplish this aim, the
switching resources 118 can also incorporate routing functionality for routing data using
any type of routing strategy or any combination of routing strategies.
[0054] Further, the computing unit 102 can use at least a portion of the memory
resources 106 as a buffer 120. The computing unit 102 uses the buffer 120 to temporarily
store data when acting in a routing mode. For example, assume that the computing unit
102 serves as an intermediary computing unit in a path that connects a source computing
unit to a destination computing unit. Further assume that the computing unit 102 cannot
immediately transfer data that it receives to a next computing unit along the path. If so,
the computing unit 102 can temporarily store the data in the buffer 120. In this case, the
computing unit 102 uses the memory resources 106 for buffering purposes in an
on-demand manner (e.g., when the buffering is needed in the course of transmitting data),
provided that the memory resources 106 are available at that particular time for use as the
buffer 120.
[0055] Hence, the memory resources 106 of the computing unit 102 serve at least two
purposes. First, the memory resources 106 work in conjunction with the processing
resources 104 to perform computation, e.g., by implementing one or more applications of
any type. Second, the memory resources 106 use the buffer 120 to temporarily store data
in a routing mode. The dual-use of the memory resources 106 is advantageous because it
eliminates or reduces the need for the data center to provide separate dedicated switching
infrastructure.
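As a rough, non-limiting sketch of the on-demand buffering behavior just described (the class name, byte accounting, and any sizes are assumptions introduced here for illustration), payload in transit is accepted into general-purpose memory only to the extent that the memory is not currently committed to computation:

    class MemoryResources:
        """General-purpose memory that doubles as a forwarding buffer on demand."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used_by_compute = 0          # bytes claimed by applications
            self.buffered = []                # payload chunks awaiting forwarding

        def free_bytes(self):
            return self.capacity - self.used_by_compute - sum(len(c) for c in self.buffered)

        def try_buffer(self, payload_chunk):
            # Accept a chunk for later forwarding only if memory is available now.
            if len(payload_chunk) <= self.free_bytes():
                self.buffered.append(payload_chunk)
                return True
            return False                      # caller must defer or route elsewhere

        def release_next(self):
            # Hand back the oldest buffered chunk once the next hop is ready.
            return self.buffered.pop(0) if self.buffered else None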
[0056] Fig. 3 shows a computing unit 302 that represents one version of the general
computing unit 102 shown in Fig. 1. The computing unit 302 includes a housing 304 that
has a wedge-like shape. The components (described above) are provided on a processing
board 306 (although not specifically shown in Fig. 3). An intra-group communication
element 308 provides wireless communication with one or more other computing units in
a local group. The intra-group communication element 308 is located on an inner surface
310. An inter-group communication element 312 provides wireless communication with
one or more other computing units in neighboring groups. The inter-group
communication element 312 is located on an outer surface 314. Section B provides
additional detail which clarifies the functions of the intra-group communication element
308 and inter-group communication element 312 within a data center having plural
groups.
[0057] Fig. 4 shows the components within the wedge-shaped computing unit 302 of
Fig. 3. The components include processing resources 402, memory resources 404, data
store resources 406, switching resources 408, the intra-group communication element 308,
and the inter-group communication element 312. This collection of components is
representative; other implementations can omit one or more of the components shown in
Fig. 4 and/or provide additional components.
[0058] Fig. 5 shows a computing unit 502 that represents another version of the general
computing unit 102 shown in Fig. 1. The computing unit 502 includes a housing 504 that
has a cube-like shape. The components (described above) are provided on a processing
board 506 (although not specifically shown in Fig. 5). This computing unit 502 includes
four communication elements (508, 510, 512, 514) for communicating with computing
units (or other entities) respectively positioned to the front, back, left, and right of the
computing unit 502. Section B provides additional detail which clarifies the functions of
the communication elements (508, 510, 512, and 514) within a data center having plural
groups.
[0059] Fig. 6 shows the components within the cube-shaped computing unit 502 of Fig.
5. The components include processing resources 602, memory resources 604, data store
resources 606, switching resources 608, and various communication elements (508, 510,
512, and 514). This collection of components is representative; other implementations can
omit one or more of the components shown in Fig. 6 and/or provide additional
components.
B. Illustrative Data Centers
[0060] Fig. 7 shows a plurality of groups of computing units. In more traditional
language, each group can be considered a rack. Consider, for example, a representative
group 702. Each computing unit (such as representative computing unit 704) in the group
702 corresponds to the wedge-shaped computing unit 302 shown in Fig. 3. A plurality of
these wedge-shaped computing units are combined together in a single layer (such as
representative layer 706) to form a ring-like shape. A plurality of these layers 708 can be
stacked to form a structure that resembles a column (e.g., a columnar structure). The
group 702 includes an inner region 710 that is defined by the collective inner surfaces of
the wedge-shaped computing units (such as the individual inner surface 310 shown in Fig.
3). The group 702 includes an outer surface defined by the collective outer surfaces of the
wedge-shaped computing units (such as the individual outer surface 314 shown in Fig. 3).
In this depiction, each column has a cylindrical shape. But the structures of other
implementations can have other respective shapes. To cite merely one alternative
example, a group can have an octagonal cross section (or any other polygonal cross
section), with or without an inner free space cavity having any contour.
[0061] Fig. 8 is a cross-section view of two groups in Fig. 7, namely group 702 and group
712. With reference to group 712, the cross section reveals a collection of wedge-shaped
computing units in a particular layer, collectively providing a circular inner perimeter 802
and a circular outer perimeter 804. The inner perimeter 802 defines a free-space region
806. The cross section of the group 712 thus resembles a wheel having spokes that radiate
from a free-space hub.
[0062] Intra-group communication elements (such as representative communication
element 808) are disposed on the inner perimeter 802. Each such intra-group
communication element enables a corresponding computing unit to communicate with one
or more other computing units across the free-space region 806. For example, Fig. 8 shows
an illustrative transmitting beam 810 that extends from communication element 808 across
the free-space region 806. Intra-group communication element 812 lies "within" the path
of the beam 810, and therefore is able to receive a signal transmitted by that beam 810.
[0063] Inter-group communication elements (such as representative communication
element 814) are disposed on the outer perimeter 804. Each such inter-group
communication element enables a corresponding computing unit to communicate with one
or more other computing units in neighboring groups, such as a computing unit in group
702. For example, Fig. 8 shows an illustrative transmitting beam 816 that projects from
communication element 814 (of group 712) to group 702. Inter-group communication
element 818 lies "within" the path of the beam 816, and is therefore able to receive a signal
transmitted by that beam 816.
[0064] The diameter of the free-space region 806 is denoted by z, while a closest
separation between any two groups is denoted by d. The distances z and d are selected to
accommodate intra-group and inter-group communication, respectively. The distances
will vary for different technical environments, but in one implementation, each of these
distances is less than two meters.
[0065] Fig. 9 shows another data center 902 that includes a plurality of groups (e.g.,
groups 904, 906, 908, etc.). Consider, for example, the representative group 904. The
group 904 includes a grid-like array of computing units, where each computing unit has
the cube-like shape shown in Fig. 5. Further, Fig. 9 shows a single layer of the group 904;
additional grid-like arrays of computing units can be stacked on top of this layer. The
group 904 may thus form multiple columns of computing units. Each column has a square
cross section (or, more generally, a polygonal cross section). The group 904 as a whole
also forms a column.
[0066] The communication elements provided by each computing unit can communicate
with intra-group computing units and/or inter-group computing units, e.g., depending on
the placement of the computing unit within the group. For example, the computing unit
910 has a first wireless communication element (not shown) for interaction with a first
neighboring intra-group computing unit 912. The computing unit 910 includes a second
wireless communication element (not shown) for communicating with a second
neighboring intra-group computing unit 914. The computing unit 910 includes a third
wireless communication element (not shown) for communicating with a computing unit
916 of the neighboring group 906. This organization of computing units and groups is
merely representative; other data centers can adopt other layouts.
[0067] Also note that the computing unit 910 includes a hardwired communication
element (not shown) for interacting with a routing mechanism 918. More specifically, the
computing unit 910 is a member of a subset of computing units which are connected to the
routing mechanism 918. The routing mechanism 918 connects computing units within the
data center 902 to external entities. For example, the data center 902 may be coupled to an
external network 920 (such as the Internet) via the routing mechanism 918. Users and
other entities may interact with the data center 902 using the external network 920, e.g., by
submitting requests to the data center 902 via the external network 920 and receiving
responses from the data center 902 via the external network 920.
[0068] The data center 902 shown in Fig. 9 thus includes some hardwired
communication links. However, the data center 902 will not present the same type of
bottleneck concerns as a traditional data center. This is because a traditional data center
routes communication to and from a rack via a single access point. In contrast, the group
904 includes plural access points that connect the routing mechanism 918 to the group
904. For example, the group 904 shows three access points that connect to the routing
mechanism 918. Assume that the group 904 includes five layers (not shown); hence, the
group will include 3 x 5 access points, forming a wall of input-output access points.
Computing units that are not directly wired to the routing mechanism 918 can indirectly
interact with the routing mechanism 918 via one or more wireless hops. Hence, the
architecture shown in Fig. 9 reduces the quantity of data that is funneled through any
individual access point.
[0069] Fig. 9 illustrates the routing mechanism 918 in the context of a grid-like array of
computing units. But the same principles can be applied to a data center having groups of
any shape. For example, consider again the use of cylindrical groups, as shown in Fig. 7.
Assume that a data center arranges these cylindrical groups in plural rows. The data
center can connect a routing mechanism to at least a subset of computing units in an outer
row of the data center. That routing mechanism couples the data center with external
entities in the manner described above.
C. Illustrative Functionality for Addressing Permanent Interference
[0070] Fig. 10 portrays the concept of permanent interference that may affect any two
communication elements (1002, 1004). Assume that the communication element 1004 is
able to successfully receive a signal transmitted by the communication element 1002. But
assume that the communication element 1002 cannot similarly receive a signal transmitted
by the communication element 1004. Informally stated, the communication element 1002
can talk to the communication element 1004, but the communication element 1004 cannot
talk back to the communication element 1002. This phenomenon is referred to as
permanent interference; it is permanent insofar as it ensues from the placement and
orientation of the communication elements (1002, 1004) in conjunction with the shapes of
the beams emitted by the communication elements (1002, 1004). Permanent interference is
undesirable because it reduces the interaction between two computing units to one-way
communication (compared to two-way communication). One-way communication cannot
be used to carry out many communication tasks - at least not efficiently.
[0071] One way to address the issue of permanent interference is to provide an indirect
route whereby the communication element 1004 can transmit data to the communication
element 1002. For instance, that indirect route can involve sending the data through one
or more intermediary computing units (not shown). However, this option is not fully
satisfactory because it increases the complexity of the routing mechanism used by the data
center.
[0072] Fig. 11 illustrates another mechanism by which a data center may avoid
permanent interference. In this approach, a user builds a group (e.g., a rack) of computing
units by adding the computing units to a housing structure one-by-one. Upon adding each
computing unit, a user can determine whether that placement produces permanent
interference. If permanent interference occurs, the user can place the computing unit in
another location. For example, as depicted, the user is currently attempting to add a
wedge-shaped computing unit 1102 to an open slot 1104 in a cylindrical group 1106. If
the user determines that permanent interference will occur as a result of this placement, he
or she will decline to make this placement and explore the possibility of inserting the
computing unit 1102 in another slot (not shown).
[0073] Various mechanisms can assist the user in determining whether the placement of
the computing unit 1102 will produce permanent interference. In one approach, the
computing unit 1102 itself can include a detection mechanism (not shown) that determines
whether the interference phenomenon shown in Fig. 10 is produced upon adding the
computing unit 1102 to the group 1106. For instance, the detection mechanism can
instruct the computing unit 1102 to transmit a test signal to nearby computing units; the
detection mechanism can then determine whether the computing unit 1102 fails to receive
acknowledgement signals from these nearby computing units (in those circumstances in
which the nearby computing units have received the test signal). The detection
mechanism can also determine whether the complementary problem exists, e.g., whether
the computing unit 1102 can receive a test signal from a nearby computing unit but it
cannot successfully forward an acknowledgement signal to the nearby computing unit.
The detection mechanism can also detect whether the introduction of the computing unit
1102 causes permanent interference among two or more already-placed computing units
in the group 1106 (even though the permanent interference may not directly affect the
computing unit 1102). Already-placed computing units can include their own respective
detection mechanisms that can assess interference from their own respective
"perspectives."
[0074] The computing unit 1102 can include an alarm mechanism 1108 that alerts the
user to problems with permanent interference (e.g., by providing an audio and/or visual
alert). Already-placed computing units can include a similar alarm mechanism.
Alternatively, or in addition, the housing of the group 1106 may include a detection
mechanism (not shown) and an associated alarm mechanism 1110 for alerting the user to
problems with permanent interference. More specifically, the housing of the group 1106
can include a plurality of such detection mechanisms and alarm mechanisms associated
with respective computing units within the group 1106. The alarms identify the
computing units that are affected by the proposed placement.
[0075] Fig. 12 shows a procedure 1200 which summarizes the concepts set forth above
in flowchart form. In block 1202, a user places an initial computing unit at an initial
location within a housing associated with a group (e.g., a rack). In block 1204, the user
places a new computing unit at a candidate location within the housing. In block 1206, the
user determines whether this placement (in block 1204) creates permanent interference (in
any of the ways described above). If not, in block 1208, the user commits the new
computing unit to the candidate location (meaning simply that the user leaves the
computing unit at that location). If permanent interference is created, in block 1210, the
user moves the computing unit to a new candidate location, and repeats the checking
operation in block 1206. This procedure can be repeated one or more times until the user
identifies an interference-free location for the new computing unit.
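A hypothetical sketch of the checking operation in block 1206 follows (the callable can_send stands in for the over-the-air test-signal/acknowledgement exchange described in connection with Fig. 11; its name and signature are assumptions, not features of the procedure):

    def find_one_way_links(new_unit, nearby_units, can_send):
        # can_send(a, b) -> True if a test signal from a is successfully received by b.
        # Returns (talker, listener) pairs for which only one direction works,
        # i.e., pairs exhibiting permanent interference.
        one_way = []
        for other in nearby_units:
            forward = can_send(new_unit, other)
            backward = can_send(other, new_unit)
            if forward != backward:           # exactly one direction succeeds
                talker, listener = (new_unit, other) if forward else (other, new_unit)
                one_way.append((talker, listener))
        return one_way

    # Per block 1206, an installer would decline the candidate location if this
    # list is non-empty and try another slot (block 1210).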
[0076] In block 1212, the user determines whether there are any new computing units to
place in the housing associated with the group. If so, the user repeats the above-described
operations with respect to a new computing unit. In block 1214, the user determines what
is to be done regarding empty slots (if any) within the group. These empty slots lack
computing units because of the presence of permanent interference. In one case, the user
can leave these slots empty. In another case, the user can populate these slots with any
type of computing unit that does not involve wireless communication. For example, the
user can allocate the empty slots for computing units which perform a dedicated data
storage role.
[0077] The procedure 1200 can be varied in different ways. For example, the user can
address an interference situation by changing the location of one or more previously
placed computing units (instead of the newly introduced computing unit). For example,
the user may determine that a prior placement of a computing unit disproportionately
constrains the placement of subsequent computing units. In this case, the user can remove
this previous computing unit to enable the more efficient placement of subsequent
computing units.
[0078] As generally indicated in block 1216, at any point in the set-up of the data center
(or following the set-up of the data center), the interaction capabilities of each computing
unit can be assessed, e.g., by determining the group of communication units (if any) with
which each computing unit can interact without permanent interference. Topology
information regarding the interconnection of nodes (computing units) in the data center can
be derived by aggregating these interaction capabilities.
D. Illustrative Signaling Among Computing Units
[0079] Any type of media access control strategy can be used to transfer data among
computing units. For instance, the data centers described above can use any one of time
division multiple access (TDMA), frequency division multiple access (FDMA), code
division multiple access (CDMA), etc., or any combination thereof. For example, Fig. 13
shows an example which combines time-division and frequency-division techniques to
define a collection of time-vs.-frequency slots for conducting communication among
computing units. Guard regions separate the slots in both the frequency dimension and the
time dimension. These guard regions act as buffers to reduce the risk of interference
among the slots.
[0080] In one approach, a data center uses the slotted technique shown in Fig. 13 to
transfer control data among the computing units. More specifically, the data center can
assign slots for transferring control data between respective pairs of computing units.
Hence, suppose that a first computing unit wishes to interact with a second computing unit
in its vicinity. The first computing unit waits until an appropriate slot becomes available
(where that slot is dedicated to the transfer of control data between the first computing unit
and the second computing unit). The first computing unit then uses the assigned control
slot to transfer the control data to the second computing unit. The second computing unit
reads the control data and takes action based thereon. In one case, the first computing unit
may send the control data as a prelude to sending payload data to the second computing unit.
The second computing unit can respond by providing an acknowledgement signal (in the
manner to be described below).
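As a non-limiting sketch of the slotted control signaling described above (the pair-to-slot mapping shown here is an assumption; any assignment that gives each pair of computing units a dedicated slot would serve), a pair of units can be mapped deterministically to a frequency/time control slot:

    def control_slot(unit_a, unit_b, num_freq_channels, num_time_slots):
        # Map an unordered pair of computing unit identifiers to the same
        # (frequency channel, time slot) regardless of which unit initiates.
        lo, hi = sorted((unit_a, unit_b))
        pair_index = hash((lo, hi))
        return (pair_index % num_freq_channels,
                (pair_index // num_freq_channels) % num_time_slots)

    # Unit 7 waits for its slot with unit 12, then sends its connect request there.
    freq_channel, time_slot = control_slot(7, 12, num_freq_channels=4, num_time_slots=8)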
[0081] A data center can use any technique to transfer the actual payload data. In one
approach, the data center uses the same time-vs.-frequency multiplexing approach
described above (for the case of control data) to transfer payload data. In a second
approach, the data center performs no multiplexing in sending payload data. That is, in
the second approach, once a first computing unit receives permission to send payload data,
it can use that data channel to send all of its data. Once the first computing unit has
finished sending its payload data, it can free up the data channel for use by another
computing unit.
[0082] Fig. 14 illustrates the latter scenario described above. In this scenario, the data
center uses intermittent control blocks (e.g., blocks 1402, 1404) to handle the exchange of
control data among computing units. Each control block has the slot structure shown in
Fig. 13. The data center uses a non-multiplexed data channel 1406 to handle the exchange
of payload data. To repeat, however, Figs. 13 and 14 show one media access control
strategy among many possible access control strategies.
[0083] Generally, a data center can allocate a certain amount of communication
resources for handling control signaling and a certain amount of communication resources
for handling the transfer of payload data. There is an environment-specific tradeoff to
consider in selecting a particular ratio of control-related resources to payload-related
resources. Increasing the control signaling reduces the latency at which computing units
can acquire control slots; but this decreases the amount of resources that are available to
handle the transfer of data. A designer can select a ratio to provide a target latency-related
and capacity-related performance.
[0084] Figs. 15-19 next show an illustrative signaling protocol among computing units.
That illustrative protocol describes the manner in which a computing unit may establish a
connection with one or more other computing units in order to exchange payload data with
those other computing units. The request by the computing unit may or may not conflict
with pre-existing connections among computing units within the data center. Hence, the
illustrative protocol describes one way (among other possible ways) that the data center
can resolve potential conflicts.
[0085] Figs. 15-19 also address different types of hidden terminal scenarios. In a hidden
terminal scenario, a first computing unit and a second computing unit may be in
communication with a third computing unit. However, the first and second computing
units may not have direct knowledge of each other; that is, the first computing unit may
not know of the second computing unit and the second computing unit may not know of
the first computing unit. This may create undesirable interference as the first and second
computing units place conflicting demands on the third computing unit. This same
phenomenon can be exhibited on a larger scale with respect to larger numbers of
computing units.
[0086] To begin with, Fig. 15 is used as a vehicle to set forth terminology that will be
used to describe a number of signaling scenarios. That figure shows six illustrative
participant computing units, i.e., P0, P1, P2, P3, P4, and P5. If any participant computing
unit X is receiving data from any participant unit Y, X is said to be "occupied" by Y. If
any participant computing unit X is not receiving data from any participant computing unit
Y, but is nonetheless under the influence of a data signal from the participant computing
unit Y, then participant computing unit X is said to be "covered" by participant computing
unit Y. In the case of Fig. 15, participant computing unit P4 is occupied by participant
computing unit P1. Participant computing units P3 and P5 are each covered by participant
computing unit P1. The computing units will be referred to as simply P0-P5 to simplify
explanation below.
[0087] Fig. 16 shows a signaling scenario in which no conflict occurs. At instance A, P0
sends control data that conveys a request to connect to P3. At instance B, both P3 and P4
acknowledge the request of P0. At this point, P3 becomes occupied by P0 and P4
becomes covered by P0. At instance C, P0 sends control data that indicates that it is
disconnecting. P3 and P4 will receive this control data, which will remove their occupied
and covered statuses, respectively, with respect to P0.
[0088] Fig. 17 shows a signaling scenario in which signal overlap occurs, but there is
otherwise no conflict. Prior to instance A, assume that P0 has established a connection
with P3; as a result, P3 is occupied by P0 and P4 is covered by P0. Next, P2 sends control
data that conveys a request to connect to P5. At instance B, both P4 and P5 acknowledge
the request to connect to P5. At instance C, as a result, P5 becomes occupied by P2, and P4
becomes covered by both P0 and P2.
[0089] Fig. 18 shows a signaling scenario in which an occupied-type conflict occurs.
Prior to instance A, assume that P0 has established a connection with P4; as a result, P4 is
occupied by P0, and P3 is covered by P0. Next, P2 sends control data that conveys a
request to connect to P5. At instance B, P5 acknowledges the request to connect to P5.
At instance C, P4 acknowledges the request sent by P2. P0 receives this signal and
recognizes that it has been preempted by another computing unit. It therefore sends a
disconnection message, which is received by P3 and P4. At instance D, as a result, P3 is
neither occupied nor covered by any participant computing unit, P4 is covered by P2, and
P5 is occupied by P2.
[0090] Fig. 19 shows a signaling scenario in which a covered-type conflict occurs. Prior
to instance A, assume that P0 has established a connection with P3; as a result, P3 is
occupied by P0 and P4 is covered by P0. Next, P2 sends control data that conveys a
request to connect to P4. At instance B, P5 acknowledges the request to connect to P4.
At instance C, P4 also acknowledges the request sent by P2. P0 receives this signal and
recognizes that it has been preempted by another computing unit. It therefore sends a
disconnection message, which is received by P3 and P4. At instance D, as a result, P3 is
neither occupied nor covered by any participant computing unit, P4 is occupied by P2, and
P5 is covered by P2.
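The occupied/covered bookkeeping underlying the four scenarios above can be summarized by the following simplified model (a sketch only; the data structures and function names are assumptions and omit the acknowledgement and preemption timing shown in Figs. 16-19):

    class Participant:
        def __init__(self, name):
            self.name = name
            self.occupied_by = None           # the unit currently sending us data
            self.covered_by = set()           # units whose signal reaches us anyway

    def handle_connect(sender, target, in_range_of_sender):
        # The target becomes occupied by the sender; every other unit within the
        # sender's beams becomes covered by the sender.
        for p in in_range_of_sender:
            if p is target:
                p.occupied_by = sender
            else:
                p.covered_by.add(sender)

    def handle_disconnect(sender, in_range_of_sender):
        # When the sender disconnects (or is preempted), its marks are cleared.
        for p in in_range_of_sender:
            if p.occupied_by is sender:
                p.occupied_by = None
            p.covered_by.discard(sender)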
E. Illustrative Routing Functionality
[0091] In summary, a data center contains plural groups (e.g., racks). Each rack, in turn,
includes plural computing units. In one case, the data center uses wireless communication
to couple the racks together, e.g., to perform inter-group communication. Moreover, the
data center uses wireless communication to couple individual computing units within a
group together, e.g., to perform intra-group communication.
[0092] A data center may utilize the above-described connections to transfer data from a
source computing unit in a first group to a destination computing unit in a second group
over a communication path that includes plural segments or hops. One or more segments
may occur within a particular group; one or more other segments may occur between two
different groups. Further, the path may pass through one or more intermediary groups.
[0093] For instance, note the example of Fig. 20. Here a computing unit in group A
sends data to a first computing unit in group B. The first computing unit in group B sends
the data to a second computing unit in group B, which, in turn, then sends the data to a
third computing unit in group B. The third computing unit in group B then sends the data
to some other computing unit in some other group, and so on.
[0094] The switching resources of each individual computing unit collectively form a
switching fabric within the data center. That switching fabric includes routing
functionality for accomplishing the type of transfer described above. Fig. 21 provides a
high-level depiction of this concept. Namely, Fig. 21 shows a data center 2102 that
includes a plurality of groups of computing units. The switching resources of each
computing unit collectively provide a switching fabric 2104.
[0095] In general, the switching fabric 2104 can form a graph that represents the
possible connections within a data center. The distributed nodes in the graph represent
computing units; the edges represent connections among the computing units. The
switching fabric 2104 can form this graph by determining what duplex communication
links can be established by each computing unit. More specifically, the switching fabric
2104 can distinguish between links that perform intra-group routing and links that perform
inter-group routing. Further, the switching fabric 2104 can also identify one-way links to
be avoided (because they are associated with permanent interference).
[0096] The switching fabric 2104 can form this graph in a distributed manner (in which
each node collects connectivity information regarding other nodes in the switching fabric
2104), and/or a centralized manner (in which one or more agents monitor the connections
in the switching fabric 2104). In one case, each node may have knowledge of just its
neighbors. In another case, each node may have knowledge of the connectivity within
switching fabric 2104 as a whole. More specifically, the nodes may maintain routing
tables that convey connectivity information, e.g., using any algorithm or combination
thereof (e.g., distance-vector or path-vector protocol algorithms, link-state algorithms, etc.).
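As a minimal, non-limiting sketch of how such a graph might be assembled (assuming each computing unit reports the links it has verified; the function and argument names are introduced here for illustration), one-way links are discarded as permanent interference and the remaining duplex links are tagged as intra-group or inter-group:

    def build_fabric_graph(reported_links, group_of):
        # reported_links: iterable of (a, b) pairs meaning "a can reach b".
        # group_of: dict mapping each computing unit to its group (rack) identifier.
        # Returns {unit: {neighbor: "intra" | "inter"}} containing only duplex links.
        directed = set(reported_links)
        graph = {}
        for a, b in directed:
            if (b, a) not in directed:
                continue                      # one-way link: permanent interference
            kind = "intra" if group_of[a] == group_of[b] else "inter"
            graph.setdefault(a, {})[b] = kind
            graph.setdefault(b, {})[a] = kind
        return graph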
[0097] The switching fabric 2104 can implement the routing using any type of general
routing strategy or any combination of routing strategies. Generally, for instance, the
switching fabric 2104 can draw from any one or more of the following routing strategies:
unicast, in which a first computing unit sends data to only a second computing unit;
broadcast, in which a computing unit sends data to all other computing units in the data
center; multicast, in which a computing unit sends data to a subset of computing units; and
anycast, in which a computing unit sends data to any computing unit that is selected from
a set of computing units (e.g., based on random-selection considerations, etc.), and so on.
[0098] More specifically, the switching fabric 2104 can use any combination of static or
dynamic considerations in routing messages within the data center 2102. The switching
fabric 2104 can use any metric or combination of metrics in selecting paths. Further, the
switching fabric 2104 can use, without limitation, any algorithm or combination of
algorithms in routing messages, including algorithms based on shortest path considerations
(e.g., based on Dijkstra's algorithm), heuristic considerations, policy-based considerations,
fuzzy logic considerations, hierarchical routing considerations, geographic routing
considerations, dynamic learning considerations, quality of service considerations, and so
on. For example, in the scenario shown in Fig. 20, the switching fabric 2104 can use a
combination of random path selection and shortest path analysis to route data through the
switching fabric 2104.
[0099] In addition, the switching fabric 2104 can adopt any number of the following
features to facilitate routing.
[00100] Cut-through switching. The switching fabric 2104 can employ cut-through
switching. In this approach, any participant (e.g., node) within the switching fabric 2104
begins transmitting a message before it has received the complete message.
[00101] Deadlock and livelock prevention (or reduction). The switching fabric 2104 can
use various mechanisms to reduce or eliminate the occurrence of deadlock and livelock.
In these circumstances, a message becomes hung up because it enters an infinite loop or
because it encounters any type of inefficiency in the switching fabric 2104. The switching
fabric 2104 can address this situation by using any type of time-out mechanism (which
sets a maximum amount of time for transmitting a message), and/or a hop limit
mechanism (which sets a maximum number of hops that a message can take in advancing
from a source node to a destination node), and so forth. Upon encountering such a time-out
or hop limit, the switching fabric 2104 can resend the message.
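A sketch of the hop-limit mechanism mentioned above might look as follows (the message fields, the limit of 16 hops, and the callbacks are assumptions used only for illustration):

    MAX_HOPS = 16                             # illustrative value, not a prescribed limit

    def forward(message, send_to_next_hop, notify_source):
        # Decrement the hop budget before relaying; once exhausted, drop the
        # message and ask the source to resend, which breaks potential loops.
        message["hops_left"] = message.get("hops_left", MAX_HOPS) - 1
        if message["hops_left"] <= 0:
            notify_source(message["source"], message["id"])   # trigger a resend
            return False
        send_to_next_hop(message)
        return True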
[00102] Fig. 22 shows another provision that can be adopted to reduce the risk of
deadlock and the like. In this case, a data center assigns a first subset of communication
elements for handling communication in a first direction and a second subset of elements
for handling communication in a second direction. For example, Fig. 22 shows a portion
of an inner surface 2202 of a cylindrical group. A first subset of communication elements
(such as communication element 2204) is assigned to forward data in an upward direction,
and a second subset of communication elements (such as communication element 2206) is
assigned to forward data in a downward direction. A data center can assign roles to different
communication elements in any way, such as by interleaving elements having different
roles based on any type of regular pattern (such as a checkerboard pattern, etc.). Or the
data center can assign roles to different communication elements using a random
assignment technique, and so on. In advancing in a particular direction, the switching
fabric 2104 can, at each step, select from among nodes having the appropriate routing
direction (e.g., by making a random selection among the nodes). Generally, this provision
reduces the possibility that an infinite loop will be established in advancing a message
from a source node to a destination node.
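One non-limiting way to realize the interleaved assignment described above is the checkerboard pattern mentioned in the text; the indexing by (row, column) position and the random choice among eligible next hops are assumptions of this sketch:

    import random

    def assigned_direction(row, col):
        # Checkerboard pattern: alternate "up" and "down" forwarding roles.
        return "up" if (row + col) % 2 == 0 else "down"

    def pick_next_hop(candidate_positions, wanted_direction):
        # Randomly choose among reachable elements whose assigned role matches
        # the direction in which the message must travel.
        eligible = [pos for pos in candidate_positions
                    if assigned_direction(*pos) == wanted_direction]
        return random.choice(eligible) if eligible else None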
[00103] Failsafe mechanisms. The wireless architecture of the data center 2102 is
well-suited for handling failures. A first type of failure may occur within one or more
individual computing units within a group. A second type of failure may affect an entire
group (e.g., rack) within the data center 2102. Failure may represent any condition which
renders functionality completely inoperable, or which causes the functionality to exhibit
suboptimal performance. The switching fabric 2104 can address these situations by
routing a message "around" failing components. For example, in Fig. 23, assume that
group 2302 and group 2304 have failed within a data center. In the absence of this
failure, the switching fabric 2104 may have routed a message along a path defined by A,
B, and C. Upon occurrence of the failure, the switching fabric 2104 may route the
message along a more circuitous route (such as the path defined by V, W, X, Y, and Z), to
thereby avoid the failed groups (2302, 2304). Any routing protocol can be used to achieve
this failsafe behavior.
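The detour of Fig. 23 can be sketched as a search over the fabric graph (for example, the adjacency structure built in the earlier sketch) after pruning units that belong to failed groups; breadth-first search is used here purely for illustration and is not a prescribed routing protocol:

    from collections import deque

    def route_around_failures(graph, source, destination, failed_groups, group_of):
        # Breadth-first search that skips any unit whose group has failed, so the
        # returned path detours around the failed racks.
        def healthy(unit):
            return group_of[unit] not in failed_groups
        if not (healthy(source) and healthy(destination)):
            return None
        parents = {source: None}
        frontier = deque([source])
        while frontier:
            unit = frontier.popleft()
            if unit == destination:
                path = []
                while unit is not None:
                    path.append(unit)
                    unit = parents[unit]
                return path[::-1]
            for neighbor in graph.get(unit, {}):
                if healthy(neighbor) and neighbor not in parents:
                    parents[neighbor] = unit
                    frontier.append(neighbor)
        return None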
[00104] In closing, the description may have described various concepts in the context of
illustrative challenges or problems. This manner of explication does not constitute an
admission that others have appreciated and/or articulated the challenges or problems in the
manner specified herein.
[00105] Further, although the subject matter has been described in language specific to
structural features and/or methodological acts, it is to be understood that the subject matter defined
in the appended claims is not necessarily limited to the specific features or acts described
above. Rather, the specific features and acts described above are disclosed as example
forms of implementing the claims.
CLAIMS
1. A data center, comprising:
at least three computing units, each computing unit comprising:
processing resources for performing a computing function;
memory resources for storing data;
at least two wireless communication elements, each for communicating with
at least one other computing unit using wireless communication, and each forming
a directionally-focused beam; and
switching resources for coupling together the processing resources, memory
resources, and said at least two wireless communication elements.
2. The data center of claim 1, wherein said at least three computing units
comprise at least two groups of computing units, and wherein each computing unit
includes at least one intra-group wireless communication element for
communicating with at least one other computing unit in a local group, and at least
one inter-group communication element for communicating with at least one other
computing unit in at least one neighboring group.
3. The data center of claim 1, wherein said at least three computing units
comprise a group of computing units that form a columnar structure, the columnar
structure having an inner free-space region for accommodating intra-group
communication among computing units within the group.
4. The data center of claim 1, wherein at least a subset of computing units each
includes at least one wired communication element for communicating with an
external entity.
5. The data center of claim 1, wherein said at least three computing units are
placed with respect to each other to avoid permanent interference, wherein
permanent interference exists when a first computing unit can communicate with a
second computing unit, but the second computing unit cannot directly communicate
with the first computing unit.
6. The data center of claim 1, wherein said at least three computing units form
a switching fabric for transmitting payload data from a source computing unit to a
destination computing unit via at least one intermediary computing unit.
7. The data center of claim 6, wherein at least one computing unit involved in
transmission of the payload data is configured to use at least part of its memory
resources, on demand and if available, as a buffer for temporarily storing the
payload data being transmitted by the switching fabric.
8. The data center of claim 6, wherein the switching fabric is configured to use
a routing strategy that routes a message to avoid suboptimal-performing computing
units in the data center.
9. The data center of claim 6, wherein the switching fabric is configured to use
a first subset of computing units for transmitting payload data in a first direction
and a second subset of computing units for transmitting payload data in a second
direction.
10. The data center of claim 1, wherein said at least three computing units are
configured to communicate with each other via wireless communication using a
media access protocol that addresses a hidden terminal phenomenon.
11. The data center of claim 1, wherein said at least three computing units are
configured to communicate with each other by transmitting control data and
payload data, a ratio of control data to payload data being selected to provide a
target latency-related and capacity-related performance.
12. The data center of claim 11, wherein said at least three computing units are
configured to communicate the control data using a plurality of slots defined with
respect to frequency and time.
13. A method for placing computing units in a data center, comprising:
placing a new computing unit in the data center at a candidate location,
relative to one or more other previously-placed computing units, each computing
unit having at least one communication element for communicating with at least
one other computing unit using wireless communication, said at least one
communication element forming a directionally-focused beam;
determining whether placement of the new computing unit at the candidate
location creates permanent interference in the data center, wherein permanent
interference exists when a first computing unit can communicate with a second
computing unit, but the second computing unit cannot directly communicate with
the first computing unit;
committing the new computing unit to the candidate location if there is no
permanent interference; and
changing a location of at least one computing unit if there is permanent
interference, followed by repeating said determining.
14. The method of claim 13, wherein said determining comprises providing an
alert if permanent interference is detected.
15. The method of claim 13, further comprising forming topological information
regarding interconnection of computing units within the data center, based on
assessed interaction capabilities of each computing unit.