
Device And Method For Data Load Balancing

Abstract: The present invention relates to a method for determining a data traffic load distribution over a network in an IP Multimedia Subsystem (IMS), whereby said network comprises a client source node and at least two server destination nodes. The at least two server destination nodes have a different capacity for handling data transmission requests. The method comprises the steps of: said client source node sending a request for data transmission over the network to the at least two server destination nodes according to a client-server protocol; each of the at least two server destination nodes transmitting, in response to said request, an indication on the status of its actual capacity; and said client source node exploiting the status indications for deciding on load distribution of the data traffic over the server destination nodes.


Patent Information

Application #
Filing Date
03 January 2013
Publication Number
28/2014
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
patent@depenning.com
Parent Application

Applicants

ALCATEL LUCENT
3 avenue Octave Gréard F 75007 Paris

Inventors

1. TSYEN Luc
Heide 3 B 2230 Herselt
2. VERBANDT Hugo
Alkstraat 36 B 2170 Merksem

Specification

DEVICE AND METHOD FOR DATA LOAD BALANCING
Field of the Invention
[0001] The present invention generally relates to data traffic load balancing in
a distributed network system. More in particular, it relates to a method and device for
performing such load balancing.
Background of the Invention
[0002] The application of load balancing (also referred to as load sharing)
algorithms in distributed network systems is commonly known in the art. Such a
distributed network can for example be a communication network over which an IP
Multimedia Subsystem (IMS) provides IP multimedia services to fixed and/or mobile
terminals. A terminal registers with the IMS, after which the terminal is able to
request multimedia services. The multimedia services are provided by application
servers. The IMS registers a terminal with an application server, which provides a
requested service to the terminal. Multiple application servers may provide services
to the same terminal at the same moment.
[0003] Such an IMS typically contains a number of nodes with various
functions. A first set of nodes are related to routing of communication through the
network. Examples of such nodes are the various Call Session Control Functions
(CSCF), which can act as a proxy towards the network, perform authentication and
authorization checks and ensure that messages of a particular user arrive at the
appropriate nodes in the network or vice versa. A second type of nodes concerns the
already mentioned Application Servers (AS), which provide certain services to the
users of the IMS network. Such services can range from presence and location to
more specific services such as Push to Talk over cellular networks services. A third
type of nodes are nodes related to user management and storage of user related
information such as account information, authorization information, the services with
which a particular user is associated, network options enabled or disabled for a
particular user, etc. A typical example of the third type of nodes is the Home
Subscriber Server (HSS) which acts as a large database for user profiles and other
user related information in an IMS network. In the Home Subscriber Server (HSS) the
identification provided by the terminal which is being registered, is looked up. The
HSS confirms the validity of the terminal identification.
[0004] Nodes in the IMS network such as an AS or CSCF can communicate
with the HSS and retrieve information concerning a specific user from the HSS or can
supply information to the HSS which is stored in the HSS or which is used to update
stored information concerning that specific user. The TISPAN/3GPP specifications of
a HSS provide the possibility of storing application specific information for users in
the HSS, in addition to the user related information which is used for authentication,
authorization, billing, etc.
[0005] Load balancing algorithms are applied to optimize use of resources by
distributing traffic over multiple paths for transferring data from a client source node
to a server destination node. Load balancing decisions are typically made on the
outbound interface of a client or proxy node and so load balancing must be
configured on the outbound interface.
[0006] Classical load sharing algorithms (round robin, sequential, cyclic,
random, etc.) work well in configurations where all receptors have equal
capacity, i.e. in cases where the load has to be spread evenly over a number of
destination nodes. Collection capacity (i.e. server capacity) refers to the degree to
which resources are available to receive and process new incoming requests from
client nodes. Resources correspond to computing power, memory size, storage, etc.
[0007] Classical round robin, as illustrated in Fig. 1, distributes the load evenly
over the different collectors (servers) of the client-server configuration. It does not
take the collection capacity of the individual server nodes into account. As shown in
Fig. 1, this leads to a different load at the individual node level when the nodes have a
different collection capacity. The line in the figure represents the equal load sharing
over the configuration. The various relative load percentages for the four nodes in the
example of Fig. 1 are indicated.
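The uneven relative load that paragraph [0007] describes can be sketched as follows; this is a minimal illustration, and the capacity figures are assumptions, not the values of the patent figure:

```python
from itertools import cycle

def round_robin_relative_load(capacities, num_requests):
    """Distribute requests cyclically over the servers and return each
    server's relative load (requests received / capacity). With unequal
    capacities the relative loads differ, as Fig. 1 illustrates."""
    counts = [0] * len(capacities)
    selector = cycle(range(len(capacities)))
    for _ in range(num_requests):
        counts[next(selector)] += 1
    return [count / cap for count, cap in zip(counts, capacities)]

# Four servers of unequal (illustrative) capacity: each still receives
# 1000 of the 4000 requests, so the smallest server is loaded 8x more.
loads = round_robin_relative_load([100, 200, 400, 800], 4000)
```

Round robin gives every server the same request count; dividing by capacity shows the per-node load diverging exactly as in the figure.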
[0008] However, in the situation illustrated in Fig. 2, where a configuration
contains server nodes of different collection capacity, classical round robin
selection causes temporary overload conditions in the nodes with less collection and
processing capacity when the equal load sharing exceeds a certain level. The
servers then return Boolean responses (meaning e.g. "no space") indicating that the
request cannot be processed. The sender (client node) must then select a new
collector (i.e. server node) and retransmit the request. This obviously leads to an
increase of the overall traffic over the network and thus to an inefficient usage of
resources.
[0009] Hence, the existing algorithms have serious limitations in certain
situations and environments.
When a configuration grows, it would ideally be extended with nodes that have the
best cost/benefit ratio at the time of the extension. In many, or even most, cases that
would be a higher-performance, higher-capacity platform and/or a product of a
different vendor, with different performance and capacity characteristics. Hence, it is
likely that such a multi-node configuration does not remain homogeneous. Such an
extended configuration with heterogeneous nodes cannot be dealt with appropriately
by the prior art solutions discussed above.
[0010] Also, when the capacity of some nodes of a configuration is shared by
different functions (or applications), a performance reduction can occur. The existing
selection algorithms do not take such transient conditions into account. Batch
processes are therefore often scheduled at low-traffic times, so that a minimal server
capacity remains available for collection.
[0011] In classical load sharing on mixed (i.e. different capacity) configurations,
where load must be distributed taking into account the capacity of the server nodes,
every sender needs to be pre-configured with selection rules (e.g. a population of
load sharing rules). Such a population would typically be static. When a specific node
experiences additional load due to other processes, a batch job, etc., transient
overload conditions can occur that cannot be solved with a static load distribution.
[0012] Most systems rely on a (Boolean) indication (meaning "ok" or "not ok")
in responses to determine the next selection. These back-off responses obviously cause
additional load. The overhead traffic also increases with the load level (as there are
more negative responses). The Boolean on/off responses may introduce oscillatory
behaviour, possibly leading to instabilities and to a collapse instead of a graceful
degradation. Further, the available capacity is inefficiently used, as low-capacity
server nodes receive relatively more traffic.
[0013] An example of static load distribution rules is found in EP2139195,
where a solution for registering a terminal with an application server is proposed. A
static set of predefined rules is provided which e.g. can be used for obtaining load
balancing among a group of servers.
[0014] A typical example where the above-sketched problems may occur,
concerns the 3GPP Rf interface, where multiple data generators transmit records to
multiple data collectors. Another example is the 3GPP Cx interface, when multiple
FEPs (Front End Processes) are used to access the HSS (Home Subscriber Server).
Other interfaces, protocols, environments may be possible, as the skilled person will
appreciate.
Aims of the invention
[0015] The present invention aims to provide a method and device that can be
applied when using load balancing in a configuration comprising nodes of different
capacity and wherein the limitations of the prior art solutions are overcome.
Summary
[0016] The present invention relates to an adaptive method for determining a
data traffic load distribution over a network in an IP Multimedia Subsystem (IMS).
The network comprises a client source node and at least two server destination
nodes, whereby the at least two server destination nodes have a different capacity
for handling data transmission requests. The method comprises the steps of
- the client source node sending a request for data transmission over the
network to the at least two server destination nodes according to a client-server
protocol,
- each of the at least two server destination nodes transmitting, in response to
the request, an indication on the status of its actual capacity,
- the client source node exploiting said status indications for deciding on load
distribution of the data traffic over the server destination nodes.
[0017] The proposed solution indeed meets the objective of the invention.
Each time a client node requests resources for transmitting data traffic, the server
destination nodes provide an indication of their actual status, e.g. the percentage of
its available maximum capacity that is in use (or that is free). From the information
coming from the server nodes, it can then be decided how to balance the load in view
of the next data traffic transmission by the client node that had requested resources.
The decision indicates which server node to use for the data transmission requested
by the client node.
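The decision step of paragraph [0017] can be sketched minimally as follows; the simplest policy picks the server reporting the most free capacity (the server names and the percentage representation are assumptions for illustration, and the weighted random selection described later in paragraph [0047] is the more complete approach):

```python
def choose_destination(status_indications):
    """status_indications maps a server's identity to the free-capacity
    percentage it returned in response to the client's request; pick the
    server with the most free capacity for the next data transmission."""
    return max(status_indications, key=status_indications.get)

# Hypothetical replies from three destination nodes:
dest = choose_destination({"server-a": 12, "server-b": 55, "server-c": 30})
# server-b reported the most free capacity and is selected
```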
[0018] In a preferred embodiment the indication on the status of the actual
capacity represents the amount of used capacity or the amount of free capacity.
[0019] The status of the actual capacity is advantageously determined taking
into account at least memory size, CPU processing time or a combination of these
parameters.
[0020] In another embodiment the network further comprises a proxy node for
establishing connection between the client source node and the server destination
node.
[0021] In a second aspect the invention relates to an interface device for use in
a network in an IP Multimedia Subsystem (IMS), whereby the network comprises a
client source node and at least two server destination nodes, the at least two server
destination nodes having a different capacity for handling data transmission requests.
The interface device comprises
- reception means for receiving from the at least two server destination nodes
an indication of the status of its actual capacity,
- processing means for determining a load distribution over the destination
nodes based on the received status indications,
- communication means for communicating the determined load distribution to
the client source node.
[0022] Preferably the processing means is arranged for performing a load
balancing algorithm, wherein the status indications are used as weight factors.
[0023] In another embodiment the interface device further comprises storage
means for storing the received status indications.
[0024] In a most preferred embodiment the interface device is arranged for
operating according to the RADIUS or Diameter protocol.
[0025] The invention also relates to a client source node for use in a network in
an IP Multimedia Subsystem, IMS, comprising an interface device as previously
described.
Brief Description of the Drawings
[0026] Fig. 1 illustrates a classical round robin algorithm as known in the art.
[0027] Fig. 2 illustrates the limitations of the prior art solution of Fig. 1.
[0028] Fig. 3 illustrates a set-up with a Diameter proxy node.
[0029] Fig. 4 illustrates an exemplary application of the present invention.
[0030] Fig. 5 illustrates the Diameter redirect behaviour.
[0031] Fig. 6 illustrates the performance of a round robin algorithm (prior art).
[0032] Fig. 7 illustrates the performance of a solution according to the
invention.
Detailed Description of the Invention
[0033] In the present invention an IP Multimedia Subsystem (IMS) is
considered for providing multimedia services over a network in a client-server
configuration. The server nodes do not all have the same server capacity. Server
capacity here means the capacity in terms of CPU time, memory size, combined
CPU and memory size, a combination of some system measurements, etc. Capacity
in this context gives an indication of the ability to treat future (similar) requests.
[0034] In a most preferred embodiment the client-server configuration is
arranged for operating according to the Diameter protocol or RADIUS, the
predecessor of Diameter.
[0035] RADIUS (Remote Authentication Dial In User Service) is a networking
protocol that provides centralized Authentication, Authorization, and Accounting
(AAA) management for computers to connect and use a network service. Because of
the broad support and the ubiquitous nature of the RADIUS protocol, it is often used
by Internet service providers and enterprises to manage access to the Internet or
internal networks, wireless networks, and integrated e-mail services. RADIUS is a
client/server protocol that runs in the application layer, using UDP as transport.
RADIUS serves three functions, namely authenticating users or devices before
granting them access to a network, authorizing those users or devices for certain
network services, and accounting for usage of those services. RADIUS cannot
deal well with remote access, IP mobility and policy control.
[0036] The Diameter protocol provides an upgrade path for RADIUS. Diameter
controls communication between the authenticator and any network entity requesting
authentication. The Diameter protocol defines a policy protocol used by clients to
perform Policy, AAA and Resource Control. This allows a single server to handle
policies for many services. One of the differences with RADIUS is that it uses more
reliable transport protocols (TCP or SCTP). The Diameter protocol is further
enhanced by the development of the 3GPP IP Multimedia Subsystem (IMS) that was
also discussed above.
[0037] A Diameter application is not a software application, but a protocol
based on the Diameter base protocol. Each application is defined by an application
identifier and can add new command codes and/or new mandatory Attribute-Value
Pairs. Adding a new optional AVP does not require a new application.
[0038] Optionally the client-server network further comprises one or more
proxy nodes arranged between a client and a server. The proxy node is typically
arranged for performing routing type of functions and possibly also buffering. A
Diameter proxy is defined in the Diameter base protocol, RFC 3588. An illustration of a
set-up including a proxy node is given in Fig. 3. The link between the client and
'server-x' is established via the Diameter proxy node. Communication takes place in
the indicated order: the client contacts the proxy, the proxy communicates with the
server in question and relays the response back to the client.
[0039] According to the proposed solution the server nodes return an
indication on their actual capacity, e.g. over the last x seconds, when a request for
data transmission is received from a client node. The indication can for example
contain the percentage of the maximum capacity that is available or, conversely, the
percentage that is already in use.
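A server-side computation of such an indication could look as follows; this is a sketch only, and taking the maximum of CPU and memory utilisation is an assumed combination rule (the specification only requires some measure of CPU and/or memory capacity):

```python
def capacity_indication(cpu_used_pct, mem_used_pct):
    """Return the server's status indication as used/free percentages.
    Using max() is an illustrative assumption: the scarcer resource
    limits the ability to accept further requests."""
    used = max(cpu_used_pct, mem_used_pct)
    return {"used_pct": used, "free_pct": 100 - used}

# A server at 40% CPU but 70% memory reports only 30% free capacity.
status = capacity_indication(40, 70)
```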
[0040] The information about the actual capacity of the server nodes in the
network is received by the interface device according to the present invention and is
used for determining the load distribution over the destination nodes when the data
traffic is transmitted. Servers with a high instantaneous free capacity are selected for
the next data transmission with a higher probability. The selection algorithm also
ensures that the probability to be selected is less for server nodes that have returned
a lower availability indication. The resulting info on the actual load distribution is then
provided to the client source node before the actual transmission starts.
[0041] The interface device according to the invention is preferably comprised
in the source client node. Alternatively it can be a stand-alone device. Such a
stand-alone device would then typically behave (protocol-wise) as a Diameter proxy node.
[0042] The proposed method offers several advantages. This method allows
optimal heterogeneous spreading of traffic over multiple destination nodes according
to the volume capacity of the receptors (i.e. the servers). As such the stability and
efficiency of the communication is substantially increased. Further, data collection
capacity configurations can be built using components of different capacity and
process characteristics. This allows a configuration to grow with components of
different generations or of different vendors. In the prior art solutions on the contrary,
the existing configuration needed an upgrade each time collection capacity was
added.
[0043] An important asset of the invention is that the load distribution adapts
dynamically to new situations. For instance, when remaining collection capacity of a
server node decreases temporarily because a batch process runs on that node, the
traffic distribution is dynamically updated toward a new optimum.
[0044] An example is now provided (see Fig. 4). Again four server destination
nodes with different capacities are considered. The portion of used capacity is
indicated for each node. A client source node has sent a request for transmitting data
traffic to a destination node. Step [1] illustrates the server nodes sending, in response
to said request, an indication of their actual status in terms of capacity. In the
example the server nodes are indicated as CDFs (Charging Data Function, which is
an IMS function defined by 3GPP in TS 32.240). The response messages of the
server nodes contain two parts: a response code (indicating whether the request was
handled or not) and an additional indication (possibly implemented as a sub-code) of a
measure of the capacity (in terms of CPU and/or memory size) that is still available.
As the skilled person will readily appreciate, other structures of response message
can be envisaged as well. Advantageously the indications on the availability of the
various nodes are stored in the CTF (Charging Trigger Function), which is an IMS
function defined by 3GPP in TS 32.240.
It is to be noted that the CTF implements the Rf Diameter client. In the example the
interpretation of the received load indications is indeed a part of the CTF. However,
this can also be a stand-alone function or a function implemented in a Diameter
Proxy or even a Diameter redirect.
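The two-part response message and its storage in the CTF could be modelled as in the following sketch; the field names and the storage structure are illustrative assumptions, not the actual 3GPP-defined AVPs:

```python
from dataclasses import dataclass

@dataclass
class CollectionResponse:
    result_code: int        # whether the request was handled; 2001 is
                            # the Diameter DIAMETER_SUCCESS result code
    free_capacity_pct: int  # additional indication (e.g. a sub-code) of
                            # the CPU/memory capacity still available

# The CTF stores the latest availability indication per server node:
ctf_store = {}

def on_response(server_id, response):
    ctf_store[server_id] = response.free_capacity_pct

on_response("cdf-1", CollectionResponse(result_code=2001, free_capacity_pct=60))
```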
[0045] The Diameter protocol indeed also defines a Diameter redirect node.
This node is consulted for routing purposes prior to the actual communication
between client and server. In case there is a redirect node in the signaling path, the
actual load distribution is still performed by the client. The routing task is in this case
shared over the redirect node and the client node.
[0046] For selecting the server destination node for the next message, the
availability information is taken into account as a weight by the load balancing
algorithm. At startup (or at low traffic) the CTF
distributes the load evenly over all servers because every server returns a high
availability figure (e.g. 95%). When load increases, the availability figure returned by
servers with lower capacity decreases. Server nodes with more available capacity left
(and thus returning higher availability figures) have a higher probability of being
selected as destination than servers with little or no free capacity left; a relatively
high number of requests will thus carry the address of such a high-availability server.
Heavily loaded destination servers are selected less often, so only a low number of
requests get the address of such a server. In this way an optimal load
sharing is achieved. When collection conditions change at one of the servers (e.g. a
CPU consuming batch job is executed), the distribution pattern is dynamically
adapted to the actual situation.
[0047] The purpose of any load balancing algorithm is to generate dynamically
(in real-time) a weight indication per server, and to use those indications to apply a
weighted server selection (as used for instance in DNS). One possible algorithm
could be implemented as follows. The load distribution function
o receives and stores a load indication for each of the servers,
o builds a dynamic list or array with a number of entries per server
corresponding with the received value of that server. When the load indicator
corresponds to available resources, the value can be used as is. When
the load indicator corresponds to used resources, then (100 − %used load)
is applied,
o stores in every element of the array the address of (or a pointer to) the
corresponding server.
For every new request to be transmitted, the sender generates a random number
between 0 and (load indic. server 1 + load indic. server 2 + ...).
The random value is used as an index to access the dynamic array. The destination for
the request is the one stored in that array element.
When the received load indication for a server x is high, a relatively high number
of array entries will have the address of server x. When the received load
indication for a server y is low, consequently a low number of array entries will have
the address of server y. The probability that the random number points to server
x is then higher.
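The array construction and random selection of paragraph [0047] can be sketched as follows; this is a minimal illustration under the assumption that the load indications are integer percentages:

```python
import random

def build_selection_array(load_indications, indicates_free=True):
    """Build the dynamic array described above: each server gets a
    number of entries equal to its free-capacity figure (or 100 minus
    its used-capacity figure), so that a uniformly random index yields
    a capacity-weighted server selection."""
    array = []
    for server, value in load_indications.items():
        weight = value if indicates_free else 100 - value
        array.extend([server] * weight)
    return array

def select_server(load_indications, indicates_free=True):
    array = build_selection_array(load_indications, indicates_free)
    # Random index between 0 and the sum of the load indications,
    # used to pick the destination stored in that array element.
    return array[random.randrange(len(array))]
```

A server reporting 95% free capacity occupies 95 array entries and is therefore selected with probability 95 divided by the sum of all indications, matching the behaviour described in paragraph [0046].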
[0048] An example case relates to a 3GPP Rf traffic (Diameter) load shared
over a farm of nodes with different collection capacity. Feedback of available capacity
ensures optimal load sharing.
[0049] To illustrate the performance gain achieved by the solution proposed in
the present invention the following comparison with the performance of a round robin
algorithm as in the prior art is given.
[0050] Fig. 6 shows a performance curve obtained by applying a round robin
selection whereby a cut-off threshold of 80% of the available server capacity is used.
A 'not-OK' message is sent from the moment this threshold level is reached. Traffic is
resumed when the level has dropped to 60%. These values are typically set per
configuration to "tune" the stability in overload conditions. They define a kind of
hysteresis curve which is followed in order to avoid a situation wherein the server
status is oscillating between the on and off state. The figure shows the number of
requests per second as a function of time for a configuration with four servers of
different capacity, as well as the accumulated throughput. Overhead traffic is
observed (especially on the lower capacity nodes). A high fluctuation on the
accumulated throughput (total bandwidth) is to be noted. The maximum level (in this
case equal to 80% of the sum of the individual server node capacities) cannot be
reached.
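The 80%/60% hysteresis described for Fig. 6 can be modelled with the following sketch of the prior-art on/off behaviour; the class and method names are illustrative:

```python
class ServerGate:
    """Prior-art hysteresis: a server stops accepting requests
    ('not-OK') once its load reaches the cut-off, and only resumes
    after load drops back to the lower resume level, to avoid rapid
    oscillation between the on and off states."""
    def __init__(self, cutoff_pct=80.0, resume_pct=60.0):
        self.cutoff_pct = cutoff_pct
        self.resume_pct = resume_pct
        self.accepting = True

    def update(self, load_pct):
        if self.accepting and load_pct >= self.cutoff_pct:
            self.accepting = False   # start returning 'not-OK'
        elif not self.accepting and load_pct <= self.resume_pct:
            self.accepting = True    # traffic resumed
        return self.accepting
```

Note the band between 60% and 80%: a server that has tripped stays off until load falls all the way to the resume level, which is what produces the throughput fluctuation visible in Fig. 6.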
[0051] Fig. 7 illustrates the performance obtained when using the capacity
feedback solution according to the present invention. Again the number of requests
per second as a function of time is shown for the same configuration of server nodes
as in Fig. 6, as well as the accumulated and the optimal throughput. As opposed to
Fig. 6, no overhead traffic is observed anymore. The oscillating behaviour has
disappeared. After a transient time the accumulated capacity corresponds to the
optimal level.
[0052] Although the present invention has been illustrated by reference to
specific embodiments, it will be apparent to those skilled in the art that the invention
is not limited to the details of the foregoing illustrative embodiments, and that the
present invention may be embodied with various changes and modifications without
departing from the scope thereof. The present embodiments are therefore to be
considered in all respects as illustrative and not restrictive, the scope of the invention
being indicated by the appended claims rather than by the foregoing description, and
all changes which come within the meaning and range of equivalency of the claims
are therefore intended to be embraced therein. In other words, it is contemplated to
cover any and all modifications, variations or equivalents that fall within the spirit and
scope of the basic underlying principles and whose essential attributes are claimed in
this patent application. It will furthermore be understood by the reader of this patent
application that the words "comprising" or "comprise" do not exclude other elements
or steps, that the words "a" or "an" do not exclude a plurality, and that a single
element, such as a computer system, a processor, or another integrated unit may
fulfil the functions of several means recited in the claims. Any reference signs in the
claims shall not be construed as limiting the respective claims concerned. The terms
"first", "second", third", "a", "b", "c", and the like, when used in the description or in
the claims are introduced to distinguish between similar elements or steps and are
not necessarily describing a sequential or chronological order. Similarly, the terms
"top", "bottom", "over", "under", and the like are introduced for descriptive purposes
and not necessarily to denote relative positions. It is to be understood that the terms
so used are interchangeable under appropriate circumstances and embodiments of
the invention are capable of operating according to the present invention in other
sequences, or in orientations different from the one(s) described or illustrated above.
CLAIMS
1. A method for determining a data traffic load distribution over a network in an
IP Multimedia Subsystem, IMS, said network comprising a client source node
and at least two server destination nodes, said at least two server destination
nodes having a different capacity for handling data transmission requests, said
method comprising the steps of
- said client source node sending a request for data transmission over said
network to said at least two server destination nodes according to a client-server
protocol,
- each of said at least two server destination nodes transmitting, in response to
said request, an indication on the status of its actual capacity,
- said client source node exploiting said status indications for deciding on load
distribution of said data traffic over said server destination nodes.
2. Method for determining a traffic load distribution as in claim 1, wherein said
indication on the status of the actual capacity represents the amount of used
capacity or the amount of free capacity.
3. Method for determining a traffic load distribution as in claim 1 or 2, wherein the
status of the actual capacity is determined taking into account at least memory
and/or processing time.
4. Method for determining a traffic load distribution as in any of claims 1 to 3,
whereby said network further comprises a proxy node for establishing
connection between said client source node and said server destination node.
5. Interface device for use in a network in an IP Multimedia Subsystem, IMS,
said network comprising a client source node and at least two server
destination nodes, said at least two server destination nodes having a different
capacity for handling data transmission requests, said interface device
comprising
- reception means for receiving from said at least two server destination nodes
an indication of the status of its actual capacity,
- processing means for determining a load distribution over said destination
nodes based on the received status indications,
- communication means for communicating the determined load distribution to
said client source node.
6. Interface device as in claim 5, wherein said processing means is arranged for
performing a load balancing algorithm, wherein said status indications are
used as weight factors.
7. Interface device as in any of claims 5 or 6, further comprising storage means
for storing said received status indications.
8. Interface device as in any of claims 5 to 7, arranged for operating according to
the RADIUS or Diameter protocol.
9. Client source node for use in a network in an IP Multimedia Subsystem, IMS,
comprising an interface device as in any of claims 5 to 8.

Documents

Application Documents

# Name Date
1 66-CHENP-2013 POWER OF ATTORNEY 03-01-2013.pdf 2013-01-03
2 66-CHENP-2013 PCT PUBLICATION 03-01-2013.pdf 2013-01-03
3 66-CHENP-2013 FORM-5 03-01-2013.pdf 2013-01-03
4 66-CHENP-2013 FORM-3 03-01-2013.pdf 2013-01-03
5 66-CHENP-2013 FORM-2 FIRST PAGE 03-01-2013.pdf 2013-01-03
6 66-CHENP-2013 FORM-18 03-01-2013.pdf 2013-01-03
7 66-CHENP-2013 FORM-1 03-01-2013.pdf 2013-01-03
8 66-CHENP-2013 DRAWINGS 03-01-2013.pdf 2013-01-03
9 66-CHENP-2013 DESCRIPTION (COMPLETE) 03-01-2013.pdf 2013-01-03
10 66-CHENP-2013 CORRESPONDENCE OTHERS 03-01-2013.pdf 2013-01-03
11 66-CHENP-2013 CLAIMS SIGNATURE LAST PAGE 03-01-2013.pdf 2013-01-03
12 66-CHENP-2013 CLAIMS 03-01-2013.pdf 2013-01-03
13 66-CHENP-2013.pdf 2013-01-07
14 66-CHENP-2013 FORM-3 19-06-2013.pdf 2013-06-19
15 66-CHENP-2013 CORRESPONDENCE OTHERS 19-06-2013.pdf 2013-06-19
16 66-CHENP-2013 FORM-3 08-10-2013.pdf 2013-10-08
17 66-CHENP-2013 CORRESPONDENCE OTHERS 08-10-2013.pdf 2013-10-08
18 66-CHENP-2013 FORM-3 07-02-2014.pdf 2014-02-07
19 66-CHENP-2013 CORRESPONDENCE OTHERS 07-02-2014.pdf 2014-02-07
20 abstract66-CHENP-2013.jpg 2014-05-08
21 66-CHENP-2013 FORM-3 20-10-2014.pdf 2014-10-20
22 66-CHENP-2013 CORRESPONDENCE OTHERS 20-10-2014.pdf 2014-10-20
23 66-CHENP-2013 FORM-3 03-03-2015.pdf 2015-03-03
24 66-CHENP-2013 CORRESPONDENCE OTHERS 03-03-2015.pdf 2015-03-03
25 66-CHENP-2013-FER.pdf 2018-03-05
26 66-CHENP-2013-AbandonedLetter.pdf 2018-10-03

Search Strategy

1 search_15-12-2017.pdf