
Method And Device For Retrieving Data And Transforming Same

Abstract: Method for extracting information from a data file comprising a first step wherein the data are transmitted to a device (3.1) or "tokenizer" adapted to convert them in the course of a first step into elementary units or "tokens", the elementary units being transmitted to a second step of searching the dictionaries (3.2) and a third step (3.3) of searching in grammars, characterized in that, for the conversion step, a sliding window of given size is used, the data are converted into "tokens" as and when they arrive in the tokenizer and the tokens are transmitted as and when they are formed to the step of searching in dictionaries (3.2), then to the step of searching the grammars (3.3). Figure 3 to be published.


Patent Information

Application #: 4302/CHENP/2008
Filing Date: 13 August 2008
Publication Number: 11/2009
Publication Type: INA
Invention Field: COMMUNICATION

Applicants

THALES
45, RUE DE VILLIERS, 92200 NEUILLY SUR SEINE

Inventors

1. JULIEN LEMOINE
8 ALLEE DES PIERRATS, F-95870 BEZONS

Specification

METHOD AND DEVICE FOR RETRIEVING DATA AND TRANSFORMING SAME INTO QUALITATIVE DATA OF A TEXT-BASED DOCUMENT
The invention relates notably to a method for extracting information and for transforming it into qualitative data of a textual document.
It is used notably in the field of the analysis and the comprehension of textual documents.
In the description, the word "token" denotes the representation of a unit by a bit pattern and "tokenizer" denotes the device adapted to perform this conversion. Likewise, the term "match" denotes "identification" or "recognition".
In the presence of unstructured documents, for example texts, the problem posed is to extract the relevant item of information while managing the complexity and ambiguities of natural language.
Today, information streams are increasingly present and their analysis is necessary if one wishes to improve the productivity and speed of reading of texts.
Several extraction procedures are known in the prior art: for example, the procedure used by AT&T, an example of which is accessible via the Internet link http://www.research.att.com/sw/tools/fsm/, the procedure developed by Xerox, illustrated on the Internet link http://www.xrce.xerox.com/competencies/content-analysis/fst/home.en.html, and the procedure used by Intex/Unitex/Nooj, illustrated on the link http://www-igm.univ-mlv.fr/~unitex/.
However, all these techniques have the drawbacks of not being sufficiently flexible and efficacious, since the stress has been placed on the linguistic aspect and on the power of expression, rather than on the industrial aspect. They do not make it possible to process significant streams in a reasonable time while preserving the quality of analysis.
The invention relies notably on a novel approach: a window size is chosen at the beginning of the method, the "tokens" are processed one by one as they arrive in a stream, and the dictionary search and the grammars then receive the "tokens" one after another, in the case where they are used in a sequential manner.
The subject of the present invention relates to a method for extracting information from a data file comprising a first step wherein the data are transmitted to a device or "tokenizer" adapted to convert them in the course of a first step into elementary units or "tokens", the elementary units being transmitted to a second step of searching the dictionaries and a third step of searching in grammars, characterized in that, for the conversion step, a sliding window of given size is used, the data are converted into "tokens" as and when they arrive in the tokenizer and the tokens are transmitted as and when they are formed to the step of searching in dictionaries, then to the step of searching the grammars.
The subject of the present invention offers notably the following advantages:
• the architecture makes it possible to avoid duplication of data and to use several grammars in parallel or in series without any intermediate result,
• on account of the speed of the procedure implemented, it is possible to apply a multitude of complex grammars and therefore to extract a large amount of information from the documents without degrading the linguistic models,
• the architecture innately manages the priority of the grammars, thereby making it possible to define "tiered models".
Other characteristics and advantages of the present invention will be more apparent on reading the description which follows of an example given by way of wholly nonlimiting illustration accompanied by the figures which represent:
• Figure 1, a functional diagram of the general operation of the processing chain in the field of document analysis,
• Figure 2, a functional diagram of the processing which can be performed in a processing chain,
• Figure 3, a functional diagram of the method according to the invention making it possible to extract entities, relations between these entities, and to convert documents into digital data,
• Figure 4, an exemplary automaton for converting a code (grammatical, flexional, semantic or syntactic) into an integer,
• Figure 5, an automaton making it possible to recognize a series of integers representing the codes (grammatical, flexional, semantic and syntactic) defined in Figure 4,

• Figure 6, a method for constructing an optimal sub-dictionary for a set of grammars on the basis of an original dictionary,
• Figure 7, a method for deleting the empty transitions in a transducer,
• Figure 8, an exemplary automaton for illustrating the method of Figure 7,
• Figure 9, the output of the method of Figure 7 applied to the automaton of Figure 8,
• Figure 10, a set of lemmas and inflected forms before separation into two automata,
• Figure 11, the letter-based automaton for the lemmas of Figure 10,
• Figure 12, the letter-based automaton for the inflected forms of Figure 10,
• Figure 13, the steps of a method making it possible to calculate the successor nodes of a node of the automaton on the basis of an entry,
• Figure 14, a use of the rewrite and extraction grammars,
• Figure 15, a method of detecting the "matches" in an automaton,
• Figure 16, a method of updating the potential "matches", which is used by the method of Figure 15,
• Figure 17, the management of the priority between two grammars G1 and G2 (G2 taking priority over G1) via a procedure for scoring or selecting the "match" of higher priority when there is overlap,
• Figure 18, the management of disambiguation when there is an overlap between an extraction grammar and a disambiguation grammar, and
• Figure 19, an exemplary application of the method according to the invention in respect of a messaging server.
Figure 1 represents a general processing chain for analyzing documents. In the majority of cases, this chain comprises, for example:
- an element intended to convert any entry format to a text format, block 1.1,
- a module for extracting meta-data such as the date, the author, the source, etc., block 1.2,
- a module for processing these documents, block 1.3,
- an indexation module, block 1.4, for searches and subsequent uses.
The method according to the invention lies more particularly at the level of the processing block 1.3.
In Figure 2 are illustrated examples of conventional processing operations such as the summarizing of documents, 4, or the search for duplicate documents, 5.
The function of the method according to the invention is notably to perform the following processing operations:
• the extraction of entities 6: for example the extraction of persons, facts, gravity of a document, feelings, etc.
• the extraction of relations 7 between the entities: for example, relations between dates and facts, between persons and facts, etc.
• the conversion 8 of a document into a set of digital data for a subsequent processing such as automatic classification, knowledge management, etc.
To perform these processing operations, a set of documents is used, for example, in the form of ASCII or Unicode files or memory areas. The method for transforming a text described in Figure 3 is then applied, this decomposing notably into 3 principal steps:
1) splitting of a source document into a set of elementary units or "tokens", by a device or "Tokenizer", 3.1, suitable for converting a document into elements,
2) recognition of the simple and compound units, 3.2, present in the dictionaries,
3) application of the grammars, 3.3.
Step 3.1
The method according to the invention uses a sliding window of units, that is to say it preserves only the last X "tokens" of the text (X being a fairly large number since it determines the maximum number of units which will be able to be rewritten by a grammar). The size of the sliding window is chosen at the beginning of the method.
During the step of converting the data into "tokens", the tokenizer 3.1 converts the data as and when they are received before transmitting them in stream form to the step of searching in a dictionary, 3.2.
The types of "tokens" are for example:
• space: carriage return, tabulation, etc.
• separator: slash, parentheses, square brackets, etc.,
• punctuation: comma, semicolon, question mark, exclamation mark, etc.
• number only: from 0 to 9,
• alphanumeric: set of alphabetic characters (dependent on the language) and numbers,
• end of document.
The "tokenizer" 3.1 is provided, for example, with a processor suitable for converting a lowercase character into an uppercase character and vice versa, since this depends on the language.
As and when they are output from the "tokenizer", 3.1, the "tokens" are transmitted gradually to the step of searching in the dictionaries, 3.2.
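By way of a minimal, non-authoritative sketch (not part of the patent text), the streaming behaviour of step 3.1 can be pictured as follows; the regular expression, the token categories and the downstream callback are assumptions chosen purely for illustration.

    from collections import deque
    import re

    # Hypothetical token categories mirroring the types listed above.
    TOKEN_RE = re.compile(
        r"(?P<space>\s+)"
        r"|(?P<separator>[/()\[\]])"
        r"|(?P<punctuation>[,;?!.])"
        r"|(?P<number>[0-9]+)"
        r"|(?P<alphanumeric>\w+)",
        re.UNICODE,
    )

    def tokenize_stream(text, window_size, downstream):
        """Convert data into tokens as they are read and push each token,
        as soon as it is formed, to the downstream step (dictionary search).
        Only the last `window_size` tokens are kept (the sliding window)."""
        window = deque(maxlen=window_size)     # sliding window of the last X tokens
        for match in TOKEN_RE.finditer(text):
            token = (match.lastgroup, match.group())
            window.append(token)               # old tokens fall out automatically
            downstream(token, window)          # stream the token onwards immediately
        downstream(("end_of_document", ""), window)

    # Example: print the tokens as they would be streamed to step 3.2.
    tokenize_stream("pomme de terre, 42 kg", window_size=8,
                    downstream=lambda tok, win: print(tok))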
Step 3.2, the search in the dictionaries
The dictionaries 3.2 consist of entries composed notably of the following elements:
• an inflected form,
• a lemma,
• a grammatical label or "tag",
• a set of flexional codes,
• a set of semantic codes,

• a set of syntactic codes.
The dictionary 3.2 is, for example, a letter-based automaton each node of which possesses linguistic attributes and may or may not be final. A node is final when the word is completely present in the dictionary.
The "tokens" are transmitted to the module for searching the dictionaries 3.2 in stream form, that is to say they arrive one after another and are processed in the same manner one after another by the module 3.2. For each "token", the module checks to verify whether it does or does not correspond to a dictionary entry.
In the case where a "token" corresponds to a dictionary entry, then the method processes the following two cases:
• either the corresponding node of the automaton is a final node: in this case the dictionary entry is added to the "token" window, and the position of the "token" and the node of the automaton are added to a list so as to identify a potential compound entity,
• or the node is not a final node: in this case, the position of the "token" is simply added to the list so as to identify a potential compound entity.
In the second case, it is not yet known whether the entry is or is not a compound entity of the dictionary, since it corresponds only to the beginning (for example "pomme" is received, which corresponds partially to the compound entity "pomme de terre"). If the continuation, "de terre", is received later, then the compound entity has been detected; otherwise the potential entity is deleted since it is not present.
An option of the search in the dictionaries makes it possible to specify that the lowercase characters in the dictionary can correspond to an uppercase or lowercase character in the text. On the other hand, an uppercase character in the dictionary can correspond only to an uppercase character in the text. This option makes it possible notably to take into account poorly formatted documents such as, for example, a text fully in uppercase (often encountered in old databases).
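As an illustrative sketch only (the node layout and attribute names are assumptions, not the patent's data structures), a letter-based dictionary automaton with final nodes and the "pomme de terre" compound case described above might look like this:

    class DictNode:
        """One node of the letter-based dictionary automaton."""
        def __init__(self):
            self.children = {}
            self.final = False          # final when a word ends here
            self.attributes = None      # lemma, tag, codes... for final nodes

    class Dictionary:
        def __init__(self):
            self.root = DictNode()

        def add(self, form, attributes):
            node = self.root
            for ch in form:
                node = node.children.setdefault(ch, DictNode())
            node.final = True
            node.attributes = attributes

        def lookup(self, form):
            node = self.root
            for ch in form:
                node = node.children.get(ch)
                if node is None:
                    return None
            return node

    d = Dictionary()
    d.add("pomme", {"lemma": "pomme", "tag": "N"})
    d.add("pomme de terre", {"lemma": "pomme de terre", "tag": "N"})

    # Streaming compound detection: "pomme" reaches a final node (simple entry)
    # but is also the prefix of "pomme de terre", so a potential compound entity
    # is kept open until the continuation " de terre" arrives or fails.
    pending = []
    node = d.lookup("pomme")
    if node is not None:
        if node.final:
            print("simple entry:", node.attributes)
        if node.children:                       # possible beginning of a compound
            pending.append(("pomme", node))     # remember position and automaton node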
According to a variant embodiment of the method and with the aim of optimizing the search times, the method constructs a subset of the dictionary during compilation of the latter. An exemplary implementation of steps is given in Figure 6.
The method recovers all the transitions of the grammars which refer to the dictionary (lemmas, grammatical tags, etc.). All these transitions are compiled and all the dictionary entries which correspond to at least one of these transitions are selected; the selected dictionary entries thus recognize at least one of the transitions.
For example, if a grammar contains only the transitions <V> and <ADV+Time> as referring to the dictionary, only the entries of the dictionary which are verbs or adverbs with Time as semantic code will be extracted.
The process for compiling the transitions into a unique transition comprises for example the following steps:
• the first step consists in extracting, from all the grammars used, the set of grammatical, semantic, syntactic and flexional codes contained in each of the transitions of the grammars, and
• during a second step, a letter-based automaton is constructed which associates a unique integer with each code.
Each set of codes therefore consists of a set of integers that are ordered from the smallest to the largest and that are inserted into an integer-based automaton so as to determine whether or not this code combination is present in the graphs.
If, for example, the grammars contain the codes ADV+Time and V, then the automaton which transforms the codes into integers is that of Figure 4.
This automaton converts:
• the character string "ADV" into an integer value: 1
• the character string "V" into an integer value: 2
• the character string "Time" into an integer value: 3
Once the automaton converting the codes into integers has been constructed, the second automaton representing the transitions is constructed (Figure 5). On this automaton, the transition ADV+Time is represented by node 2 and the transition V by node 3.
Similarly, a text-based automaton is constructed for the set of lemmas used in the grammars. The lemmas being text, they can easily be converted into a text-based automaton.
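A minimal sketch of the idea behind Figures 4 and 5, using plain Python dictionaries and sets in place of the compiled letter-based and integer-based automata (the helper names are assumptions for illustration only):

    # Step 1: associate a unique integer with each code found in the grammar
    # transitions (the role of the letter-based automaton of Figure 4).
    codes = ["ADV", "V", "Time"]
    code_to_int = {code: i + 1 for i, code in enumerate(codes)}   # ADV->1, V->2, Time->3

    # Step 2: represent each transition as its ordered set of integers and
    # store those sequences so their presence can be tested (Figure 5).
    transitions = [["ADV", "Time"], ["V"]]
    compiled = {tuple(sorted(code_to_int[c] for c in t)) for t in transitions}

    def transition_present(code_set):
        """Check whether a combination of codes appears in the grammars."""
        key = tuple(sorted(code_to_int[c] for c in code_set if c in code_to_int))
        return key in compiled

    print(transition_present(["ADV", "Time"]))   # True: ADV+Time is in the grammars
    print(transition_present(["Time"]))          # False: Time alone is not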

In detail, the diagram of Figure 6 illustrates the construction of an optimal sub-dictionary. It comprises for example the following steps: for each entry E of the dictionary D, 10, 12, a check, 13, is made to verify whether E "matches" the automaton T representing the transitions or, 14, the automaton L containing the lemmas. If this is the case, E is added, 15, to the sub-dictionary O. This process is repeated for all the entries of the dictionary D.
By this dictionary pruning, the smallest possible dictionary is constructed for a given application, thereby making it possible to gain in performance on most grammars.
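The pruning loop of Figure 6 can be sketched as follows; the entry fields and the two predicates standing in for the transition automaton T and the lemma automaton L are hypothetical and chosen only for illustration.

    def build_sub_dictionary(dictionary, matches_transitions, matches_lemmas):
        """Sketch of Figure 6: keep only the entries of dictionary D that match
        either the transition automaton T or the lemma automaton L."""
        sub_dictionary = []
        for entry in dictionary:                       # for each entry E of D
            if matches_transitions(entry) or matches_lemmas(entry):
                sub_dictionary.append(entry)           # add E to the sub-dictionary O
        return sub_dictionary

    # Hypothetical predicates: keep verbs, adverbs carrying the Time code, or the
    # lemma "demain" -- only the codes actually used by the grammars survive.
    dictionary = [
        {"form": "rapidement", "lemma": "rapidement", "tag": "ADV", "semantic": set()},
        {"form": "demain", "lemma": "demain", "tag": "ADV", "semantic": {"Time"}},
        {"form": "mange", "lemma": "manger", "tag": "V", "semantic": set()},
    ]
    keep = build_sub_dictionary(
        dictionary,
        matches_transitions=lambda e: e["tag"] == "V"
            or (e["tag"] == "ADV" and "Time" in e["semantic"]),
        matches_lemmas=lambda e: e["lemma"] == "demain",
    )
    print([e["form"] for e in keep])   # ['demain', 'mange']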
The elements arising from the dictionary search step are transmitted one by one and in stream form to the step of applying the grammars, an example of which is detailed hereinafter.
Step 3.3, application of the grammars to the elements arising from the step of searching the dictionaries.
Advantageously, the method implements grammars which have been compiled.
Compilation of the grammars
Before even being able to use the grammars in the method according to the invention, a compilation is performed which can be decomposed into two steps:
- The deletion of the empty transitions,
- The decomposition of the transitions into letter-based automata.
Figure 7 describes an exemplary series of steps making it possible to delete the empty transitions of an automaton, 20.
For all the nodes N of the automaton A, 21, and for all the transitions T from a node N to a node M: if the transition T is an empty transition and M is a final node, then T is deleted, 26, and all the transitions which have M as starting node are duplicated while putting N as new starting node (the destination node is not changed); if the transition T is an empty transition and M is a non-final node, then T is deleted and all the transitions which have M as destination node are duplicated, 27, while putting N as new destination node (the source node is not changed). Finally, all the nodes, 28, that are not accessible from the initial node are deleted.
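For illustration, here is a standard epsilon-closure formulation of empty-transition removal; it is a sketch in the spirit of Figure 7 (duplicating the reachable transitions, propagating finality, then dropping inaccessible nodes) rather than a literal transcription of the per-case procedure above, and the data representation is an assumption.

    def epsilon_closure(automaton, node):
        """Nodes reachable from `node` through empty (None-labelled) transitions."""
        closure, stack = {node}, [node]
        while stack:
            n = stack.pop()
            for label, m in automaton.get(n, []):
                if label is None and m not in closure:
                    closure.add(m)
                    stack.append(m)
        return closure

    def remove_empty_transitions(automaton, initial, finals):
        """Each node inherits the non-empty outgoing transitions (and the
        finality) of every node in its epsilon-closure; empty transitions and
        nodes no longer accessible from the initial node are then dropped."""
        finals = set(finals)
        new_automaton, new_finals = {}, set()
        for n in automaton:
            edges = []
            for c in epsilon_closure(automaton, n):
                if c in finals:
                    new_finals.add(n)
                for label, m in automaton.get(c, []):
                    if label is not None and (label, m) not in edges:
                        edges.append((label, m))
            new_automaton[n] = edges
        reachable, stack = set(), [initial]
        while stack:
            n = stack.pop()
            if n not in reachable:
                reachable.add(n)
                stack.extend(m for _, m in new_automaton.get(n, []))
        return ({n: e for n, e in new_automaton.items() if n in reachable},
                new_finals & reachable)

    # Tiny example: 0 --empty--> 1 --"b"--> 2(final); after removal, 0 --"b"--> 2.
    a = {0: [(None, 1), ("a", 2)], 1: [("b", 2)], 2: []}
    print(remove_empty_transitions(a, initial=0, finals={2}))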
Figures 8 and 9 show diagrammatically a replacement automaton to which the method described in conjunction with Figure 7 is applied, and the result obtained. This modification of the automaton makes it possible to simplify the traversal thereof, since the empty transitions are always "true" and must always be traversed. The second step consists in transforming the set of lemmas and the set of inflected forms contained in the transitions of the automaton into two new letter-based automata, so as to speed up the searches for subsequent nodes.
For example, the transitions from node 0 to 1 in Figure 10 contain a set of lemmas and inflected forms.
A conventional search ought therefore to scan the whole set of these transitions to detect those which may correspond to the entry received.
The transformation of this set of lemmas and inflected forms gives two automata:
• the first automaton contains only the lemmas, that is to say "lemma", "other" and "test", as shown by Figure 11,
• the second automaton contains only the inflected forms, that is to say "form", "inflected" and "test", as shown by the automaton of Figure 12.
In the method according to the invention, a transition from a node to N other nodes is defined notably by a set of three automata:
• the automaton of the lemmas,
• the automaton of the inflected forms,
• the automaton of the grammatical, syntactic, semantic and flexional codes.
Each of these automata returns an integer. If there is a recognition or "match", this integer is in fact an index of an array in which the set of subsequent nodes accessible by this state is stored.
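The indexing scheme just described might be sketched as follows; dictionaries stand in for the three compiled automata, and the integer each lookup returns indexes the array of successor node sets (all names here are assumptions for illustration):

    class Transition:
        """Sketch of a grammar transition: three lookups (lemmas, inflected forms,
        codes) each return an integer index into `successors`, the array holding
        the set of nodes reachable through this transition."""
        def __init__(self):
            self.lemma_index = {}       # stand-in for the lemma automaton
            self.form_index = {}        # stand-in for the inflected-form automaton
            self.code_index = {}        # stand-in for the code automaton
            self.successors = []        # successors[i] = set of reachable nodes

        def add(self, successors, lemma=None, form=None, codes=None):
            index = len(self.successors)
            self.successors.append(set(successors))
            if lemma is not None:
                self.lemma_index[lemma] = index
            if form is not None:
                self.form_index[form] = index
            if codes is not None:
                self.code_index[frozenset(codes)] = index

        def match(self, lemma=None, form=None, codes=None):
            for table, key in ((self.lemma_index, lemma),
                               (self.form_index, form),
                               (self.code_index, frozenset(codes) if codes else None)):
                if key is not None and key in table:
                    return self.successors[table[key]]   # index -> successor nodes
            return set()

    t = Transition()
    t.add({1, 2}, codes={"ADV", "Time"})
    t.add({3}, form="demain")
    print(t.match(codes={"ADV", "Time"}))   # {1, 2}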
Figure 13 represents various steps making it possible to calculate the successor nodes on the basis of an entry of the sliding window of "tokens".
The method described in Figure 13 comprises, for example, the steps described hereinafter. When a token arrives there are two possibilities:
1) the token is an entry of the dictionary: it is then recognized by the dictionary,
2) the token is not recognized by the dictionary.
The aim is to calculate for a current node N, the set of new nodes reachable by an entry E of the sliding window.
If the entry E is an entry of the dictionary, 30, a search, 31, is made for the nodes which can be reached by E in the automaton of the codes (grammatical, syntactic, semantic and flexional) of node N and, 32, in the automaton of the lemmas of node N. All these nodes which can be reached are added to the list L.
If the entry E is not an entry of the dictionary, a search, 33, is made for the nodes that can be reached by E in the automaton of the inflected forms of node N and they are added to the list L.
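A compact sketch of the Figure 13 procedure, with plain dictionaries standing in for the three automata attached to node N (the field names and entry layout are assumptions chosen for the example):

    def successor_nodes(node, entry, is_dictionary_entry):
        """Sketch of Figure 13: nodes reachable from the current node N by an
        entry E of the sliding window of tokens."""
        reachable = []                                      # the list L
        if is_dictionary_entry:
            # E is a dictionary entry: search the automaton of the codes and the
            # automaton of the lemmas of node N.
            reachable += node["codes"].get(frozenset(entry.get("codes", ())), [])
            reachable += node["lemmas"].get(entry.get("lemma"), [])
        else:
            # E is not in the dictionary: only the inflected forms are searched.
            reachable += node["forms"].get(entry.get("form"), [])
        return reachable

    node = {"codes": {frozenset({"ADV", "Time"}): [2]},
            "lemmas": {"demain": [4]},
            "forms": {"demain": [3]}}
    print(successor_nodes(node, {"lemma": "demain", "codes": {"ADV", "Time"}}, True))  # [2, 4]
    print(successor_nodes(node, {"form": "demain"}, False))                            # [3]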
Application of the grammars to the sliding window of tokens
The local grammars are decomposed, for example, into two categories:
- the extraction-only grammars (represented by finite-state automata) which are executed in parallel,
- the rewrite grammars (represented by transducers) which are applied in a sequential manner.
Figure 14 illustrates the use of the rewrite (or transformation) grammars and of the extraction grammars on streams of tokens and dictionary entries.
Extraction grammar
The extraction grammars 42i use the previously defined series of tokens and of entries of the dictionary 40 to detect a "match" in an automaton.
For this purpose, use is made of a list of potential extraction candidates denoted P which contains the following elements:
- the index of the next node to be tested,
- the position of the next token expected,
- the original position of this candidate.
This information makes it possible to detect whether or not a new token "completes" a potential "match", by looking to see whether its position is the one expected and whether it validates one or more transitions.
An exemplary sub-method making it possible to update the potential "matches" and to detect the complete "matches" is described in Figure 15, which itself uses a sub-method for updating the list of potential clients, the steps of which are detailed in Figure 16.
Figure 15 represents an example of steps making it possible to update the potential "matches" and to detect the complete "matches".
Let P be the list of potential extraction candidates and Q an empty list, A a transducer or extraction grammar, and T an entry.
For all the potential extraction candidates N of the list P, a search is made for the nodes that are accessible from node N using the entry T, by the method of searching for the successor nodes described in Figure 13. All the accessible nodes are then added to the list Q using the list updating method described below, 51, 52, 53.
Once the list P has been fully traversed, a search is made for the nodes accessible from the original node of the grammar using the entry T, by the method of searching for the successor nodes of Figure 13. All the accessible nodes are then added, 54, 55, to the list Q using the list updating method described in relation to Figure 16. The elements of the list Q are added to the list P.
The updating method described in Figure 16 comprises notably the following steps:
- let P be the list of potential extraction candidates, N the list of nodes that can be reached,
- for all the nodes I identified as being accessible by the preceding method, 61, 62: if I is a final (or terminal) node of the grammar, 63, then this is an occurrence of the extraction grammar (a "match"); if I possesses transitions to other nodes, 64, I is added to the list P, 65, expecting the next entry.
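The updating of the list of potential candidates (Figures 15 and 16) might be sketched as below; the grammar interface (initial node, final nodes, successor_nodes and has_transitions) and the candidate layout are assumptions, not the patent's exact structures.

    def update_matches(potential, grammar, entry, position):
        """Sketch of Figures 15/16. `potential` is the list P of candidates
        (node, expected_position, origin). Returns the complete matches found
        for the new entry and updates P in place."""
        matches, queue = [], []

        def enqueue(nodes, origin):
            # Figure 16: a reachable final node is a complete "match"; a node
            # with outgoing transitions is kept, expecting the next entry.
            for node in nodes:
                if node in grammar.finals:
                    matches.append((origin, position))
                if grammar.has_transitions(node):
                    queue.append((node, position + 1, origin))

        for node, expected, origin in potential:
            if expected == position:               # the new token is the one expected
                enqueue(grammar.successor_nodes(node, entry), origin)
        # A match may also start on the current entry, from the initial node.
        enqueue(grammar.successor_nodes(grammar.initial, entry), position)

        potential[:] = [c for c in potential if c[1] > position] + queue
        return matches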
The application of the dictionaries makes it possible furthermore to detect compound entities consisting of several tokens. This is the reason why the module for searching in the dictionaries informs the grammars that a position can no longer be reached and that it is henceforth impossible to receive data at this position. The search module dispatches, for example, a message to the following module, which relays it in its turn to the sub-module (when sequential grammars are used).
The set of possible "matches" has therefore been successfully recovered with an approach enabling potential candidates to be rapidly added/removed.
Selecting the longest "match", or applying another criterion such as the priority of one grammar over another, requires only a linear pass over the "matches" identified.
Rewrite grammar
The rewrite grammars operate in the same manner as the extraction grammars, except that each "match" requires a partial or total modification of the tokens involved.
The operating procedure, according to the invention, for this type of grammar consists notably in storing the result directly in the window of tokens. Each rewrite grammar has its own window which will be transmitted to the following grammars in the processing chain, as shown diagrammatically in Figure 14.
There are two types of execution possible for these grammars:
• rewriting while preserving the largest "match", this is typically the case for a grammar for recognizing sentences which adds a token at the end of each sentence,
• identification of all the "matches" to fill a database for example (conversion of text into digital data).
Identification of all the "matches" for transformation into structured data
In this case, each element of the list of potential candidates P is furnished with a list of references to the transformations to be applied to the tokens.
We can then apply a transformation by a letter-based automaton to each variable so as to return to qualitative data and thus transform the text into structured data.
Rewriting while preserving the largest "match"
This implementation is used during the application of an end-of-sentence recognition grammar.
The largest "matcli" may correspond:
- either to the end of a sentence (the end-of-sentence token is thus
added),
- or to a disambiguation (for example "M. Example" does not correspond
to the end of a sentence).
The result of this rewrite is used by other grammars. It is therefore necessary to be capable of making modifications to a stream of tokens. Accordingly, we decide to store the results of the "matches" in the window of tokens, which makes it possible to:
- render this rewrite transparent for the following grammars,
- select the largest "match" easily: it suffices to look at the existing replacements and to preserve the largest.
Application of the grammars in parallel
The use of grammars in parallel is allowed innately by the architecture. Specifically, it suffices to provide the stream of tokens exiting a grammar to several other grammars at the same time so as to obtain parallelism at the extraction level.
Taking the case of the extraction of named entities, we apply a grammar for identifying sentences then we provide this result to the various extraction grammars (for example place, date, organization, etc.). The same parallelism as that described in Figure 14 is thus obtained.
Priorities of the grammars
According to a variant implementation of the invention, the method implements priority rules or a statistical scoring on the results of the extraction grammars.
Thus, if we have N grammars, knowing that the grammar Gi (i belonging to 1..N) takes priority over the grammars G1..G(i-1), the procedure consists in using the N grammars in a parallel or sequential manner to extract the set of possible "matches" and in preserving only the "match" of highest priority when there is an intersection between two "matches".
Depending on the applications, it will be possible to select:
- the "match" of highest priority for each sentence,
- one or more "matches" per sentence, knowing that there is no intersection between them,
- a score per sentence, the score being defined by the set of "matches".
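As an illustration only (the match representation as priority and start/end positions is an assumption), selecting the "match" of highest priority when two "matches" intersect could look like this:

    def select_by_priority(matches):
        """`matches` is a list of (priority, start, end) produced by the grammars
        run in parallel; when two matches overlap, only the one of highest
        priority is preserved."""
        kept = []
        for match in sorted(matches, key=lambda m: -m[0]):   # highest priority first
            _, start, end = match
            overlaps = any(start < e and s < end for _, s, e in kept)
            if not overlaps:
                kept.append(match)
        return sorted(kept, key=lambda m: m[1])

    # A G2 match (priority 2) overlaps a G1 match (priority 1): the G2 match wins.
    print(select_by_priority([(1, 0, 5), (2, 3, 8), (1, 10, 12)]))
    # -> [(2, 3, 8), (1, 10, 12)]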
Figure 17 illustrates an example of managing the priority between two grammars G1, 70, and G2, 71, (G2 taking priority over G1) via a procedure for scoring or for selecting the "match" of higher priority when there is overlap.
Disambiguation
The method can also comprise a "disambiguation" step, the function of which is notably to resolve ambiguities. For this purpose, each extraction grammar is separated into two parts:
- the extraction grammar, 72, as such,
- one or more grammars making it possible to resolve an "ambiguity", 73, and making it possible to define "counter-examples".
It then suffices to extract all the "matches" of these grammars in parallel and to delete the "matches" when there is an intersection between an extraction grammar and an ambiguity-resolving grammar, as shown by the diagram of Figure 18.
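A minimal sketch of the overlap rule of Figure 18, with matches represented as (start, end) intervals (an assumption made for illustration):

    def disambiguate(extraction_matches, ambiguity_matches):
        """An extraction "match" is deleted whenever it intersects a "match"
        of the corresponding ambiguity-resolving grammar."""
        def intersects(a, b):
            return a[0] < b[1] and b[0] < a[1]      # (start, end) intervals overlap
        return [m for m in extraction_matches
                if not any(intersects(m, counter) for counter in ambiguity_matches)]

    # "M. Example" acts as a counter-example covering the would-be end of sentence.
    print(disambiguate([(0, 2), (10, 14)], [(9, 15)]))   # [(0, 2)]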
Figure 19 represents an exemplary use of the method according to the invention in an email messaging server. The content of each arriving or incoming message is analyzed and information is extracted from the received message by the method, 83, by executing the method steps detailed above, so as to determine the most suitable department of a company for dealing with it (for example, marketing, accounts, technical); the message is then transmitted, 84, to the appropriate department to deal with it.
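Purely as an illustrative sketch of the Figure 19 scenario (the rule table, extraction function and delivery hook are hypothetical and not part of the patent):

    def route_message(message, extract_entities, department_rules, deliver):
        """Entities extracted from an incoming message (step 83) decide the
        department, then the message is forwarded to it (step 84)."""
        entities = extract_entities(message)             # step 83: run the method above
        for department, matches in department_rules.items():
            if matches(entities):
                deliver(department, message)             # step 84: forward the message
                return department
        deliver("general", message)                      # fallback when nothing matches
        return "general"

    routed = route_message(
        "Invoice 1234 is overdue, please advise.",
        extract_entities=lambda text: {"invoice"} if "Invoice" in text else set(),
        department_rules={"accounts": lambda ents: "invoice" in ents},
        deliver=lambda dept, msg: print(f"-> {dept}: {msg}"),
    )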

1. A method for extracting information from a data file comprising a first step wherein the data are transmitted to a device (3.1) or "tokenizer", which is a processor, which is adapted to convert them in the course of a first step into elementary units or "tokens", the elementary units being transmitted by the said device (3.1) to a second step of searching the dictionaries (3.2) and a third step (3.3) of searching in grammars, characterized in that, for the conversion step, a sliding window of given size is used, the data are converted into "tokens" as and when they arrive in the tokenizer and the tokens are transmitted as and when they are formed to the step of searching in dictionaries (3.2), then to the step of searching the grammars (3.3) by the said device (3.1) or "tokenizer".
2 - The method as claimed in claim 1, characterized in that it comprises a step of generating a subset of the dictionary comprising the following steps:
• recovering all the transitions of the grammars which refer to the dictionary (lemmas, grammatical tags, etc.),
• compiling all the transitions, and
• selecting the dictionary entries which correspond to at least one of these transitions.
3 - The method as claimed in claim 2, characterized in that the step of compiling the transitions into a unique transition comprises the following steps:
• the first step consists in extracting, from all the grammars used, the set of the grammatical, semantic, syntactic and flexional codes contained in each of the transitions of the grammars, then,
• the second step consists in constructing a letter-based automaton which associates a unique integer with each code.
4 - The method as claimed in claim 1, characterized in that it comprises a step of constructing an optimal sub-dictionary comprising at least the following steps: for each entry E of a dictionary D, a check is carried out to verify whether the entry E recognizes at least one of the transitions or at least one lemma of the grammars which refer to the dictionary.
5 - The method as claimed in claim 1, characterized in that use is made of a local grammar on the sliding window of the tokens, the grammar comprising an extraction grammar and a rewrite grammar.
6 - The method as claimed in claim 1, characterized in that it uses compiled grammars, a grammar being defined by a finite-state automaton, the compilation step comprising:
• the deletion of the empty transitions,
• the decomposition of the transitions into letter-based automata.
7 - The method as claimed in claim 6, characterized in that the step of deleting the empty transitions of an automaton A composed of several nodes comprises the following steps: for all the nodes N (21) of the automaton A, for all the transitions T from node N to a node M,
- if the transition T is an empty transition and M is a final node, then the transition T is deleted and all the transitions which have M as starting node are duplicated (26) while putting N as new starting node,
- if the transition T is an empty transition and M is a non-final node, then T is deleted and all the transitions which have M as destination node are duplicated (27) while putting N as new destination node.

8 - The method as claimed in claim 7, characterized in that a transition from a node to N other nodes is defined by a set of three automata: the automaton of the lemmas, the automaton of the inflected forms, the automaton of the grammatical, syntactic, semantic and flexional codes.
9 - The method as claimed in claim 7, characterized in that the calculation for a current node of the set of new nodes that can be reached by an entry E of the sliding window of "tokens" comprises the following steps:

• if the entry E is an entry of the dictionary (30), a search (31) is made for the nodes which can be reached by E in the automaton of the codes (32) of node N and in the automaton of the lemmas of node N and the nodes that can be reached are added to a list L,
• if the entry E is not an entry of the dictionary, a search (33) is made for the nodes that can be reached by E in the automaton of the inflected forms of node N and they are added to the list L.

10 - The method as claimed in claim 1, characterized in that an extraction grammar (42) uses the series of tokens and of entries of the dictionary (40) to detect the "identifications" in an automaton, and in that use is made of a list of potential extraction candidates P containing the following elements: the index of the next node to be tested, the position of the next "token" expected, the original position of this candidate.

Documents

Application Documents

# Name Date
1 4302-CHENP-2008 FORM-18 15-01-2010.pdf 2010-01-15
2 4302-chenp-2008 pct.pdf 2011-09-04
3 4302-chenp-2008 pct search report.pdf 2011-09-04
4 4302-chenp-2008 claims.pdf 2011-09-04
5 4302-chenp-2008 form-5.pdf 2011-09-04
6 4302-chenp-2008 form-3.pdf 2011-09-04
7 4302-chenp-2008 correspondence-others.pdf 2011-09-04
8 4302-chenp-2008 form-1.pdf 2011-09-04
9 4302-chenp-2008 description (complete).pdf 2011-09-04
10 4302-chenp-2008 drawings.pdf 2011-09-04
11 4302-chenp-2008 abstract.pdf 2011-09-04
12 4302-CHENP-2008-FER.pdf 2017-02-10
13 4302-CHENP-2008-AbandonedLetter.pdf 2017-08-23

Search Strategy

1 search_4302_09-02-2017.pdf