
"Method And System For A Speech Based Multicriteria Self Learning Retrieval Of Files"

Abstract: This document discusses, among other things, a self-learning independent audio file retrieval system with a multi-criteria text-cum-speech query search. The search criteria include song name, artist's name, album name, genre, and predefined voice tags. The search can be streamlined by adjusting the tolerance levels. The results are presented in ranked order and can be refined based on the user's feedback and suggestions, from which deductions are made for a more useful search in the future. The system also conserves power by operating in separate modes, including a speech mode in which the user's speech inputs are taken; only during this mode is power given to the microphone and recording equipment. It also offers suggestions for completing or correcting a query, and it keeps learning from experience so that it can recognize confusing or new words. The file chosen by the user is then played.


Patent Information

Application #
Filing Date
07 November 2006
Publication Number
20/2008
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SAMSUNG INDIA ELECTRONICS PVT LTD.
B-1, SECTOR-81, PHASE II, NOIDA-201305, INDIA

Inventors

1. SIDDHARTH MATHUR
C/O SAMSUNG INDIA ELECTRONICS PVT. LTD., B-1, SECTOR-81, PHASE II, NOIDA-201305, INDIA
2. JATIN KUMAR
C/O SAMSUNG INDIA ELECTRONICS PVT. LTD., B-1, SECTOR-81, PHASE II, NOIDA-201305, INDIA
3. DR. SHAILENDRA SINGH
C/O SAMSUNG INDIA ELECTRONICS PVT. LTD., B-1, SECTOR-81, PHASE II, NOIDA-201305, INDIA

Specification

METHOD AND SYSTEM FOR A SPEECH-BASED MULTICRITERIA SELF-LEARNING RETRIEVAL OF FILES
FIELD OF THE INVENTION
The present invention pertains to a method and system for a speech-based multi-criteria, self-learning retrieval of audio files from a storage media and particularly to portable audio devices by user-provided speech inputs.
BACKGROUND OF THE INVENTION
Portable gadgets are powerful mass media for communication and entertainment. Handheld devices are even more popular as they are more convenient and practical to use. With the increasing demand and huge variety of such appliances available in the market, better and more efficient technology to aid the use of these systems can prove to be a valuable asset.
Handheld devices generally provide a simple user interface and store a comprehensive library of data on a built-in hard drive or flash memory. When data is to be retrieved, the storage has to be traversed to locate the data, or it can be searched using textual keywords. This may be time consuming if the volume of stored data is large. Further, typing in incorrect or incomplete text may produce a wrong search result. Also, some devices require a tag to be associated with each data item, on the basis of which the search is carried out. This leads to greater storage and maintenance costs, and the task of associating tags with the data is also very time consuming and tedious. These search systems use a fixed dictionary of recognizable words, which restricts the queries that can produce results. Hence, searching for data within device storage via text inputs may prove cumbersome.

US patent 6996531 describes a speech-input search system: a method of performing a speech-based search wherein the speech input by the user is recognized using an acoustic model and a language model. After the speech recognition process, a text retrieval process is executed and the search is carried out in a text database. The results are ranked according to their relevance to the input search, and the speech recognition process is refined if the mapping between the input search and the results is not accurate. Here, the search query is formulated through prompts via automatic questioning; the system supports only query-based searches, not multi-criteria searches.
US patent 5481595 describes recognition of a spoken telephone number in a voice message stored in a voice messaging system. It is a voice messaging system, such as a telephone-answering device, which allows automatic identification and tagging of the voice clip portion of a full voice message that contains a spoken telephone number (e.g., a call-back number). The voice clip may be tagged for later playback separate from playback of the full voice message. The full voice message may be deleted, leaving just the voice clip portion containing the spoken telephone number. The spoken telephone number may be processed through an appropriate voice recognition application program to generate textual information regarding the spoken telephone number, which may then be displayed. Call-related information such as Caller ID information might be displayed together with the displayed textual voice clip information. The voice clip portions of the full voice message may be identified either in substantially real time, or off-line during periods of non-use of the telephone-answering device. Here, the search is based on the tag associated with each file. Hence, a tag must be defined for each data item, which would be very time consuming and tedious for the customer.
US patent 6845251 describes an advanced voice recognition phone interface for in-vehicle speech recognition applications. It is a method for controlling a phone system having speech recognition capabilities. It includes entering a phone number into the phone system using a first voice command, dialing the phone number using a second voice command, associating the phone number with a tag using a third voice command, and storing the tag in the phone directory using a fourth voice command. The phone system repeats the voice commands after receiving each command from the user. Here, the search is based only on tags, whose storage carries unnecessary storage and maintenance costs.
US patent 5481595 describes a voice tag in a telephone auto-dialer. It describes a portable telephone that comprises a data memory having a plurality of data storage locations for storing telephone numbers used to initiate telephone calls as an auto-dialer function. An audio memory is also disclosed, comprising a plurality of audio storage locations, each of which may be linked to one of the data storage locations. A controller with a key matrix for inputting commands and a display for displaying status is included, such that commands can be entered that cause the controller to sequentially recall telephone numbers stored in the data memory and play back the voice tags stored in the audio memory that are linked with the data memory. A loudspeaker is disclosed for playing back the audio tags at a loud volume. When a tag is heard that represents the desired call destination, that call is initiated. A recording function is provided that allows utterances spoken into a microphone in the portable telephone to be recorded into the various audio storage locations. Here, the system compares voice patterns rather than words. Hence, the search would be time consuming, which would be more apparent over a larger database.
US patent 283340 describes a spoken user interface for speech-enabled devices. It includes a processor and a set of software instructions that are executable by the processor and stored in nonvolatile memory. A user of the speech-enabled device is prompted to enter a voice tag associated with an entry in a call history of the speech-enabled device. The call history includes lists of incoming and outgoing email messages and incoming and outgoing telephone calls. The user is prompted to enter a voice tag associated with a telephone number or email address in the call history after a user-selected number of telephone calls has been sent from the speech-enabled device to that telephone number, or has been sent from the telephone with that telephone number to the speech-enabled device, or after a user-selected number of email messages has been sent from the speech-enabled device to that email address, or has been sent from that email address to the speech-enabled device. The user may populate a phonebook of the speech-enabled device with email addresses by sending an email message to the speech-enabled device from a computer and including additional email addresses in the To: field and/or CC: field of the email message. Here, there is no searching technique involved.
US patent 200040254795 describes automated database assistance using a telephone for a speech-based or text-based multimedia communication mode. It describes a method for searching a database by inputting speech-based queries, and helps in accessing a remote database. The invention enables a user to enter queries by both text and speech: the user is initially given the option to perform the search based on text or speech recognition. The results are generated in ranked order, with the best match at the top. Here, the system offers neither auto-completion of words in the query nor the ability to adapt itself to a number of users for new, confusing, or out-of-vocabulary words.
None of the above patents offers a system for an intelligent search initiated and controlled by speech on any portable device. The search method described in the instant invention involves a speech query over information that is stored as text. None of these patents discloses any technology with the capability of automatic self-learning over its lifetime, able to recognize new words or inform the user of a probable correction in his query. The systems that do learn base their learning on a pre-defined set of text rather than on all of the text and speech encountered over their lifetime, so they are susceptible to confusing words (with silent letters) and foreign-language words. Thus, there arises a need for a method that can search for data whose search keywords are not in its dictionary. Manually searching for data via the keypad and the necessary menu navigation present in current systems is tedious and time consuming for the user. Hence, a system that makes file retrieval more efficient is required. A search using the above-mentioned patents would be time consuming because they compare voice patterns rather than words. Also, such devices are not energy efficient, since the recording and saving hardware is always consuming energy even while not in use; the option of disabling the feature and cutting off power to the hardware when it is not meant to be in use is absent. So there is a need for a technique that can perform an intelligent, multi-criteria, efficient search initiated and controlled by speech and that also supports self-learning.
Thus, in order to simplify the search, it can be made speech based. Also, to make the search more effective, it can be performed using multiple criteria. In order to make it more user-friendly, features such as informing the user about a probable correction in his query and auto-completion of queries can be incorporated. To make searches more efficient, the capability of automatic self-learning to be able to recognize new words should also be implemented.
OBJECTS AND SUMMARY OF THE INVENTION
It is an object of the instant invention to obviate the above drawbacks and provide a method and system for speech-based multicriteria self-learning retrieval of files.

It is an object of the instant invention to complete or correct the user input if required by sensing when the user is unsure about the search string or is forgetting some part based on cues like a stop or detection of a lengthened vowel.
It is yet another object of the instant invention to perform a search based on the new criteria and/or search string on the existing search results or initiate a fresh search while refining the search results.
It is also an object of the instant invention to keep the valid criteria on which the search is performed not limited to voice tags and flexible to include any field as defined by the user.
It is further an object of the instant invention, to display the results along with their ranking in the order of their ranking.
It is also an object of the instant invention to refine the search by adjusting the tolerance levels of the search and setting the number of results to be retrieved.
It is yet another object of the instant invention to allow the input to be specified by providing both speech and text.
It is further an object of the instant invention to allow the search to be based on multiple criteria provided using speech and/or text.
It is an object of the instant invention to keep the speech inputs speaker independent.
It is further an object of the instant invention to inform the user in case of an unsuccessful search.

It is also an object of the instant invention to perform the search on a plurality of criteria.
It is also an object of the instant invention to match the search queries to a dynamically adjustable vocabulary.
It is further an object of the instant invention to provide suggestions and take feedback at both the input stage as well as the results stage.
To achieve the aforementioned objectives the instant invention provides a method and system for speech-based multicriteria self-learning retrieval of files comprising enabling the feature for searching a file within the memory of the electronic device, reading the settings configured for the search and valid criteria, setting the search modes to perform the search, accepting the input from the user indicating the search criteria, processing user entered criteria, setting the search criteria as per input received for the same, accepting the search string indicating values for search criteria, processing user entered search string, searching for files based on the user input in the specified criteria or voice tags as indicated by the user, displaying the results of the search in the order of their ranking, refining the search according to user input, making deductions to improve further searches and playing the file selected by the user.
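For illustration only (none of this appears in the specification), the core of the method summarized above — searching under successive criteria and retaining only the results common across them — can be sketched in Python; the library layout and function name are assumptions:

```python
def multi_criteria_search(library, queries):
    """Intersect results across criteria, as in the method's refinement step.

    library: maps filename -> {criterion: value}, e.g. {"a.mp3": {"artist": "Abba"}}
    queries: list of (criterion, search_string) pairs.
    Purely illustrative; not the specification's stated data model.
    """
    results = None
    for criterion, text in queries:
        # Substring match on the value stored under this criterion.
        hits = [f for f, meta in library.items()
                if text.lower() in meta.get(criterion, "").lower()]
        # First criterion seeds the list; later criteria narrow it.
        results = hits if results is None else [f for f in results if f in hits]
    return results or []
```

Adding a second (criterion, string) pair narrows the list rather than starting a fresh search, mirroring the refinement behaviour described later in the specification.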
Accordingly, the instant invention further provides a system for a speech-based multi-criteria self-learning search for files comprising a user interface module to interact with the user to accept input and provide output, a speech input engine for getting the speech inputs from the user, a text input engine for getting the text inputs from the user, a speech recognition engine for recording and processing the speech inputs from the user, a text and voice command interpreter module to interpret and manage the interactions with the user and instruct the other modules based on the user inputs and internal state of the system, and a search engine module to search for the desired file based on the information extracted by the speech recognition engine.
BRIEF DESCRIPTION OF THE DRAWINGS
The instant invention will now be described using the following diagrams:
• FIG 1 is a block diagram illustrating the system and working of the instant invention.
• FIG 2a, 2b are timing/flowchart diagrams depicting the basic operation of the instant invention.
• FIG 3 is a hierarchical chart showing the features offered by the instant invention.
DETAILED DESCRIPTION OF THE INVENTION
The instant invention provides an intelligent, speech based search technique that can be implemented on any electronic handheld or portable device. The invention has been explained here with the help of an audio device. However, it is not restricted to this implementation. It can be used with any device after modifying the configuration settings.
For the instant invention, a portable auxiliary audio device is provided with a conventional headset or earphones along with a microphone, which are connected to the audio device electronically. The headset or earphones are used to listen to the songs stored in the device, and the microphone is used to provide speech inputs to the device. The device also includes recording equipment with which the user may record the user-defined voice tags associated with a particular data item; a voice tag may also be a part of that file. These voice tags can be played back using the speaker. The audio device has a storage device to store the digital audio data. In an embodiment, the device has embedded chips that comprise instructions for the search.
Figure 1 shows the system for implementing the instant invention. The system includes a user interface module (101), a speech recognition module (102), a command interpreter (103) and a search engine (104) along with a display, speech input means and a playing means.
The user interface module (101) accepts a search string as a speech input from the user via the speech input means. In one embodiment, the speech input means include a microphone built into the audio device. The module also enables the user to navigate and select the desired file from the ranked list of search results displayed on the screen of the audio device.
The speech recognition module (102) takes the speech inputs from the user interface module (101) and recognizes the commands or criteria input by the user. Commands include the activation command, which is a lengthened vowel utterance, the initiation command, the inactivation command, commands for setting the tolerance level, and commands for setting the number of results made available to the user. The voice samples of these commands are made available to the speech recognition module (102). The module is intelligent enough to identify a command even if it was unable to do so on the first utterance of that word by the user; if the same utterance for a given criterion or command is repeated, the chances of its recognition improve, even in an outdoor environment. The module also processes the queries, inserting special flags into the query at places where the user stuttered or paused for some time.
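The flagging of pauses mentioned above could be sketched as follows. This is a hypothetical illustration only: the time-stamped token format, the flag token, and the 0.5-second threshold are assumptions, not taken from the specification.

```python
# Mark places where the speaker paused, so a downstream search engine can
# treat them as hints for query completion or correction.

PAUSE_FLAG = "<PAUSE>"  # assumed sentinel token, not from the specification

def flag_pauses(timed_words, gap_threshold=0.5):
    """timed_words: list of (word, start_time, end_time) tuples in seconds.

    Returns the word sequence with PAUSE_FLAG inserted wherever the gap
    between consecutive words exceeds gap_threshold."""
    out = []
    prev_end = None
    for word, start, end in timed_words:
        if prev_end is not None and start - prev_end > gap_threshold:
            out.append(PAUSE_FLAG)  # user hesitated here
        out.append(word)
        prev_end = end
    return out
```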
The command interpreter module (103) forwards the query to be searched, accepted from the speech recognition module (102), to the search engine module (104). It also presents the search results to the user through the display means by invoking the user interface module (101), manages the interface while deciding whether the search is to be made on voice tags or some other criteria, and informs the search engine module (104) about the tolerance levels.
The search engine module (104) searches for the file based on the given criteria with the tolerance levels, and informs the command interpreter module (103) if any files are found. If no file is found, it also tries to complete the query so that it matches a stored keyword. Based on whether a file is found or not, it makes a deduction about the correctness or completeness of the query from the speech inputs processed by the speech recognition module (102).
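One plausible reading of the tolerance levels — an assumption for illustration, not the specification's stated mechanism — is a maximum edit distance between the query and a stored keyword. A minimal sketch:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def tolerant_search(keywords, query, tolerance):
    """Return stored keywords within `tolerance` edits of the query,
    ranked by closeness (smaller distance first)."""
    scored = [(edit_distance(query.lower(), k.lower()), k) for k in keywords]
    return [k for d, k in sorted(scored) if d <= tolerance]
```

Raising the tolerance admits more approximate matches (higher recall, lower precision), which is consistent with the precision/recall trade-off the specification attributes to the tolerance level.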
The search results are then displayed on the audio device display. The user may then select the desired file from the search results and play it through the speakers.
Figure 2 illustrates a flow chart for the search process. In step 201, the 'Song Search' feature is switched on to accept user input. The user speaks into the speech input means and inputs a search string. The user interface module (101) receives this input, and the speech recognition module (102) accepts it from the user interface module (101). Initially, the speech recognition module (102) processes these speech inputs for a match with the activation command. Once the activation command is detected, the speech recognition module (102) informs the command interpreter module (103).
Next, in step 202, the settings are read and the search modes are set. The speech recognition module (102) identifies commands to set the tolerance levels and other settings. It can be specified here if certain keywords from criteria of a particular file are to be masked from being included in the search.
Now, in step 203, it is checked if the settings are set for initiating a new search.

In step 204, the user interface module receives the user input and the speech recognition module (102) processes it.
In step 205, the command interpreter module (103) instructs the speech recognition module (102) to look out for the search initiation command.
In step 206, on receiving the search initiation command, the speech recognition module (102) resets the variables to prepare for a new search. The search results list based on a criterion and the final search result list are set to empty.
In step 207, the user interface module (101) asks the user to enter search criteria on which he wants to search the file through the speech input means.
In step 208, the speech recognition module (102) checks if the entered criteria are valid.
If the entered criteria are not valid, then in step 209 the user interface module (101) reports the search status to the user, and he is asked to re-select the criteria and enter a new query for them as speech input through the speech input means.
If the criteria are valid, then in step 210 the speech recognition module (102) receives the query from the user. It processes subsequent input as the query until the query de-marker input is received, inserting special flags into the query at places where the user stuttered or paused for some time. These serve as hints to the search engine module (104) for query completion, correction and adjustment. The command interpreter module (103) forwards this query to the search engine module (104), which searches for the file based on the criterion. In case the speech inputs are voice tags, the search engine module (104) is informed and searches for the matching tag among the predefined tags that have been associated with each file. The command interpreter module (103) manages this interface.

The search engine module (104) conducts the search with the tolerance levels as communicated by the command interpreter module (103), and informs the command interpreter module (103) if any files are found. It also makes a deduction, based on whether a file has been found or not, about the correctness and completeness of the query from the speech inputs processed by the speech recognition module (102). The criterion is marked as used, and the search results list for the criterion is populated with the search results.
In step 211, the final search result list is produced. If there are results present from a previous search, then only the results in the new list that are common with the existing list are retained. If there has been no previous search, the final list is simply filled with the current search results.
In step 212, the search engine module (104) checks if no results have been found.
If no results have been found, then in step 214 the search engine module (104) tries to complete the query so that it may match a stored keyword. If it is unable to do so, the user interface module (101) informs the user of this status and conveys the textual interpretation of the query to the user. If the user feels that the textual interpretation is wrong, perhaps because of incorrect speech inputs or because of a new or confusing word that could not be mapped to its keyword, he/she can ask the system to adjust the query so that it matches a closely resembling stored keyword. The speech recognition module (102) then tries to construct a proper query by completing or possibly correcting the user's input. In case the query does not yield any results for the given criterion but matches keywords belonging to a different criterion, the user is given feedback about a possible mismatch between the search criterion and the search string, along with suggestions that the string matches some other criteria. This is followed by step 207, wherein the user is asked to choose search criteria from the suggested list.
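Step 214's query completion could, under the assumption of simple prefix matching (the specification does not fix a completion strategy), be sketched as:

```python
def complete_query(query, stored_keywords):
    """Return stored keywords that the (possibly incomplete) query is a
    prefix of, so the user can be offered completions.

    Illustrative only; a real engine might also use the pause flags and
    tolerance levels described elsewhere in the specification."""
    q = query.lower()
    return [k for k in stored_keywords if k.lower().startswith(q)]
```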

If some results are found, then in step 213 the search engine module (104) determines if a single result has been found.
If only a single result has been found, then in step 215 it is played without the user's selection. The user's feedback is taken regarding whether the file being played is according to his desire. This is followed by step 202, wherein the settings are again read and the search modes are reset.
The search goes on till the search criteria are exhausted and/or the user does not want to give further inputs. If multiple results have been found, then in step 216 it is determined if all the search criteria have been exhausted.
If all the search criteria have been exhausted then in step 217, the user interface module (101) shows the search results along with their ranking to the user in the order of their ranking. The user interface module (101) enables the user to navigate and select the desired file from the ranked list. This feedback is utilized to generate better rankings specific to the user if the user deems this necessary.
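The feedback-driven ranking mentioned above might be modelled as follows; the per-file selection-count scheme is an assumption made for illustration, not the specification's stated ranking method.

```python
def rerank(results, selection_counts):
    """Order results by how often the user has previously selected them
    (descending). Python's sort is stable, so ties keep original order."""
    return sorted(results, key=lambda f: -selection_counts.get(f, 0))

def record_selection(selection_counts, chosen):
    """Update the per-user feedback after a file is chosen and played."""
    selection_counts[chosen] = selection_counts.get(chosen, 0) + 1
```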
If the search criteria haven't been exhausted then step 207 follows, wherein the user is asked to enter other search criteria.
In step 219, it is determined if the user has selected a file from the search results. If he has selected a file, then in step 220 it is played.
If he has not selected a file, then in step 218 the speech recognition module (102) determines if the user wants to refine the search. If so, the search is refined according to the user inputs. The user is given the option to adjust the tolerance level and the number of results made available for each search; this is done by using the appropriate commands after the activation command has been entered. The tolerance level affects the precision and recall rates of the search. If the pre-defined search criteria are exhausted, the user can provide generic input for further refinement of the search.
If the refinement does not increase the search results or the user does not want to give further inputs, he can end the search. At the end of the search, the user may inactivate the module by providing the inactivation command. After this, every module apart from the speech recognition module (102) is put to sleep. The speech recognition module (102) again looks for the activation command in the speech inputs it receives, following which it informs the command interpreter module (103).
Once the search has ended, the system may ask the user whether he/she wants the system to make certain deductions from the search. If the user deems fit, the system makes deductions regarding its speech-to-text mapping mechanisms to improve searches for words similar to those in the current search. The learning gathered by the system can also be stored in user-specific profiles to reflect person-specific parameters such as dialects and pronunciations.
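A minimal sketch of such user-specific learning, modelled here (as an assumption about one possible realization) as a per-user alias table mapping misrecognized utterances to the keywords they were found to mean:

```python
def learn_word(profile, heard, matched_keyword):
    """Remember that the utterance transcribed as `heard` actually meant
    `matched_keyword`, so future searches can map it directly."""
    profile.setdefault("aliases", {})[heard.lower()] = matched_keyword

def resolve(profile, heard):
    """Map an utterance through the learned aliases, else return it as-is."""
    return profile.get("aliases", {}).get(heard.lower(), heard)
```

Keeping one such profile per user is one way to capture the person-specific dialects and pronunciations the description mentions.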
The user has also been given the option of turning off the power to his recording equipment when he decides not to use the speech feature and turn it on again as per his desire. If this feature is manually turned off in the settings, the whole software terminates and power to the recording equipment is turned off.
The present invention is not intended to be restricted to any particular form or arrangement, or any specific embodiment, or any specific use, disclosed herein, since the same may be modified in various particulars or relations without departing from the spirit or scope of the claimed invention hereinabove shown and described, of which the apparatus or method shown is intended only for illustration and disclosure of an operative embodiment and not to show all of the various forms or modifications in which this invention might be embodied or operated.

We claim:
1. A method for a speech-based multi-criteria self-learning search for files
comprising:
- enabling the feature for searching a file within the memory of the
electronic device
- reading the settings configured for the search and valid criteria
- setting the search modes to perform the search
- accepting the input from the user indicating the search criteria
- processing user entered criteria
- setting the search criteria as per input received for the same
- accepting the search string indicating values for search criteria
- processing user entered search string
- searching for files based on the user input in the specified criteria or voice tags as indicated by the user
- displaying the results of the search in the order of their ranking
- refining the search according to user input
- making deductions to improve further searches
- playing the file selected by the user
2. A method as claimed in claim 1, wherein the step of processing the
speech input further comprises:
- recognizing the speech input using a language model
- identifying the criteria and the search string from the speech input
- validating the speech input
3. A method as claimed in claim 1, wherein the step of refining the search
further comprises:

- marking search criteria on the basis of which the search was
carried out as used
- guiding the user to refine the search by providing suggestions to
the user regarding alternate search criteria and search strings
depending on the search results
- accepting user input to refine the search
- performing a search based on the new search string and/or search
criteria

4. A method as claimed in claim 1, wherein the step of making deductions to
improve the search technique includes confirming the text input relating to
the speech input
5. A method as claimed in claim 1, wherein the step of making deductions to
improve the search technique includes adding new words to the
dynamically adjustable vocabulary
6. A method as claimed in claim 1, wherein the step of making deductions to
improve the search technique includes altering the dynamically adjustable
vocabulary to differentiate between similar sounding and confusing words
7. A method as claimed in claim 1, wherein the step of making deductions to
improve the search technique includes accepting feedback regarding
search results
8. A method as claimed in claim 1, wherein the user input is completed or
corrected if required by sensing when the user is unsure about the search
string or is forgetting some part based on cues like a stop or detection of a
lengthened vowel

9. A method as claimed in claim 1, wherein while refining the results, the
search based on the new criteria and/or search string may be performed
on the existing search results or a fresh search may be initiated
10. A method as claimed in claim 1, wherein the valid criteria on which the
search is performed is not limited to voice tags and can be any field as
defined by the user
11. The method as claimed in claim 1, wherein the results are displayed along
with their ranking in the order of their ranking
12. The method as claimed in claim 1, wherein the search can be refined by adjusting the tolerance levels of the search and setting the number of results to be retrieved
13. A method as claimed in claim 12, wherein the tolerance levels control the precision and effectiveness of the search and have an effect on the recall and precision of the system
14.A method as claimed in claim 1, wherein the input can be specified providing both speech and text.
15.A method as claimed in claim 1, wherein the search can be based on multiple criteria provided using speech and/or text
16. A method as claimed in claim 1, wherein the speech inputs are speaker
independent
17. A method as claimed in claim 1, wherein it is not necessary to
associate unique user-defined voice tags with each file

18. A method as claimed in claim 1, wherein the user is informed in case of an
unsuccessful search
19. A method as claimed in claim 1, wherein the search can be performed on a plurality of criteria.
20. A method as claimed in claim 1, wherein the search queries are matched against a dynamically adjustable vocabulary.
21. A method as claimed in claim 1, wherein the criteria and query can be intermingled within the speech input without the user having to specify which part of the input is a criterion and which is the search string corresponding to it.
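Claim 21 lets criterion names and search strings be intermingled in one utterance. A hypothetical keyword-driven parse (the criterion set and the default field are illustrative assumptions, not from the specification) could look like:

```python
# Criterion names the parser recognizes; any other token is treated
# as part of the search string for the currently open criterion.
CRITERIA = {"song", "artist", "album", "genre"}


def parse_query(utterance: str) -> dict[str, str]:
    """Split a free-form query into {criterion: search string} pairs.

    A token matching a known criterion name opens a new field; all
    following tokens belong to it until the next criterion name.
    Leading tokens before any criterion default to "song".
    """
    fields: dict[str, list[str]] = {}
    current = "song"
    for token in utterance.lower().split():
        if token in CRITERIA:
            current = token
            fields.setdefault(current, [])
        else:
            fields.setdefault(current, []).append(token)
    # Drop criteria that were named but given no search string.
    return {k: " ".join(v) for k, v in fields.items() if v}
```

So "artist john lennon song imagine" yields both an artist field and a song field without the user marking which words are which.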
22. A method as claimed in claim 1, wherein suggestions are provided and feedback is taken at both the input stage and the results stage.
23. A system for a speech-based multi-criteria self-learning search for files comprising:
- a user interface module to interact with the user to accept input and provide output;
- a speech input engine for getting the speech inputs from the user;
- a text input engine for getting the text inputs from the user;
- a speech recognition engine for recording and processing the speech inputs from the user;
- a text and voice command interpreter module to interpret and manage the interactions with the user and instruct the other modules based on the user inputs and the internal state of the system; and
- a search engine module to search for the desired file based on the information extracted by the speech recognition engine.
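The modules of claim 23 can be sketched as cooperating classes. This is a schematic outline only; the class names, the substring-match search, and the stubbed recognizer are assumptions for illustration, not the claimed implementation:

```python
class SpeechRecognitionEngine:
    """Stands in for the recording/ASR component; returns text."""

    def transcribe(self, audio: bytes) -> str:
        raise NotImplementedError  # a real ASR backend goes here


class SearchEngineModule:
    """Searches files using the text extracted from the user's input."""

    def __init__(self, index: dict[str, str]):
        self.index = index  # filename -> searchable metadata string

    def search(self, query: str) -> list[str]:
        q = query.lower()
        return [f for f, meta in self.index.items() if q in meta.lower()]


class CommandInterpreter:
    """Routes text or speech input to the search engine."""

    def __init__(self, asr: SpeechRecognitionEngine, engine: SearchEngineModule):
        self.asr, self.engine = asr, engine

    def handle_text(self, text: str) -> list[str]:
        return self.engine.search(text)

    def handle_speech(self, audio: bytes) -> list[str]:
        return self.engine.search(self.asr.transcribe(audio))
```

The interpreter is the only module the user interface needs to talk to, which matches the claim's description of it instructing the other modules based on the user input.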

24. A system as claimed in claim 23, wherein the user interface module removes any humming, whistling, and live or background music from the user input.
25. A system as claimed in claim 23, which further includes means for saving the input audio data.
26. A system as claimed in claim 23, which further includes means for recording the audio data to be used for processing.
27. A system as claimed in claim 23, which further includes a dynamically adjustable vocabulary for processing the input search criteria and search strings and for enabling searching from amongst a variable set of words.
28. A system as claimed in claim 23, which further includes means to associate any unique or non-unique voice or speech tags (in the form of pre-recorded speech samples) with each music file so as to be able to identify it when searching on the basis of speech inputs.
29. A system as claimed in claim 28, wherein the speech samples are taken from the user.
30. A system as claimed in claim 23, wherein an option is provided to turn off the power to the recording equipment.
31. A system as claimed in claim 23, which is capable of recognizing queries from multiple users without the need for training by recording the user's speech as he recites a pre-defined document to the audio device before first use.

32. A system as claimed in claim 23, which is capable of automatically learning from user selections and refinements of search results, so as to become better at recognizing confusing words from their pronunciations and to produce the search results desired by the user.
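Claims 31 and 32 describe learning from user behaviour without up-front training. A count-based re-ranking sketch (the class and method names are hypothetical; a real system would also adapt the recognizer's vocabulary) might be:

```python
from collections import defaultdict


class SelectionLearner:
    """Learns from user picks: files chosen for a query rank higher next time.

    A simple count-based illustration of the self-learning behaviour;
    each selection strengthens the (query, file) association.
    """

    def __init__(self):
        self.picks = defaultdict(int)  # (query, file) -> times chosen

    def record_selection(self, query: str, chosen: str) -> None:
        self.picks[(query, chosen)] += 1

    def rank(self, query: str, results: list[str]) -> list[str]:
        # Stable sort: unpicked files keep their original search order.
        return sorted(results, key=lambda f: -self.picks[(query, f)])
```

After the user picks a file for a query once, the same search ranks it first on the next attempt, which is the feedback loop the claims describe.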

Documents

Application Documents

# Name Date
1 2417-del-2006-gpa.pdf 2011-08-21
2 2417-DEL-2006_EXAMREPORT.pdf 2016-06-30
3 Amended Form 1.pdf 2014-04-28
4 2417-del-2006-form-5.pdf 2011-08-21
5 Form 13_Address for service.pdf 2014-04-28
6 2417-DEL-2006-Form-3.pdf 2011-08-21
7 2417-DEL-2006-Form-1.pdf 2011-08-21
8 Relevant Documents.pdf 2014-04-28
9 2417-del-2006-correspondence-others.pdf 2011-08-21
10 2417-del-2006- abstract.pdf 2011-08-21
11 2417-del-2006- form-2.pdf 2011-08-21
12 2417-del-2006- claims.pdf 2011-08-21
13 2417-del-2006- drawings.pdf 2011-08-21
14 2417-del-2006- description (complete).pdf 2011-08-21