Abstract: The present invention relates to a method and system for predicting a database query elapsed response time, while remaining transparent to the underlying database system and hardware, at the stage of application development. The system and method relate to estimating the query elapsed response time for a projected large-size production environment at the stage of application development. The system processes one or more database queries in one or more stages. The system further identifies factors which affect the database query elapsed response time and are influenced by an increase in database size. Moreover, the system measures the impact of the identified factors at each processing stage to develop one or more models as a function of database parameters. The system then predicts the query elapsed response time using the developed models. Figure 1
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
A SYSTEM AND METHOD FOR PREDICTING QUERY ELAPSED RESPONSE TIME TRANSPARENT TO DATABASE SYSTEM
Applicant
TATA Consultancy Services Limited, a company incorporated in India under The Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present invention in general relates to a method and system for predicting query elapsed response time. More particularly, the invention relates to a method and system for predicting query elapsed response time that is transparent to the database system.
BACKGROUND OF THE INVENTION
The advent of inexpensive computation and the growth in computing power have led to a rise in large-size database applications. Typical database applications used in the banking and finance sector contain data volumes running into the trillions. Read and write queries are used to retrieve appropriate information from, and update information in, these huge database storage systems. Once launched in a production environment, such data-intensive applications cannot be changed without downtime. However, the data size keeps increasing as time passes. This may impact the performance of the application, and thereby the application may no longer hold the promised service level agreement.
In database application development, the testing of queries is done on an application database which is a fraction of the size of the real production database. It is difficult to create a large production database at the time of application development, and it is also difficult to arrange all the resources required for testing in a real production environment. Even when the resources are available, it takes a long time to create and load a large database.
There have been a number of solutions proposed in the prior art to calculate the cost of query execution. One solution discloses an automatic analytical cost model which uses cost formulas applicable to different generic databases. However, this method considers cost models for query optimization only. It does not consider optimization of the database or components such as the storage subsystem, and it remains silent about the impact of large-size production data on query retrieval or query response time.
Another technique discloses the calibration of logical cost for queries in heterogeneous database systems. That invention particularly addresses cost calculation for database systems working in a network, and covers only systems having cost optimization functionalities, so it is largely dependent on the type of database. It also does not address the impact on query response time of a change in database size.
One of the inventions discloses time estimation for processing a database query using a selected index. This technique is based on stored execution history, and thus requires the execution history to be maintained.
Some of the cost optimization techniques are based on decorrelation of queries, which involves transformation of the query.
Thus, most of the techniques disclosed in the prior art predict the query cost based on machine learning techniques, and their query optimization results are limited to specific types of database. Some of the work is specific to the internals of the DB server, such as the type of DB server and the hardware specifications. In these techniques the query is executed on a database in the application environment; they remain silent about execution in a production database environment. There is no technique available which considers the changing size of the database to predict the response time of a query during application development.
There is thus a need for a method which considers the various factors of the system that affect the query response time during its execution; a method which is independent of the hardware and transparent to the database system. There is a need for a system which considers the increasing size of the database and its impact on query performance. The system and method should be capable of optimizing the query in order to hold the service level agreements with the passage of time. The tool should be able to predict the query response time for a projected large-size production database at the time of application development.
OBJECTS OF THE INVENTION
It is the primary object of the invention to provide a system and method for predicting query elapsed response time transparent to database system.
It is another object of the invention to provide a system and method for predicting query elapsed response time for a projected production environment at the stage of application development.
It is yet another object of the invention to identify one or more factors affecting the database query response time and influenced by an increase in database size at each of the processing stages.
It is yet another object of the invention to develop one or more models as a function of one or more database parameters for predicting query elapsed response time.
SUMMARY OF THE INVENTION
The present invention provides a method for predicting a database query elapsed response time, while remaining transparent to the underlying database system and hardware, for a projected production environment at the stage of application development. The method comprises processing one or more database queries, such that the processing is performed in one or more stages, and identifying one or more factors affecting the database query response time and influenced by an increase in database size at each of the processing stages. The method further comprises measuring the impact of the identified factors at each of the processing stages to develop one or more models as a function of one or more database parameters, and predicting the database query elapsed response time by using the developed models.
The present invention provides a system for predicting a database query elapsed response time, while remaining transparent to the underlying database system and hardware, for a projected production environment at the stage of application development. The system comprises a relational database and a database query in the application development environment, and one or more processors configured to process the database query in one or more stages. The system further comprises an identification module configured to identify one or more factors affecting the database query response time and influenced by an increase in database size at each of the processing stages, and a computation means configured to extrapolate the impact of the identified factors at each of the processing stages to develop one or more models as a function of one or more database parameters. The system further comprises a prediction tool configured to predict the query elapsed response time by using the developed models.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 illustrates the system architecture in accordance with an embodiment of the invention.
Figure 2 illustrates the stages involved in query processing in accordance with an embodiment of the system.
Figure 3 illustrates the framework for query elapsed response time prediction in accordance with an embodiment of the system.
Figure 4, in an exemplary embodiment of the invention, illustrates the model for the Fetching unit.
Figure 5, in an exemplary embodiment of the invention, illustrates the model for the Execution unit.
DETAILED DESCRIPTION
Some embodiments of this invention, illustrating its features, will now be discussed:
The words "comprising", "having", "containing", and "including", and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item
or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that, as used herein and in the appended claims, the singular forms "a", "an", and "the" include plural references unless the context clearly dictates otherwise. Although any systems, methods, apparatuses, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and parts are now described. In the following description, for the purpose of explanation and understanding, reference has been made to numerous embodiments, with no intent to limit the scope of the invention.
One or more components of the invention are described as modules for the understanding of the specification. For example, a module may include a self-contained component in a hardware circuit comprising logic gates, semiconductor devices, integrated circuits or any other discrete components. The module may also be a part of any software programme executed by any hardware entity, for example a processor. The implementation of a module as a software programme may include a set of logical instructions to be executed by the processor or any other hardware entity. Further, a module may be incorporated with the set of instructions or a programme by means of an interface.
The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.
The present invention relates to a method and system for predicting a database query elapsed response time, while remaining transparent to the underlying database system and hardware, at the stage of application development. The system and method relate to estimating the query elapsed response time for a projected large-size production environment at the stage of application development. The system processes one or more database queries in one or more stages. The system further identifies factors which affect the database query elapsed response time and are influenced by an increase in database size. Moreover, the system measures the impact of the identified factors at each processing stage to develop one or more models as a function of database parameters. The system then predicts the query elapsed response time using the developed models.
In accordance with an embodiment, referring to figure 1, the system (100) comprises a relational database (102) and a database query in the application development environment, and one or more processors (104) configured to process the database query in one or more stages. The system further comprises an identification module (106) configured to identify one or more factors affecting the database query elapsed response time and influenced by an increase in database size at each of the processing stages, a computation means (108) configured to extrapolate the identified factors, and a prediction tool (110) configured to predict the query elapsed response time.
In accordance with an embodiment, still referring to figure 1, the database used is a relational database; the query may include, but is not limited to, a SQL (Structured Query Language) query. For the purpose of understanding, the query is hereinafter referred to as a SQL query.
The system (100) further comprises a processor (104) which is configured to process the database query. The query processing is done in one or more stages. In accordance with an embodiment, referring to figure 2, a SQL query initiated through an application goes through three main stages during its processing: parsing, execution and fetching.
A. Parsing
In this phase a SQL query is parsed to check its syntax and is run through the query optimizer to decide its path of execution. The time taken to parse a query depends on how it is structured; e.g. the use of bind variables in Oracle reduces the number of hard parses and hence the elapsed time. The elapsed time at this stage is not dependent on the size of the database; however, the path chosen by the query optimizer may depend on the size of the tables involved in the SQL query. The structure of a query plays a very important role here in deciding the path of execution and hence the type and number of operations, which in turn may affect the query response time. For example, the use of hints may direct the query optimizer to use indexes or a hash join, which could speed up the query. Also, the absence of an index may force the DB to do a full scan of the database to answer the query, which may lead to an increase in elapsed time with an increase in the size of the database.
B. Execution
Once a query is parsed it is ready for execution. During this phase, the query is executed, which may involve a sequence of computations and fetching operations from the storage subsystem, overlapped with each other. However, the contributions to query response time from execution and fetching are disjoint. The execution phase primarily contributes the computations in the query. Therefore, they are modeled as two separate phases in query processing, referring to Fig. 3.
In the execution phase, the DB concurrency control unit and the hardware platform have a critical role to play in deciding the query response time. The path chosen by the query optimizer, as discussed in the above phase, is executed in this phase. The type and number of operations to be executed contribute to the query elapsed time. The cost of these operations is dependent on the size of the database; for example, the size of an index depends on the size of the database, and the hash join operation depends on the size of the tables involved in the join.
Initially, the DB concurrency control mechanism decides whether the query can be scheduled for execution based on its conflict with other queries executing in parallel. This leads to a waiting time for the query before it is scheduled for execution, which gets added to the elapsed time. Probabilistically, the larger the database, the lower the chance of queries conflicting. In other words, the waiting time due to conflicts may reduce with an increase in the size of the database. However, if a query happens to access large data sets, then its high execution time may increase the waiting time for the conflicting queries.
Once the query is scheduled for execution, it is up to the operating system to execute it either in parallel with other queries, each on a different processor (inter-query parallelism), or as a single query on multiple processors (intra-query parallelism), or by pipelining, or serially in the case of a single processor. Each of these modes of execution will have an effect on query response time; however, the choice of the mode of execution is independent of the size of the database and will not change with the growth of the database.
C. Fetching
The execution of a query leads to retrieval, and possibly modification (in the case of DML queries), of data from the database. A record may be returned from the database server cache, or it may need to be fetched from the storage subsystem where the database is stored. The query elapsed time is small in the former case as compared to the latter. This choice depends on the caching policy of the DB server as well as the size of the system cache, and is independent of the size of the database. However, the probability of finding a requested record in the cache is higher for a small database.
In the other case, when the record is not in the cache, the record is fetched from the disk subsystem. A storage subsystem could be a single hard disk or, by way of specific example, a SAN or JBOD. The time elapsed in fetching a record depends on the performance of the disk subsystem, which contributes to the total query response time. Accessing (reading or modifying) a record from the disk subsystem is independent of the size of the database, i.e. it does not change with an increase in the database size.
The system (100) further comprises an identification module (106) which is configured to identify one or more factors affecting the database query elapsed response time and influenced by an increase in database size at each of the processing stages. The factors affecting the database query response time include, but are not limited to, the design of the database query, the database schema, the database server, the workload on the server, the disk subsystem and the hardware platform.
A two-tier architecture of the system is considered, where the application is hosted on the database server, to avoid the time delays which may be introduced into the query result due to query processing at a web server.
The response time for a query on a database system depends on the following:
• Design of the query: use of hints, use of joins instead of sub-queries, and many other techniques which may improve the query response time.
• Database schema: use of indexes and de-normalization may affect the read query response time.
• Database server: concurrency control techniques and query optimizer choices may affect the query response time. Several system-level settings, such as the size of the database cache, library cache and shared pool, can directly impact the query performance.
• Workload on the server: number, size, as well as type of transactions.
• Disk subsystem: data access time from the disk subsystem can affect the query response time.
• Hardware platform: the capacity of the CPU, the number of cores per processor, and the size of memory govern the resources available to the DB server and hence may impact the query performance.
The effect of the hardware platform and disk subsystem on query performance does not vary with the size of the database.
The system (100) further comprises a computation means (108) which is configured to extrapolate the impact of the identified factors at each of the processing stages to develop one or more models as a function of one or more database parameters. Referring to figure 3, the one or more models may include, but are not limited to, a CPU system model, a concurrent workload model, a disk I/O subsystem model, an execution unit model and a fetching unit model.
In accordance with an embodiment, the CPU System Model (304) models a CPU for its number of cores, its processing speed and its RAM size, so that one knows how much time is taken by a specific CPU for a specific computation. The Disk I/O System Model (310) is configured to model the disk I/O system transparently as to whether the I/O is from a local disk or from a SAN. This component models the architecture of the storage subsystem, which could be hierarchical in nature. The model outputs the I/O seek time and I/O transfer time for a given I/O system and I/O request.
In accordance with an embodiment, the Concurrent Workload Model (306) models the behavior of a concurrent workload on any given DB system. It models various concurrency control policies and algorithms. It outputs the waiting time a query may incur on a specified DB with a specified volume and type of concurrent queries.
In accordance with an embodiment, Execution Unit Model (302) models the execution time a query may incur on a specific size of database during its processing on a DB. It takes inputs from CPU System Model (304) and Concurrent Workload Model (306) to derive the query execution time. This unit models the cost of basic components of query execution such as sorting, nested loop join, hash join etc.
In accordance with an embodiment, the Fetching Unit Model (308) models the time taken by a query to retrieve data from a database of a specific size. It takes inputs from the Disk I/O System Model (310) to calculate the data fetching time. This unit models the cost of fetching data for a query, considering access from the cache or from a hierarchy of storage subsystems.
The models are developed as functions of one or more database parameters. The database parameters comprise the database size and the database system configuration.
The system further comprises of a prediction tool (110) which actually predicts the total elapsed response time of a query taking inputs from Execution Unit Model (302) and Fetching Unit Model (308) for a specific schema of database and a particular size of database.
The database query response time is predicted in terms of size of the query result, concurrency control mechanism of the database server, query execution cost, number of processors in the database server, size of database cache and disk subsystem performance.
A query response time could quantitatively be a function of the following:
1. Size of the query result (expected number of rows * expected size of each row) which may be dependent on size of database.
2. Concurrency control mechanism of the DB server.
3. Query execution cost which may depend on the access path chosen by the query optimizer, size of server's various caches.
4. Number of processors in the DB server
5. Size of the Cache at DB server which is dependent on the hardware platform.
6. Disk subsystem performance.
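The six factors enumerated above can be captured, for illustration, in a simple input structure. The class and field names below are hypothetical and are not part of the specification; the sketch merely shows what a prediction tool would need to be supplied:

```python
from dataclasses import dataclass


@dataclass
class QueryCostInputs:
    """Hypothetical container for the six factors listed above."""
    expected_rows: int       # 1. expected number of rows in the query result
    row_size_bytes: int      # 1. expected size of each row
    concurrent_queries: int  # 2. workload seen by the concurrency control mechanism
    execution_cost: float    # 3. cost of the access path chosen by the optimizer
    num_processors: int      # 4. number of processors in the DB server
    db_cache_bytes: int      # 5. size of the cache at the DB server
    disk_latency_ms: float   # 6. average record access time on the disk subsystem

    def result_size_bytes(self) -> int:
        # Factor 1: expected number of rows * expected size of each row
        return self.expected_rows * self.row_size_bytes
```

Only the first factor is size-dependent in an obvious arithmetic way; the others feed the component models described with reference to figure 3.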
As time progresses, both the database size and the transaction workload on the system may increase, which can affect the query response time. The variation of query response time with an increase in the workload on the system can be modeled for a fixed size of database. The Concurrent Workload Model (306) can be plugged into this framework while doing the prediction.
A query with a specific design on a specific DB server with a particular schema may initially perceive the best response time; however, as the database size grows, that query design may no longer yield the best results. Moreover, the schema may require changes, in terms of index creation etc., to promise the same performance for the query on the new large-sized database. The sizes of various caches at the DB server may need to be upgraded to improve the response time of the query on a database of increased size. The prediction tool may be used to estimate the performance of a query in the future. If it is not acceptable, the user has the option to modify the query design or change the DB server settings to get the desired performance.
Query processing is observed at fine granularity level to formulate a framework which shows the parameters which change with increase in database size and affect the query response time as well.
Extrapolation
Based on the phases of query processing discussed above, each phase is modeled as a function of the size of the database, and these models are used for estimating the query response time.
In accordance with an embodiment, still referring to figure 3, the framework for predicting query response time with an increase in database size is explained.
The prediction tool (110) takes as input the SQL query under evaluation, the database schema, the database/table sizes, the current DB server settings and the infrastructure details, which include hardware details such as memory size, number of cores etc. These inputs could be specified in a language which could be interpreted by the tool. The tool examines the query and estimates the given query's response time by consulting the various models shown in figure 3. Each of these units could be a mathematical model or may be based on experimental results.
Still with reference to Figure 2, the query processing time could be formulated as:
QRT = Parsing_Time + Wait_Time + Execution_Time + Fetching_Time
where Wait_Time is the extra time contributed to a query's execution due to the presence of other queries on the system.
All those time constants which are invariant to the size of the database are ignored, so that only the components which contribute to the elapsed time with the growth of the database size are considered. By way of a specific example, consider a database of size 'N'.
Parsing time is independent of the database size. So this component is ignored.
Wait_Time may differ for different database servers due to their different concurrency control algorithms and policies. It primarily depends on the number of conflicting queries, Q, and the type of queries. For a heterogeneous mix of queries, it may be a function of the database size as well. Therefore, this part of the query execution time shall be provided by the database concurrency control model, as shown in Fig. 3, and is represented as W(Q, N).
Execution time depends on two parameters: the system architecture (number of processors, memory size etc.) and the execution path given by the parsing unit. The former is independent of the size of the database, whereas the latter depends highly on the database size. The expected number of rows to be returned by the DB server increases with database size; the rate of increase will depend on the model of the query optimizer. Let us assume Execution_Time = K(N) + constants, where K(N) models the number of operations, with their costs, and the cardinality of the data set processed by the query.
Fetching time depends on the DB cache size and the disk subsystem. Let us assume Fetching_Time = F(N) * (H(N)*Tcache + (1 - H(N))*Tstoragesubsystem), where F(N) returns the number of fetches, which depends on the path executed at the execution phase and the size of the database. It is assumed that each fetch returns the same number of bytes and that all fetches together correspond to the cardinality of the query result. H(N) is the hit ratio for the DB cache, which decreases with an increase in the number of concurrent transactions as well as with the size of the database for uniform data access; Tcache is the average time taken to retrieve a record from the cache; Tstoragesubsystem is the average time taken to access a record from the storage subsystem.
Therefore,
QRT = K(N) + W(Q, N) + F(N) * {H(N)*Tcache + (1 - H(N))*Tstoragesubsystem}
When the required data is fed into these various models, we can obtain an approximate query elapsed response time and observe its behavior with an increase in the size of the database.
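The composed formula for QRT can be sketched in code. The component models K, W, F and H are passed in as plain functions; the function names and the toy models used in the accompanying test are illustrative assumptions, not part of the specification:

```python
def query_response_time(N, Q, K, W, F, H, t_cache, t_storage):
    """Approximate QRT = K(N) + W(Q, N) + F(N) * (H(N)*Tcache + (1 - H(N))*Tstoragesubsystem).

    N          -- projected database size
    Q          -- number of conflicting concurrent queries
    K, W, F, H -- component models supplied by the framework of figure 3
    t_cache    -- average time to retrieve a record from the DB cache
    t_storage  -- average time to access a record from the storage subsystem
    """
    hit_ratio = H(N)  # DB cache hit ratio for a database of size N
    # Average cost per fetch: cache hits are cheap, misses go to the storage subsystem
    per_fetch = hit_ratio * t_cache + (1 - hit_ratio) * t_storage
    return K(N) + W(Q, N) + F(N) * per_fetch
```

Evaluating this function at several projected sizes N, with the component models fitted from measurements, lets one observe the growth of the elapsed time as the database grows, without executing the query on the future database.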
BEST MODE/EXAMPLE FOR WORKING OF THE INVENTION
The system and method for predicting a query response time for a database in an application development environment may be illustrated by the working example stated in the following paragraphs; the process is not restricted to the said example only:
Let us consider a specific DB server, Oracle 10g [16], as a case study to show the applicability of the framework. Oracle, in dedicated mode, has many monitoring processes and an 'oracleorcl' process for processing the queries. The 'orcl' process does all the work of parsing and executing. Fetching is done by a separate process called DBWR.
To estimate a query response time on Oracle 10g according to the proposed framework, we need models for the functions K(N), Q, F(N), H(N), Tcache and Tstoragesubsystem.
Tcache and Tstoragesubsystem can be obtained from the inputted infrastructure details via the 'CPU system' and 'Disk I/O subsystem' models. These are in terms of time per fetch. 'Q' is dependent on the concurrency control policy of the DB server and may be obtained from the DB server model. A DB server using a strong serialization approach may have a high value for 'Q'; however, a DB server using the semantics of applications may work with weak serialization and decrease the number of conflicting transactions, and hence have a low value for 'Q'. This can be obtained from the Concurrent Workload Model (306).
Referring to figure 4, F(N) is dependent on the size of the query result. It returns the number of fetches required to produce the query result. It takes certain inputs, such as the DB schema; in the case of Oracle, the DB schema can be imported into the model system using the 'dump' facility. A SQL read may only read from the storage subsystem, while a SQL update may perform both a read and a write on the storage subsystem. For simplicity, we assume only SQL reads, so that in the latter case the time taken will be just twice that of the former. The challenge here is to estimate the number of rows returned by the query, which is independent of the access path chosen by the execution unit but dependent on the size of the database. Oracle has a tool called Explain Plan [16], which gives an estimate of the number of rows returned, using information about the DB schema and the sizes of the various tables in the database. Another tool, TkProf, gives the exact number of fetches performed by the query during its execution. Since we do not have a mechanism for executing the query on the future database, we can take measurements by executing a test workload and build a model using TkProf. Both these tools can be used together, as shown in figure 4, to build F(N), which can return the size of the query result on inputting a future size of the database.
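One simple way to build F(N) from such measurements is to fit the measured (database size, fetch count) pairs and extrapolate to a future size. The least-squares fit below is only an illustrative sketch of that step; the specification does not prescribe a particular fitting method, and a nonlinear form may be needed when the optimizer's access path changes with size:

```python
def fit_fetch_model(sizes, fetch_counts):
    """Fit F(N) by ordinary least squares over measured (size, fetches) pairs,
    e.g. fetch counts reported by TkProf for test workloads run on databases
    of several sizes. Returns a callable F extrapolating to a future size N."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(fetch_counts) / n
    # Standard OLS slope and intercept
    sxx = sum((x - mean_x) ** 2 for x in sizes)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, fetch_counts))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return lambda N: slope * N + intercept
```

The fitted callable plays the role of F(N) in the framework: measurements at small test sizes are extrapolated to the projected production size of the database.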
Referring to figure 5, K(N) is the expected execution cost of the query, which depends on the DB server system parameters such as the cache etc., the query result size and the cost of the access path, which includes the operations performed for executing the query. Note that the cost of the operations performed after fetching, such as 'DISTINCT' etc., is counted in K(N) only. The query result size can be provided by F(N). As discussed before, in Oracle 10g, Explain Plan can give the expected access path followed by the query for execution, and TkProf can provide the actual execution cost of the query. The breakup of the execution cost in terms of cache reads and physical reads can be obtained by running AWR [16] reports in Oracle. The cost of some operations, such as joins, depends on the DB server settings (e.g. a higher PGA target area gives a lower cost for the sort operation), which can be obtained from AWR reports. These three tools can be used together, as shown in Figure 5, to model K(N).
As discussed above, in Oracle, H (N) and W (Q, N) can be modeled by doing measurements using AWR [16] on the given system.
The preceding description has been presented with reference to various embodiments of the invention. Persons skilled in the art and technology to which this invention pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope of this invention.
WE CLAIM:
1. A method for predicting a database query elapsed response time while being
transparent to underlying database system and hardware, for a projected
production environment at the stage of application development, the method
comprising:
processing one or more database queries, such that the processing is performed in one or more stages;
identifying one or more factors affecting the database query response time and influenced by an increase in database size at each of the processing stages;
measuring the impact of the identified factors at each of the processing stages for developing one or more models as a function of one or more database parameters; and
predicting the database query elapsed response time by using the developed models.
2. The method as claimed in claim 1, wherein the one or more stages comprise parsing, executing and fetching.
3. The method as claimed in claim 1, wherein the factors affecting the database query response time include, but are not limited to, design of the database query, database schema, database server, workload on the server, disk subsystem and hardware platform.
4. The method as claimed in claim 1, wherein the one or more models include, but are not limited to, a CPU system model, a concurrent workload model, a disk I/O subsystem model, an execution unit model and a fetching unit model.
5. The method as claimed in claim 1, wherein the one or more database parameters comprise the database size and the database system configuration.
6. The method as claimed in claim 1, wherein the database query elapsed response time is predicted in terms of the size of the query result, the concurrency control mechanism of the database server, the query execution cost, the number of processors in the database server, the size of the database cache and the disk subsystem performance.
7. A system for predicting a database query elapsed response time while being transparent to the underlying database system and hardware, for a projected production environment at the stage of application development, the system comprising:
a relational database and a database query in the application development environment;
one or more processors configured to process the database query in one or more stages;
an identification module configured to identify one or more factors affecting the database query response time and influenced by an increase in database size at each of the processing stages;
a computation means configured to extrapolate the impact of the identified factors at each of the processing stages for developing one or more models as a function of one or more database parameters; and
a prediction tool configured to predict the query elapsed response time by using the developed models.
8. The system as claimed in claim 7, wherein the one or more stages comprise parsing, executing and fetching.
9. The system as claimed in claim 7, wherein the factors affecting the query response time may include, but are not limited to, the design of the database query, the database schema, the database server, the workload on the server, the disk subsystem and the hardware platform.
10. The system as claimed in claim 7, wherein the one or more models may include, but are not limited to, a CPU system model, a concurrent workload model, a disk I/O subsystem model, an execution unit model and a fetching unit model.
11. The system as claimed in claim 7, wherein the database parameters comprise the database size and the database system configuration.
12. The system as claimed in claim 7, wherein the database query elapsed response time is predicted in terms of the size of the query result, the concurrency control mechanism of the database server, the query execution cost, the number of processors in the database server, the size of the database cache and the disk subsystem performance.