FORM-2
THE PATENT ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
PROVISIONAL SPECIFICATION
(See section 10 and Rule 13)
"CAPACITY SIZING METHODOLOGY"
TATA CONSULTANCY SERVICES LIMITED
An Indian Company
of Bombay House, 24, Sir Homi Mody Street,
Mumbai-400 001, Maharashtra, India.
THE FOLLOWING SPECIFICATION DESCRIBES THE INVENTION
Field of Invention
The invention relates to the field of capacity sizing methodology.
Background of Invention:
Capacity planning is always a challenge for large enterprises due to changing workloads, applications, technologies, and infrastructure. Future capacity is estimated using existing capacity as a base.
Very often it is required to budget the hardware cost for new applications well in advance of implementation, typically during requirements gathering and solution architecture formulation. IT service companies face several challenges in sizing for new applications. The sizing needs to be done for multiple technology options, e.g. WebSphere-DB2 or WebLogic-Oracle, on Windows, AIX, HP-UX, or Solaris. The number of combinations of stacks is quite significant, which makes it impractical to size separately for each stack combination. Moreover, for several stack components vendors may not have a well-published sizing methodology.
Capacity Sizing Tools are widely used in the industry for extrapolating sizing requirements based on measurements taken from existing systems.
PCT application number 2004/111850 discloses a method and computer system for providing a cost estimate for sizing a computer system. This method determines the cost estimate by determining the hardware requirements for each object of the selected application program and estimating the corresponding costs. That invention does not deal with sizing for a variety of stacks.
US Patent No. 6,542,854 discloses a method and mechanism for sizing a hardware system. The workload is modelled into a set of generic system activities. Suitable hardware systems or components are selected by analysing the workload and hardware profiles in terms of the generic system activities.
US Patent No. 6,542,893 shows a database sizer which calculates the total mass storage requirements for a database table including database storage requirements, application and software requirements, system table requirements, scratch and sort requirements, log file requirements, and growth requirements. The calculated storage requirements include separately output operating system and application software space requirements, system table space requirements, scratch and sort space requirements and log file space requirements.
Quick Sizer (Sizing tool from SAP) assists in selecting the hardware and system platform that meets specific business requirements. Quick Sizer provides online, up-to-date sizing based on business-oriented figures, such as the number of users or expected number of business processes and documents.
Other sizing tools like IBM System Workload Estimator, HyPerformix IPS Capacity Manager and the like are also used in industry for extrapolating sizing requirements based on measurements taken from existing systems. For capacity planning of new applications, technology vendors provide their own tools.
The above-said tools cannot be implemented on different kinds of stacks with remote wide area networks. These tools are useful only when the project has already been implemented and sizing requirements are to be extrapolated based on measurements from production.
Thus, there was a need for a tool which uses a general-purpose technique for network bandwidth, J2EE, and RDBMS sizing, and which is vendor neutral.
Objects of invention:
The object of this invention is to provide a tool for sizing of infrastructure in a computer environment.
Another object of this invention is to provide sizing with minimum inputs.
One more object of this invention is to provide a sizing tool which can be used for projects which use any kind of technology stack.
One more object of this invention is to provide a sizing tool which uses a general-purpose technique for sizing and is independent of the vendor of the hardware used for the project.
Yet another object of this invention is to provide a sizing tool which is easy to use.
Still one more object of this invention is to provide a sizing tool which is efficient and accurate.
Summary of the Invention:
The tool in accordance with this invention envisages a means for capacity sizing of the infrastructure in a computer environment, specifically network bandwidth, CPU capacity required for J2EE application server, CPU capacity required for RDBMS server, and storage required for RDBMS.
In accordance with one practical embodiment of this invention, the sizing tool is based on a standard reference architecture which is common in IT enterprise systems. The enterprise has a data centre with employees spread across N branches. Branch k has a network link of bk Kbps (kilobits per second) for k = 1, 2, ..., N. The total pipe into the data centre is of bandwidth BDC Mbps. The applications at the data centre are hosted on a set of web servers, application servers, and database servers, and the enterprise data resides on external storage accessed through a Storage Area Network.
In accordance with one practical embodiment of this invention the sizing is provided for a given business workload and for given response time and capacity utilization targets. Business workload inputs are collected for:
1. Online transaction processing
2. Batch processing
3. Reports
4. External Interfaces
5. Data Volumes
6. Users
Online transaction information is collected in the form of business interactions or use cases. External interface requirements are similar to transaction requirements. In the case of data volumes, the list of business entities, their sizes, their volumes as on date, growth rate, and data retention period are collected. User information is collected in terms of types of users, number of registered users, number of concurrent users, think times, and mapping of transactions to users.
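Although the specification leaves the arithmetic to the worksheets, the user inputs above (number of concurrent users, think times) imply transaction arrival rates via the interactive response-time law. A minimal sketch, with hypothetical function names and an assumed 1-second response time:

```python
def arrival_rate_per_min(concurrent_users, think_time_s, resp_time_s=1.0):
    """Throughput implied by N concurrent users with think time Z and
    response time R, via the interactive response-time law X = N / (Z + R)."""
    per_sec = concurrent_users / (think_time_s + resp_time_s)
    return per_sec * 60.0

# e.g. 300 concurrent users with a 20 s think time
rate = arrival_rate_per_min(300, 20.0)   # roughly 857 transactions/min
```

This derived rate is the kind of per-transaction throughput the CPU and network sizing steps below would consume.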
In accordance with one practical embodiment of this invention, the user enters inputs into worksheets designated for input collection in Excel.
In accordance with one practical embodiment of this invention, the sizing is done in four dimensions: CPU capacity, RAM required, Storage requirements and network bandwidth.
Sizing for CPUs comprises Database server sizing, Application server sizing and Web server sizing.
Database server sizing
For online transactions, their database complexity is captured in terms of the number of database reads, writes, and updates. The complexity is normalized to TPC-C complexity using the 'magic number' of 1 TPC-C = 3 writes. This 'magic number' has been arrived at empirically over a number of years of practice.
The transaction throughputs are known from the inputs, and using the complexity per transaction in terms of TPC-C, the tpmC required per transaction is derived. These are added to get the total tpmC. Communication overheads, typically 40%, are added to this number. The tpmC is inflated to the utilization levels required by the user of the tool.
The CPU sizing for the database is thus done in tpmC. This is independent of the type of CPU, the type of database technology, and the operating system. By looking up vendor ratings at the TPC-C site (www.tpc.org) or by using internally published tpmC ratings of vendors, one can easily arrive at the number of CPUs required for the database server.
Application server sizing
For online transactions or reports, their application server complexity is captured in terms of the number of basic operations. While we describe the approach in the context of J2EE applications, the methodology extends even to .NET or other technologies.
In the context of J2EE applications the Java operations possible are calls to Servlets, Entity Beans, Message Driven Beans (MDB), JDBC drivers, Stateless Session Beans, Stateful Session Beans, JMS, and calls to messaging products such as MQ. The JDBC, JMS and messaging calls can be done in bulk or one at a time.
First, each of these operations is converted into the cost of the lightest-weight operation, the servlet call. The user is given the facility to specify conversion factors for the conversion. If a servlet call has a cost of 1, then entity beans have a cost of 3, MDB a cost of 2, JDBC a cost of 4, Stateless Session Beans a cost of 2, Stateful Session Beans a cost of 3, JMS a cost of 2, MQ calls a cost of 3, and bulk options have half the cost. These defaults are arrived at using a mix of benchmarks and real-life data collection.
The conversion factors help in arriving at the overall cost of a transaction, which is then normalized by the magic number 20 to arrive at the number of jAppServer Operations per transaction. Multiplying this by the transaction rate, we arrive at the JOPS for that transaction. As in the case of database servers, communication costs and utilization factors are considered to get the final JOPS.
Thus the server sizing method is independent of the hardware, platform, or OS. The SPEC site (www.spec.org) provides JOPS ratings of vendor hardware on a given platform on a given OS. This in turn is used to determine the number of CPUs required at the application server.
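A hedged sketch of the JOPS computation, using the default conversion factors and the magic number 20 from the text. The per-second rate unit, the overhead and utilization defaults, and the 200 JOPS-per-CPU rating are assumptions (real ratings would come from www.spec.org), and the bulk-call halving is omitted for brevity:

```python
import math

# Default conversion factors from the text, in servlet units (servlet = 1)
COSTS = {"servlet": 1, "entity_bean": 3, "mdb": 2, "jdbc": 4,
         "stateless_sb": 2, "stateful_sb": 3, "jms": 2, "mq": 3}

def app_server_cpus(transactions, comm_overhead=0.40, target_util=0.70,
                    jops_per_cpu=200):
    """transactions: list of (rate_per_sec, {operation: count}).
    Operation costs are summed in servlet units, then normalized by the
    magic number 20 to get jAppServer operations per transaction."""
    jops = 0.0
    for rate, ops in transactions:
        cost = sum(COSTS[op] * n for op, n in ops.items())
        jops += rate * cost / 20.0
    jops *= 1.0 + comm_overhead      # communication overhead
    jops /= target_util              # inflate to the utilization target
    return math.ceil(jops / jops_per_cpu)
```

For example, 50 transactions/sec each making 2 servlet calls and 1 JDBC call costs 6 servlet units, i.e. 0.3 jAppServer operations per transaction before overheads.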
The application and database server sizing are for the typical IT application that is not compute intensive, and wherein the cost of communication is significant compared to the cost of computation.
Web server sizing
The majority of the hardware and licensing cost goes into the database and application servers, so a detailed methodology is used for sizing them. In the case of web servers, which consume much less hardware and have very low licensing costs, thumb rules that work well in practice are used instead. The thumb rule used in this invention is to support 150 pages/sec per 20,000 tpmC of CPU capacity in the web server. This is cross-validated with the number of user sessions per CPU, which is available from popular benchmarks such as SPECweb2005.
Sizing for RAM:
Memory sizing is again done using thumb rules. For lighter-weight processing, such as at web servers or when only servlets are in use, 1 GB per CPU is used. For moderate usage, which is typical, 2 GB per CPU is taken, and only for higher-end processing is 4 GB or more taken.
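The two thumb rules above (150 pages/sec per 20,000 tpmC of web server capacity, and 1/2/4 GB of RAM per CPU by workload weight) reduce to one-liners; the function names are hypothetical:

```python
def web_server_tpmc(pages_per_sec):
    # Thumb rule from the text: 150 pages/sec per 20,000 tpmC of capacity
    return pages_per_sec / 150.0 * 20000.0

def ram_gb(cpus, workload="moderate"):
    # 1 GB/CPU light (servlet-only), 2 GB/CPU moderate, 4 GB/CPU heavy
    per_cpu = {"light": 1, "moderate": 2, "heavy": 4}[workload]
    return cpus * per_cpu
```

So a site serving 300 pages/sec needs about 40,000 tpmC of web server capacity, and a 4-CPU moderate-workload server would get 8 GB.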
Sizing for Storage:
The business data volumes specified by the tool user are taken, along with the growth rate, retention period, and size of data, to first arrive at the minimum space required in storage. To this we add overheads for indexes, RAID, logs, row space expansion, and disk utilization. The user can change these settings. However, in practice the disk space required is typically 4 to 6 times the minimum required to store the usable data.
After computing the space requirements, the IO requirements are considered. Using transaction rates, report rates, and batch processing rates as well as the I/Os required for each of these, the I/Os per second (IOPS) is computed.
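A sketch of the storage computation: minimum space from entity sizes, volumes, growth, and retention, inflated by a single overhead factor standing in for the index, RAID, log, row-expansion, and disk-utilization overheads (the 5x default reflects the 4-6x range stated above). The compound-growth model is an assumption, as are all names:

```python
def storage_gb(entities, overhead_factor=5.0):
    """entities: list of (row_size_bytes, rows_today, annual_growth,
    years_retained). Returns total disk space in GB."""
    raw = 0.0
    for row_size, rows, growth, years in entities:
        rows_end = rows * (1.0 + growth) ** years   # volume at end of retention
        raw += row_size * rows_end
    return raw * overhead_factor / 2**30

def iops(rates_per_sec, io_per_unit):
    """I/Os per second from transaction/report/batch rates and the
    I/O count each of those requires."""
    return sum(r * io for r, io in zip(rates_per_sec, io_per_unit))
```

For example, one entity of 1 KB rows at a million rows with no growth needs 1 GB raw, hence 5 GB after overheads at the assumed factor.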
Network Sizing
Network sizing for throughput is done at the data centre level by taking the transaction rates and reporting rates, and having the tool user input the bytes in and bytes out per transaction and report. These are multiplied, protocol overhead is added, and a utilization factor is applied to get the final network bandwidth.
At the branch level the throughputs are computed based on number of users per branch and the transactions they do. The same approach is adopted as for the data centre network bandwidth sizing.
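The throughput part of network sizing, at either the data-centre or the branch level, can be sketched as below; the 20% protocol overhead and 60% utilization defaults are illustrative assumptions, not values from the specification:

```python
def bandwidth_mbps(flows, protocol_overhead=0.20, target_util=0.60):
    """flows: list of (rate_per_sec, bytes_in, bytes_out), e.g. one entry
    per transaction type (data centre) or per branch user population."""
    bits = sum(rate * (b_in + b_out) * 8 for rate, b_in, b_out in flows)
    bits *= 1.0 + protocol_overhead   # protocol overhead added
    bits /= target_util               # size for the utilization target
    return bits / 1e6
```

So 100 transactions/sec at 500 bytes in and 1,500 bytes out implies 1.6 Mbps of payload, or 3.2 Mbps after overhead and utilization at the assumed defaults.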
Sizing for Response Times
Once the sizing is done for throughput, we get the minimum capacity required for the system. Thereafter, the sizing is tuned for response times. As of now, the tool lets the user specify a breakup of response time across the network, web, application, and database servers.
To size for response times, a closed system model is taken, and except for the resource being sized, every other resource is treated as a delay centre with delay equal to its response time target. If we are sizing for the network, we consider a closed system model with the number of users specified by the workload, the network resource which will have queuing, and a fixed delay centre outside the network (equal to the response time specifications for the web, app, and database servers).
In accordance with one practical embodiment of this invention, the response time for a given capacity is computed using the Approximate Mean Value Analysis technique. If the response time satisfies the given target, the given capacity is returned as the final capacity required. Otherwise, the capacity is doubled repeatedly until the response time target is met. Thus an upper bound and a lower bound on capacity (half of the upper bound) are obtained. A binary search is done within these bounds until the response time target is met for a given capacity C; C is returned as the final capacity required.
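A runnable sketch of the doubling-plus-binary-search loop. For brevity it uses exact single-station MVA (the sized resource as the one queueing centre, with every other tier folded into a fixed external delay) rather than the approximate MVA named in the text, and it scales service demand linearly with capacity; all parameter values are illustrative:

```python
def response_time(n_users, demand, ext_delay):
    """Exact MVA for one queueing station plus a fixed delay centre."""
    r, q = 0.0, 0.0
    for n in range(1, n_users + 1):
        r = demand * (1.0 + q)      # residence time at the sized resource
        x = n / (r + ext_delay)     # system throughput
        q = x * r                   # mean queue length at the resource
    return r

def size_for_response(n_users, base_demand, ext_delay, target, tol=1e-3):
    """Double capacity until the target is met, then binary-search down."""
    c = 1.0
    while response_time(n_users, base_demand / c, ext_delay) > target:
        c *= 2.0                    # upper bound; previous c is the lower bound
    lo, hi = c / 2.0, c
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if response_time(n_users, base_demand / mid, ext_delay) <= target:
            hi = mid
        else:
            lo = mid
    return hi
```

In the tool the continuous capacity c would map to discrete CPUs or bandwidth units, with the search done over those units instead.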
In accordance with one practical embodiment of this invention, the tool has been implemented in Microsoft Excel to make it widely usable across our enterprise. In the services industry we have a number of proposals to size for, and quite often the sizing expert needs to provide capacity estimates within stringent deadlines. The use of Excel facilitates portability as well as usability.
In accordance with one practical embodiment of this invention, the first sheet in the tool is labeled "Constants" since it contains all the data required for conversion factors, tpmC ratings, JOPS ratings, server utilization and communication overhead constants.
In accordance with a practical embodiment of this invention, the user specifies the workload. The database and application server sizing is done by specifying inputs for transaction complexity in their respective worksheets. The tool in accordance with this invention automatically copies the transaction names and rates from the workload sheet to the DB and application server sheets.
In accordance with one practical embodiment of this invention the throughput computations are all done using simple formulas in Excel. The response time sizing as given by the algorithm is implemented in Visual Basic Macros in the Excel worksheets.
Brief Description of Drawing:
The invention will now be described with reference to the accompanying drawing, in which:
Figure 1 is a block diagram of the reference system architecture in accordance with this invention.
Figure 2 is a flow diagram of the algorithm for sizing of response time in accordance with this invention.
Detailed Description of Drawings:
According to this invention there is provided a tool for capacity sizing.
Figure 1 illustrates the reference architecture of the system in accordance with this invention. The system architecture comprises a data centre with employees spread across N branches. Branch k has a network link of bk Kbps (kilobits per second) for k = 1, 2, ..., N. The total pipe into the data centre is of bandwidth BDC Mbps. The applications at the data centre are hosted on a set of web servers, application servers, and database servers, and the enterprise data resides on external storage accessed through a Storage Area Network.
The sizing provided by the tool in accordance with this invention is for a given business workload and for given response time and capacity utilization targets. Business workload inputs are collected for:
1. Online transaction processing
2. Batch processing
3. Reports
4. External Interfaces
5. Data Volumes
6. Users
Online transaction information is collected in the form of business interactions or use cases. External interface requirements are similar to transaction requirements. In the case of data volumes, the list of business entities, their sizes, their volumes as on date, growth rate, and data retention period are collected. User information is collected in terms of types of users, number of registered users, number of concurrent users, think times, and mapping of transactions to users.
The sizing is done in four dimensions: CPU capacity sizing; RAM sizing; Storage requirements; and network bandwidth.
Sizing for CPUs comprises Database server sizing, Application server sizing and Web server sizing.
Database server sizing
For online transactions, their database complexity is captured in terms of the number of database reads, writes, and updates. The complexity is normalized to TPC-C complexity using the 'magic number' of 1 TPC-C = 3 writes. This 'magic number' has been arrived at empirically over a number of years of practice.
The transaction throughputs are known from the inputs, and using the complexity per transaction in terms of TPC-C, the tpmC required per transaction is derived. These are added to get the total tpmC. Communication overheads, typically 40%, are added to this number. The tpmC is inflated to the utilization levels required by the user of the tool.
The CPU sizing for the database is thus done in tpmC. This is independent of the type of CPU, the type of database technology, and the operating system. By looking up vendor ratings at the TPC-C site or by using internally published tpmC ratings of vendors, one can easily arrive at the number of CPUs required for the database server.
Application server sizing
For online transactions or reports, their application server complexity is captured in terms of the number of basic operations. While we describe the approach in the context of J2EE applications, the methodology extends even to .NET or other technologies.
First, each of these operations is converted into the cost of the lightest-weight operation, the servlet call. The user is given the facility to specify conversion factors for the conversion. If a servlet call has a cost of 1, then entity beans have a cost of 3, MDB a cost of 2, JDBC a cost of 4, Stateless Session Beans a cost of 2, Stateful Session Beans a cost of 3, JMS a cost of 2, MQ calls a cost of 3, and bulk options have half the cost. These defaults are arrived at using a mix of benchmarks and real-life data collection.
The conversion factors help in arriving at the overall cost of a transaction, which is then normalized by the magic number 20 to arrive at the number of jAppServer Operations per transaction. Multiplying this by the transaction rate, we arrive at the JOPS for that transaction. As in the case of database servers, communication costs and utilization factors are considered to get the final JOPS.
Thus the server sizing method is independent of the hardware, platform, or OS. The SPEC site (www.spec.org) provides JOPS ratings of vendor hardware on a given platform on a given OS. This in turn is used to determine the number of CPUs required at the application server.
In the context of J2EE applications the Java operations possible are calls to Servlets, Entity Beans, Message Driven Beans, JDBC drivers, Stateless Session Beans, Stateful Session Beans, JMS, and calls to messaging products such as MQ. The JDBC, JMS and messaging calls can be done in bulk or one at a time.
The application and database server sizing are for the typical IT application that is not compute intensive, and wherein the cost of communication is significant compared to the cost of computation.
Web server sizing
The majority of the hardware and licensing cost goes into the database and application servers, so a detailed methodology is used for sizing them. In the case of web servers, which consume much less hardware and have very low licensing costs, thumb rules that work well in practice are used instead. The thumb rule used in this invention is to support 150 pages/sec per 20,000 tpmC of CPU capacity in the web server. This is cross-validated with the number of user sessions per CPU, which is available from popular benchmarks such as SPECweb2005.
Sizing for RAM:
Memory sizing is again done using thumb rules. For lighter-weight processing, such as at web servers or when only servlets are in use, 1 GB per CPU is used. For moderate usage, which is typical, 2 GB per CPU is taken, and only for higher-end processing is 4 GB or more taken.
Sizing for Storage:
The business data volumes specified by the tool user are taken, along with the growth rate, retention period, and size of data, to first arrive at the minimum space required in storage. To this we add overheads for indexes, RAID, logs, row space expansion, and disk utilization. The user can change these settings. However, in practice the disk space required is typically 4 to 6 times the minimum required to store the usable data.
After computing the space requirements, the IO requirements are considered. Using transaction rates, report rates, and batch processing rates as well as the I/Os required for each of these, the I/Os per second (IOPS) is computed.
Network Sizing
Network sizing for throughput is done at the data centre level by taking the transaction rates and reporting rates, and having the tool user input the bytes in and bytes out per transaction and report. These are multiplied, protocol overhead is added, and a utilization factor is applied to get the final network bandwidth.
At the branch level the throughputs are computed based on number of users per branch and the transactions they do. The same approach is adopted as for the data centre network bandwidth sizing.
Sizing for Response Times
Once the sizing is done for throughput, we get the minimum capacity required for the system. Thereafter, the sizing is tuned for response times. As of now, the tool lets the user specify a breakup of response time across the network, web, application, and database servers.
To size for response times, a closed system model is taken, and except for the resource being sized, every other resource is treated as a delay centre with delay equal to its response time target. If we are sizing for the network, we consider a closed system model with the number of users specified by the workload, the network resource which will have queuing, and a fixed delay centre outside the network (equal to the response time specifications for the web, app, and database servers).
Figure 2 illustrates the algorithm for response time estimation. The response time for a given capacity is computed using the Approximate Mean Value Analysis technique (1). If the response time satisfies the given target, the given capacity is returned as the final capacity required (2). Otherwise, the capacity is doubled repeatedly until the response time target is met (3). Thus an upper bound and a lower bound on capacity (half of the upper bound) are obtained. A binary search is done within these bounds until the response time target is met for a given capacity C; C is returned as the final capacity required (4).
The tool in accordance with this invention has been implemented in Microsoft Excel to make it widely usable across our enterprise. In the services industry we have a number of proposals to size for, and quite often the sizing expert needs to provide capacity estimates within stringent deadlines. The use of Excel facilitates portability as well as usability.
The first sheet in the tool is labeled "Constants" since it contains all the data required for conversion factors, tpmC ratings, JOPS ratings, server utilization and communication overhead constants.
The user specifies the workload. The database and application server sizing is done by specifying inputs for transaction complexity in their respective worksheets. The tool in accordance with this invention automatically copies the transaction names and rates from the workload sheet to the DB and application server sheets. The throughput computations are all done using simple formulas in Excel. The response time sizing as given by the algorithm in Fig 2 is implemented in Visual Basic macros in the Excel worksheets.
While considerable emphasis has been placed herein on the particular features of a tool for sizing of infrastructure in a computer environment and on improvements with regard to it, it will be appreciated that various modifications can be made, and that many changes can be made in the preferred embodiment without departing from the principles of the invention. These and other modifications in the nature of the invention or the preferred embodiments will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.