Abstract: A non-intrusive method to monitor on-line transaction processing systems, comprising identifying application transactions based on packet data captured on the network, determining the application transaction response time, segregating the said transaction response time into processing time and network response time, determining the time spent by the client in sending a request message to the server, and segregating the said time into client and network time and server delay.
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970) &
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(Section 10; rule 13)
1. TITLE OF THE INVENTION:
"A Non Intrusive System to Monitor On-Line Transaction Processing Systems"
2. APPLICANT(S)
(a) Name: Tata Consultancy Services Ltd.
(b) Nationality: An Indian Company.
(c) Address: Air India Building, 11th Floor, Nariman Point, Mumbai - 400 021
3. PREAMBLE TO THE DESCRIPTION
The following specification describes the invention.
FIELD OF INVENTION
The present invention relates to monitoring on-line transaction processing systems.
The present invention provides a transaction analyzer that may be used to analyze and report the performance of on-line transaction processing (OLTP) systems. The present invention relates to analyzing the communication traces between the clients and the servers to determine transaction response times and communication overheads for a single server in the system, and to correlate transactions across a plurality of servers that form part of a single end-user system interaction. The present invention can run on different flavors of the Windows operating system.
DEFINITIONS:
As used in this specification, the following words are generally intended to have the meaning set forth below, except to the extent that the context in which they are used indicates otherwise.
"Data Packet" - format in which data is transmitted over a network. A packet contains the data itself as well as addresses, error checking, and other information necessary to ensure the packet arrives intact at its intended destination.
"TCP/IP" - The Internet protocol suite is the set of communications protocols that implement the protocol stack on which the Internet and most commercial networks run. It is sometimes called the TCP/IP protocol suite, after the two most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP).
"Non-intrusive Application" - In the process of measurement, there is no overhead whatsoever to the application whose performance is being monitored by the present invention. In traditional hub-based Ethernet systems, a packet on the network is received by all hosts on the same network. The packet is accepted by the host it is destined to, and the other hosts discard it. However, any other host can enable the promiscuous mode on its network card so that it can also read packets that are not destined to it. The system of the present invention enables the promiscuous mode on the network card. Thus, if the system of the present invention and the application server reside on the same Ethernet hub network, the system of the present invention can see any packet destined to or originating from the server. In a switched Ethernet
network, port mirroring needs to be enabled to allow the system of the present invention to capture packets destined to and originating from the application server. The system of the present invention does not do any active communication over the network, i.e. it does not send any packets on the network. It also does not have any component or agent running on any of the application servers. That is why the system of the present invention is non-intrusive.
BACKGROUND OF THE INVENTION:
In the state of the art, many separate, incompatible, complicated, and often unsatisfactory tools are required to perform the tasks required for managing interconnected intelligent systems. Existing management and planning tools and methodologies for such systems suffer from at least one of the following shortcomings:
1. Require user to take a trace or multiple traces (snapshot of the network and computing system over a given time period) as a basis of analysis;
2. Require network or end devices to perform calculations and store their results for subsequent retrieval or periodic reporting of this information;
3. Require clock synchronization for centralized coordination and analysis of the trace and/or stored data;
4. Analyze network and intelligent processor system components on an individual basis, and not as a whole;
5. Require user knowledge and input of the configuration, customization, and capacity of the various computer and network components (e.g., processors, adapters, buses, internal and external storage, input/output microprocessors, channels, and local and wide area links), which may be based upon manufacturers' or suppliers' claims that are erroneous or not applicable to the users' environment; moreover, in internet, business-to-business, and pervasive computing connections, a subset of the components of such connections may be owned or controlled by more than one organization, so that access to performance, configuration, and other management information typically used for performance evaluation, planning, and troubleshooting may be inaccessible for entire subsets of the system considered as a whole;
6. Require user knowledge and input of current system and network customization (e.g., tuning parameters).
With regard to network performance, users and managers of networks frequently use TCP/IP pings (i.e., architected network echo packets) to check the availability of a target resource and the network connecting to it. In addition, ping programs commonly report the ping's round trip time, and network managers can get a feel for the "usual" amount of time a ping should take between stations A and B on their network. Typically, the ping function provides one-way and two-way transfers. In one-way pings, a transmitter sends a packet to an echo server device which discards the packet and returns a time stamp to the sender. In two-way pings, the echo server returns the packet with the time stamp.
Some network and processor evaluation systems which send test packets across the network require that the evaluator have knowledge of the processor configuration and of the capacity of the individual processor components, have knowledge of the network topology, require that special proprietary code be installed in the processors and in intermediate network devices, and do not use queuing theory or provide an analytic evaluation of the test results.
Also, systems which employ queuing-theory-based evaluations of network and processor systems require that the evaluator have knowledge of the network topology, require storage and retrieval of data from intermediate network devices, require capture and analysis of network and processor traces that are depictions of the network and processors at a given time, require knowledge of the detailed configuration and customization of all processor and network devices, require knowledge of the capacity of each intermediate device and device interface, and require intensive preparation to set up and use.
PRIOR ART AND ITS LIMITATIONS
The majority of the applications in the prior art are based on multiple servers collaborating to serve the end-user. Each server specializes in its own kind of service, such as business logic, database functions, authentication, etc. A set of the servers in the system also provides user interface functions, e.g. Web-based systems used for e-business applications. But these involve large response times and reduced productivity.
One of the techniques usually applied to identify the bottlenecks is the use of timestamps in the application code. However, in a production environment this can be very expensive and, in many cases, impossible. Also, after an application is deployed it becomes very difficult to alter
or add changes to the code. Analyzing resource utilization data is also commonly employed, but it is not effective.
In another prior art, the work presented by Mogul is based on a black box approach to problem solving. The techniques presented therein use both black box and white box methods to help speed up the identification of the bottleneck servers for a selected end-user system transaction, but they differ in implementation. They track J2EE method calls and report the time spent in each method; the time spent in JDBC calls points to the time spent in database servers. With this information it is possible to get a break-up of the time spent in the application server and the database server, the main processing components of web-based systems. Examples are Performa-Sure from Quest and HP meter. But the limitation of these products is that they work only on J2EE-based systems and are therefore not application independent.
The present invention differs from yet another prior art, i.e. the OPNET ACE tool, in the sense that the OPNET tool parses the application messages and hence may not work for every application. It uses a protocol decode engine developed by Sniffer Technologies. The present invention, on the other hand, does not parse application messages and is hence application independent.
REFERENCES:
US 5,845,117 discloses a system in which the start, commit and abort functions of a transaction are managed by a task manager. When a certain transaction locks a certain resource, this information is registered in a lock manager. Accordingly, when a transaction requests a resource, the lock manager can determine whether the resource is already locked by another transaction. In such a case, the transaction must wait for the termination of the other transaction, so this information is registered in a wait-for-graph table. A deadlock detector determines whether a deadlock has occurred according to the information registered in the wait-for-graph table.
In US 6,292,488, a computer system capable of recovering from a deadlock using communication gateway devices, such as bridges, each of which uses a deadlock recovery mechanism, is disclosed. Rather than avoiding deadlocks through constant monitoring of the communications path, the bridge allows the deadlock to occur. The recovery mechanisms of
the bridges control the resolution of the deadlock. In one embodiment, the recovery mechanism within each bridge causes the local device which controls its bridge to disconnect. Additionally, the bridges terminate their requests for control of each other, thereby breaking the deadlock and allowing communications to resume. In another embodiment, the recovery mechanism within each bridge terminates the bridge's request for control of the other bridge. Additionally, the recovery mechanisms cause the bridges to become idle in accordance with a time delay value. The bridge with the shorter delay becomes active first and takes control of the communication path, thereby breaking the deadlock.
In US 6,597,907 a method for detecting a deadlocked resource condition in a pool of shared resources is disclosed. Deadlocked resource condition in a pool of shared resources is detected by measuring a characteristic of resource utilization over a predefined time interval and comparing the measured characteristic in accordance with a predicted statistical relationship. If the measured resource utilization is inconsistent with the predicted statistical relationship, a deadlocked resource condition is determined to have occurred.
In US 6,721,765 a database system providing improved methods for asynchronous logging of transactions is described. Log records are created describing changes to a database made by a transaction. When a command committing changes to the database for the transaction is received, a logging request is placed in a queue. An asynchronous logging service removes requests from the queue and transfers log records from the transaction to a shared cache. The shared cache stores log records before they are written to the transaction log. The logging service writes log pages containing log records for the transaction from the cache to the transaction log. After all log pages in the cache for the transaction have been written to the transaction log, changes to the database made by the transaction can be committed to the database.
In US 6,738,872, a remote resource management system is provided for managing resources in a symmetrical multiprocessing environment having a plurality of clusters of symmetric multiprocessors, each of which provides interfaces between cluster nodes of the symmetric multiprocessor system with a local interface and an interface controller. One or more remote storage controllers each have a local interface controller and a local-to-remote data bus. A remote fetch controller is responsible for processing data accesses across the
clusters and a remote store controller is responsible for processing data accesses across the clusters. These controllers work in conjunction to provide a deadlock avoidance system for preventing hangs.
US 6,885,641 discloses a system and method for monitoring performance, analyzing capacity and utilization, and planning capacity for networks and intelligent, network-connected processes.
Therefore the object of the present invention is to monitor applications with a workload of several hundred concurrent connections.
Another object of the present invention is to determine transaction response times and accurately break them up into processing time and network response time.
Another object of the present invention is to determine the request send delay from the client to the server and to isolate the server's contribution to the delay.
Yet another object of the present invention is to segregate the response time into network time and the time taken up by the application server for processing.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is explained by referring to the following drawings:
FIGURE 1 is a block diagram showing the architecture of the present invention.
FIGURE 2 is a detailed block diagram of the Packet Dumper.
FIGURE 3 is a detailed block diagram of the Packet Analyzer which analyzes the packet and prints the transactions and statistics.
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of the present invention are described herein in the context of a system of computers, servers, and software. Those of ordinary skill in the art will realize that the following detailed description of the present invention is illustrative only and is not intended to be in any way limiting. Other embodiments of the present invention will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be
made in detail to implementations of the present invention as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following detailed description to refer to the same or like parts.
In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
FIGURE 1 is a block diagram showing the architecture of the present invention. In order to ensure that the performance of critical systems, applications and websites is monitored with precision, it is necessary to have a monitoring tool that is able to collect application packets from the network non-intrusively. The application monitoring tool captures packets at a high rate. The tool is capable of analyzing data and providing a detailed view of application performance. The system of the present invention has two main components:
> Packet Dumper
> Packet Analyzer
The Packet Dumper (101)
The Packet Dumper (101) passively listens to the network interface. The Packet Dumper takes the application details, namely the server IP address and port, as inputs. Every incoming packet is then matched to see whether it belongs to the application, based on this input. Every matching packet is then dumped into a file. The current file is closed and a new file is created every two minutes or whenever the file size exceeds a certain specified limit, whichever is earlier. The location of the closed file is then pushed into a queue (103).
The Binary Packet Files (102)
The binary packet files store all the application-related packets that have been captured by the Packet Dumper (101) thread. These files then serve as input for the Packet Analyzer (104) thread. They can also be saved by a user who has captured packet data from a remote environment.
The In Memory First in First out (FIFO) queue (103)
The in-memory FIFO queue is used to hold information regarding packet dump files that are ready for analysis but not yet processed by the Analyzer (104). When the Packet Dumper (101) thread closes a packet dump file, the information regarding its location is en-queued into this queue. The Analyzer dequeues information from the queue and reads the location of the next packet dump file to be processed.
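The handoff between the Packet Dumper (101) thread and the Analyzer (104) thread described above can be sketched as follows. This is a minimal illustration using a thread-safe FIFO queue; the function names and file names are illustrative assumptions, not part of the specification.

```python
import queue

# In-memory FIFO queue holding the locations of closed packet dump files.
dump_file_queue = queue.Queue()

def dumper_close_file(path):
    """Called by the Packet Dumper thread when it closes a dump file."""
    dump_file_queue.put(path)  # enqueue the file location for the Analyzer

def analyzer_next_file():
    """Called by the Analyzer thread to fetch the next file to process."""
    return dump_file_queue.get()  # blocks until a file location is available

# Two dump files are closed in order; the Analyzer sees them in FIFO order.
dumper_close_file("dump_0001.pkt")
dumper_close_file("dump_0002.pkt")
print(analyzer_next_file())  # -> dump_0001.pkt
```

In a real deployment the `put` and `get` calls would run on the two separate threads; `queue.Queue` handles the required locking internally.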
The Analyzer Thread (104)
The Analyzer thread determines the next packet dump file to be processed from the in-memory FIFO queue (103). The Analyzer maintains state variables for every open TCP connection in the form of a connection data structure. The Analyzer then opens the packet dump file. For each packet encountered, the Analyzer thread will try to match it to an existing connection. If no connection structure exists for the connection, a new connection structure is allocated, and the current packet is then processed against that connection. The state variables of that connection are updated accordingly. Based on the current state of the connection and the packet type, the packet is classified as a request packet or a reply packet. A transaction data structure is created when the first request packet is encountered after a new connection is made or after the reply packet of a previous transaction. The transaction is printed after it is determined that the last reply packet belonging to the transaction has been sent. This happens when the TCP connection is torn down or when a further request packet belonging to the same connection is encountered. The transactions encountered in every packet dump file are then written into a corresponding output file.
The Output & Statistics Files (105)
The Analyzer (104) prints the analyzed information per transaction in the transaction data output files. There is one output file per packet dump file. Each row in the output file corresponds to the performance details, such as response time, of a single transaction. The Statistics
File is also printed by the Analyzer (104). Each row in the Statistics file contains aggregate application statistics like active connections, requests, replies, bytes in and bytes out. Consecutive rows in the Statistics file correspond to consecutive time intervals of a given duration.
The Application (106)
The application is a TCP/IP server which is being monitored by the system of the present invention. As requests are serviced by the application, the packets belonging to each transaction are captured off the network interface by the Packet Dumper (101) and stored in a packet dump file.
FIGURE 2 is a detailed block diagram of the Packet Dumper.
Packet Sniffer (201)
The Packet Sniffer monitors the network in promiscuous mode. In this mode, every packet that is sent on the local network is captured in memory by the Packet Sniffer. The Packet Sniffer then checks the packet headers to see if the captured packet belongs to the application being monitored. This is done by checking the following fields in the packet header:
• The Source IP address
• The Destination IP address
• The Source Port
• The Destination Port
If the packet belongs to the application being monitored, it is passed to the Packet Writer.
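The header-matching rule above can be sketched as a simple predicate: a captured packet belongs to the monitored application if either its source or its destination endpoint matches the server's IP address and port. The IP addresses and ports below are illustrative assumptions.

```python
# Assumed monitoring inputs: the server's IP address and port.
SERVER_IP, SERVER_PORT = "10.0.0.5", 8080

def belongs_to_application(src_ip, src_port, dst_ip, dst_port):
    """True if the packet's header matches the monitored server endpoint."""
    to_server = (dst_ip, dst_port) == (SERVER_IP, SERVER_PORT)
    from_server = (src_ip, src_port) == (SERVER_IP, SERVER_PORT)
    return to_server or from_server

# A request packet from a client to the server matches:
print(belongs_to_application("10.0.0.9", 41000, "10.0.0.5", 8080))  # True
# Traffic between two unrelated hosts does not:
print(belongs_to_application("10.0.0.9", 41000, "10.0.0.7", 22))    # False
```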
Packet Writer (202)
The Packet Writer appends the current packet contents to the current packet file opened for writing. The Packet Writer closes the current file when either of the following conditions holds true, whichever occurs first:
• The current file size exceeds a certain specified size
• A certain specified time limit is exceeded after the file has been opened
When the current file is closed, a new packet dump file is opened for saving the subsequently captured packets. At the same time, the Packet Communicator is informed about the newly closed file.
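The rotation condition above (close on size limit or time limit, whichever comes first) can be sketched as follows; the 10 MB and two-minute limits are illustrative assumptions matching the "certain specified limit" and the two-minute interval mentioned earlier.

```python
import time

MAX_FILE_SIZE = 10 * 1024 * 1024   # assumed size limit (10 MB)
MAX_FILE_AGE = 120.0               # assumed time limit (two minutes)

def should_rotate(current_size, opened_at, now=None):
    """Return True when the Packet Writer must close the current dump file."""
    now = time.time() if now is None else now
    return current_size > MAX_FILE_SIZE or (now - opened_at) > MAX_FILE_AGE

# A small file that has been open for three minutes rotates on time:
print(should_rotate(1024, opened_at=0.0, now=180.0))             # True
# A large file rotates on size even if freshly opened:
print(should_rotate(20 * 1024 * 1024, opened_at=0.0, now=1.0))   # True
# A small, recently opened file keeps accumulating packets:
print(should_rotate(1024, opened_at=0.0, now=60.0))              # False
```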
Packet Communicator (203)
The Packet Communicator receives the details of the most recently closed packet dump file from the Packet Writer. The Packet Communicator prepares a message for the Analyzer. This message indicates that the packet dump file is ready for analysis. This message is then queued into the in-memory First In First Out queue.
FIGURE 3 is a detailed block diagram of the Packet Analyzer which analyzes the packets and prints the transactions and statistics.
The Packet dump Reader (301)
The Packet dump reader dequeues the packet dump file message from the in-memory First in First Out queue. It determines the location of the next packet dump file from the message. The packet dump file is read packet by packet. Each packet is then passed on to the Connection State Manager one after the other. After all the packets have been read, the next message is dequeued from the queue.
Connection State Manager (302)
The Connection State Manager maintains a hash table of open TCP connections, sensed on the network, of the application being monitored. Every entry in this hash table is a connection data structure which contains the state variables of the connection. For every packet supplied by the Packet Dump Reader, the Connection State Manager retrieves the packet header information. Based on this information, the corresponding connection data is retrieved from the hash table. If there is no matching connection in the hash table for the current packet, a new connection data structure is created and inserted in the hash table based on the current packet header information. A new connection structure is created under the following conditions:
• The current packet is a fresh connection request packet from the client.
• The current packet is a non-zero TCP payload packet destined to the application server.
The connection data structure, either retrieved from the hash table or newly created, is then passed on to the State Machine Event Handler.
State Machine Event Handler (303)
The State Machine Event Handler takes actions based on the current state of the TCP connection, as accessed from the connection data structure, and the type of the current packet. A connection is classified by the event handler into one of the following main states:
• Connection Establishment State - In this state the connection is going through the three-way handshake required for opening a TCP connection.
• Requesting State - In this state the client is sending packets of the request message to the server.
• Responding State - In this state the server is sending packets of the response message to the client.
The logic used by the State Machine Event Handler to identify a request or response packet is as follows:
• Any TCP packet with a non-zero payload destined to the server is considered to be part of the request message.
• Any TCP packet with a non-zero payload originating from the server is considered to be part of the response message.
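The two classification rules above reduce to a short function of the packet's direction and payload size. A minimal sketch, with assumed endpoint values:

```python
SERVER = ("10.0.0.5", 8080)  # assumed (IP, port) of the monitored server

def classify(src, dst, payload_len):
    """Classify a TCP packet relative to the monitored server.

    Returns 'request', 'response', or None for packets with no payload
    (pure ACKs and handshake segments carry no application data).
    """
    if payload_len == 0:
        return None
    if dst == SERVER:
        return "request"    # non-zero payload destined to the server
    if src == SERVER:
        return "response"   # non-zero payload originating from the server
    return None

client = ("10.0.0.9", 41000)
print(classify(client, SERVER, 512))   # request
print(classify(SERVER, client, 1460))  # response
print(classify(client, SERVER, 0))     # None (a bare ACK)
```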
Action is taken based on the combination of state and event. The relevant timestamps are recorded in a transaction data structure which is part of the current connection data structure. A transaction is defined as the combination of a request message and the following response message on the same TCP connection. The following details are recorded in the transaction data structure:
• Transaction Start Time: Timestamp of the first request packet.
• Request Send Time: Timestamp of last request packet - Timestamp of first request packet.
• Server Reaction Time: Timestamp of first reply packet - Timestamp of last request packet.
• Server Response Time: Timestamp of last reply packet - Timestamp of last request packet.
• Application Bytes In: The sum of payload sizes of all request packets belonging to the transaction
• Application Bytes Out: The sum of payload sizes of all response packets belonging to the transaction
• Round Trip Time: Timestamp of the TCP acknowledgement of the last outstanding response packet from the client - Timestamp of the last relevant response packet.
• Average round trip time: Average of round trip times measured in the responding state
• Standard deviation of round trip time: Standard deviation of round trip times measured in the responding state.
• Network overhead: Network overhead is the sum of times elapsed during data transfer phases in the responding state. A data transfer phase is identified as time during which there is at least one response packet unacknowledged by the client. A data transfer phase is exited when the client acknowledges all outstanding response packets.
• Client Network Delay: Client Network delay is the sum of times elapsed during idle phases in the Requesting state. An idle phase is identified as the time during which there is no request packet unacknowledged by the server. Another term called Server delay during request upload can be defined as Request send time - Client Network Delay.
• Client window zero: Client window zero is the sum of times elapsed during all client delay phases of the transaction in the responding state. A client delay phase is identified as the time starting from when a client sends a TCP window update packet with a window size of zero to when the client sends a TCP window update packet with a non zero receive window size.
• Unacknowledged Bytes In: The sum of request packet payload sizes that have not been acknowledged by the server when entering the responding state from the requesting state.
• Unacknowledged Bytes Out: is the sum of response packet payload sizes that have not been acknowledged by the client when entering the requesting state from the responding state.
When a transaction is complete the State machine event handler passes the transaction data
structure to the Transaction writer for printing.
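The first four timing fields of the transaction data structure above are simple differences of packet timestamps. A minimal sketch, assuming each captured packet is reduced to a (timestamp, kind) pair and that the timestamps shown are hypothetical:

```python
def transaction_times(packets):
    """Compute timing fields from a time-ordered list of (timestamp, kind)
    tuples, where kind is 'request' or 'reply' and timestamps are in
    seconds. Definitions follow the transaction data structure above."""
    req = [t for t, k in packets if k == "request"]
    rep = [t for t, k in packets if k == "reply"]
    return {
        "transaction_start_time": req[0],           # first request packet
        "request_send_time": req[-1] - req[0],      # last req - first req
        "server_reaction_time": rep[0] - req[-1],   # first reply - last req
        "server_response_time": rep[-1] - req[-1],  # last reply - last req
    }

# Two request packets followed by three reply packets (assumed times):
pkts = [(0.00, "request"), (0.02, "request"),
        (0.10, "reply"), (0.12, "reply"), (0.15, "reply")]
t = transaction_times(pkts)
print(t["request_send_time"])    # 0.02
print(t["server_reaction_time"])
print(t["server_response_time"])
```

Note that, per these definitions, a single-packet request has a request send time of zero, and a single-packet response makes the server response time equal to the server reaction time.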
The event handler also collects aggregate data for consecutive time intervals such as:
• Total No of unique active connections - An active connection is one in which there has been a TCP packet with non zero payload captured for that connection in the relevant interval.
• No of Requests - Total number of incoming requests during the relevant interval.
• No of replies - Total number of replies sent during the relevant interval.
• Total Bytes In - Sum of sizes of all the packets destined to the application server during the relevant interval across all connections
• Total Bytes out - Sum of sizes of all the packets originating from the application server during the relevant interval across all connections
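The per-interval aggregation above can be sketched as a small accumulator that is updated once per captured packet; the class and field names are illustrative, and request/reply counting is simplified to one count per classified packet.

```python
class IntervalStats:
    """Aggregate statistics for one reporting interval (a sketch)."""
    def __init__(self):
        self.active_connections = set()  # connection 4-tuples seen active
        self.requests = 0
        self.replies = 0
        self.bytes_in = 0
        self.bytes_out = 0

    def on_packet(self, conn_id, kind, size, payload_len):
        if payload_len > 0:
            # An active connection is one with a non-zero-payload packet.
            self.active_connections.add(conn_id)
        if kind == "request":
            self.requests += 1
            self.bytes_in += size   # destined to the application server
        elif kind == "reply":
            self.replies += 1
            self.bytes_out += size  # originating from the server

stats = IntervalStats()
stats.on_packet(("c1", 41000), "request", 560, 500)
stats.on_packet(("c1", 41000), "reply", 1500, 1460)
stats.on_packet(("c2", 41001), "request", 560, 500)
print(len(stats.active_connections), stats.requests, stats.replies)  # 2 2 1
```

At the end of each interval the accumulator would be flushed as one row of the Statistics file (105) and reset for the next interval.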
At the end of the interval, the State Machine Event Handler passes the collected aggregate interval data to the Transaction Writer. The working of the State Machine Event Handler (303) can be shown by means of the following flow chart:
Connection Hash Table (304)
The connection hash table stores the connection data structures related to every open TCP connection on the application server being monitored. The entries are stored in a hash table for optimal searching and insertion of connection data structures. This table is managed by the Connection State Manager. The fields used by the Connection State Manager to index the connection hash table are:
• Client IP address
• Client Port
• Server IP address
• Server Port
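The four fields above form the lookup key. A minimal sketch of the hash table, using a dictionary keyed by the (client IP, client port, server IP, server port) 4-tuple; the `ConnectionState` fields are illustrative assumptions:

```python
class ConnectionState:
    """Per-connection state variables (illustrative fields)."""
    def __init__(self):
        self.state = "ESTABLISHING"   # current state machine state
        self.transactions = []        # transactions seen on this connection

connections = {}  # the connection hash table

def lookup_or_create(client_ip, client_port, server_ip, server_port):
    """Retrieve the connection state for a packet, creating it if absent."""
    key = (client_ip, client_port, server_ip, server_port)
    if key not in connections:
        connections[key] = ConnectionState()  # new connection sensed
    return connections[key]

c1 = lookup_or_create("10.0.0.9", 41000, "10.0.0.5", 8080)
c2 = lookup_or_create("10.0.0.9", 41000, "10.0.0.5", 8080)
print(c1 is c2)  # True - the same 4-tuple maps to the same connection state
```

Hashing the 4-tuple gives (amortized) constant-time lookup and insertion, which matters at the high packet rates the sniffer handles.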
Transaction Writer (305)
The Transaction Writer has the following functions:
• Append the transaction data passed by the State Machine Event Handler to the currently opened transaction data output file. This output file is closed, and a new one opened for writing, when the Packet Dump Reader finishes reading the current packet dump file and opens the next packet dump file to be read. In other words, there is one transaction data output file for every packet dump file.
• Append the aggregate statistics for the current interval, passed by the State Machine Event Handler, to the application statistics output file.
The time factor in the server process can be ascertained as under:
One Client Server Transaction
A transaction is defined as a request message combined with the following response message on the same TCP connection. The request and response messages themselves may consist of several packets. In the example shown in the figure above, the request message consists of two request packets sent by the client to the server, and the response message consists of six response packets from the server to the client. In the figure it is implied that all the packets belong to the same TCP connection between the client and the server. Any TCP packet with a payload greater than zero is classified as a
1. Request packet if the packet originated from the client and is destined to the server.
2. Response packet if the packet originated from the server and is destined to the client after at least 1 request packet has been observed on the TCP connection.
Request Send Time
Request Send time is the time taken by the client to send a request message to the server. It is measured as the time elapsed starting from the first request packet being received at the server to the time at which the last request packet arrived at the server on the same TCP connection. By this definition the request send time for a request message with only one packet is zero. In the above figure the width of the coloured box denotes the request send time.
Percentage Client Network Delay
Percentage Client Network Delay - We shall first define the server wait phase. A server wait phase occurs while a connection is in the requesting state. The connection is said to be in the wait phase when the server has acknowledged all the request packets it has received from the client. The width of one coloured box in the above example diagram denotes the time spent by the server in one server wait phase. The sum of the times spent in the server wait phases of one requesting state is called the client network delay. This is illustrated in the diagram above. The percentage client network delay is the percentage contribution of the client network delay to the request send time. The server contribution to the request send time can be defined as the request send time minus the client network delay.
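The two derived quantities above are straightforward arithmetic on the measured times. A short sketch with assumed figures (a 40 ms request send time, of which 30 ms was spent in server wait phases):

```python
def percentage_client_network_delay(request_send_time, client_network_delay):
    """Percentage contribution of client network delay to request send time.

    Returns 0 for single-packet requests, whose request send time is zero
    by definition.
    """
    if request_send_time == 0:
        return 0.0
    return 100.0 * client_network_delay / request_send_time

def server_contribution(request_send_time, client_network_delay):
    """Server contribution to the request send time, as defined above."""
    return request_send_time - client_network_delay

print(percentage_client_network_delay(0.040, 0.030))  # about 75 percent
print(server_contribution(0.040, 0.030))              # about 10 ms
```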
Server Reaction Time
Server Reaction Time is the time taken by the server to react to the client's request message. It is measured as the time elapsed starting from the last request packet being received at the server to the time at which the first response packet was sent by the server on the same TCP connection. In the above figure the width of the coloured box denotes the server reaction time.
Server Response Time
Server Response Time is the time taken by the server to respond to the client's request message. It is measured as the time elapsed starting from the last request packet being received at the server to the time at which the last response packet was sent by the server on the same TCP connection. By this definition, if there is only one packet in the response message then the server response time is equal to the server reaction time. In the above figure the width of the coloured box denotes the server response time.
Network Overhead Time
Network overhead is the sum of the times elapsed during data transfer phases in the responding state. A data transfer phase is identified as a period during which there is at least one response packet unacknowledged by the client; the phase is exited when the client acknowledges all outstanding response packets. The time spent in one data transfer phase is denoted by the width of one coloured box in the above figure, and the network overhead is illustrated as the sum of the widths of all coloured boxes.
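Under the simplifying assumption that the trace can be reduced to "send" events (the server emits a response packet) and "ack_all" events (the client has acknowledged every outstanding response packet), the phase bookkeeping might look like this sketch; all names are hypothetical:

```python
# Hedged sketch: a data transfer phase opens at the first unacknowledged
# response packet and closes when the client has acknowledged all
# outstanding response packets.

def network_overhead(events):
    """events: iterable of (time, kind), kind in {"send", "ack_all"}."""
    overhead, phase_start, outstanding = 0.0, None, 0
    for t, kind in sorted(events):
        if kind == "send":
            if outstanding == 0:
                phase_start = t           # phase opens
            outstanding += 1
        elif kind == "ack_all" and outstanding > 0:
            overhead += t - phase_start   # phase closes
            outstanding = 0
    return overhead

events = [(0.00, "send"), (0.01, "send"), (0.05, "ack_all"),
          (0.10, "send"), (0.13, "ack_all")]
print(network_overhead(events))  # about 0.08
```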
Client Window Zero Delay
Client window zero delay is the sum of the times elapsed during all client delay phases of the transaction in the responding state. A client delay phase is identified as the time from when the client sends a TCP window update packet with a window size of zero to when the client sends a TCP window update packet with a non-zero receive window size. It is denoted by the width of the coloured box in the figure above.
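This can be sketched directly from the client's advertised receive window sizes; the list of (time, window size) pairs and the function name below are hypothetical:

```python
# Illustrative sketch: a client delay phase opens when the client
# advertises a zero receive window and closes when it next advertises a
# non-zero window. `window_updates` is a chronological list of
# (time, window_size) pairs taken from the client's TCP window updates.

def client_window_zero_delay(window_updates):
    delay, zero_start = 0.0, None
    for t, win in window_updates:
        if win == 0 and zero_start is None:
            zero_start = t            # zero-window phase opens
        elif win > 0 and zero_start is not None:
            delay += t - zero_start   # window reopens, phase closes
            zero_start = None
    return delay

print(client_window_zero_delay([(0.0, 0), (0.2, 4096)]))  # 0.2
print(client_window_zero_delay([(0.0, 8192)]))            # 0.0
```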
STATEMENT OF THE INVENTION
Therefore, according to the present invention, a non intrusive method to monitor on-line transaction processing systems comprises identifying application transactions based on packet data captured on the network, determining the application transaction response time, segregating the said transaction response time into processing time and network response time, determining the time spent by the client in sending a request message to the server, and segregating the said time into client and network time and server delay. Determining the said network response time consists of calculating the precise time taken up by the application server for processing during the sending of the response.
The system of the present invention comprises a packet dumper and an application performance analyzer, capable of identifying application transactions from packet data. The said packet dumper consists of a packet sniffer to receive data packets, a packet writer to store the data packets and a packet communicator to communicate with the said analyzer. The said analyzer consists of a Packet Dump Reader, a Connection State Manager, a State Machine Event Handler, a Connection Hash Table and a Transaction Writer. The said State Machine Event Handler identifies application transactions based on packet data captured on the network, determines the application transaction response time and segregates it into processing time and network response time, determines the network response time accurately, and determines the processing time of the application server. The application of the system is any on-line transaction processing TCP/IP application.
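A structural sketch of the per-connection bookkeeping described above may help fix ideas: packets are keyed by their TCP 4-tuple in a connection hash table, and a state machine handler accumulates per-transaction timings. The specification names these components but not their interfaces, so every class, method and field name here is hypothetical.

```python
# Hypothetical sketch of the analyzer's per-connection state machine.
# All names are illustrative; only the component roles come from the text.

class StateMachineEventHandler:
    """Tracks one connection's request/response packets and timings."""
    def __init__(self):
        self.first_req = self.last_req = self.last_resp = None

    def on_request_packet(self, t):
        if self.first_req is None:
            self.first_req = t    # transaction begins
        self.last_req = t

    def on_response_packet(self, t):
        self.last_resp = t

    def transaction_response_time(self):
        # Illustrative: first request packet in to last response packet out.
        return self.last_resp - self.first_req

# The Connection Hash Table maps a (src_ip, src_port, dst_ip, dst_port)
# 4-tuple to its handler.
connections = {}
key = ("10.0.0.1", 40000, "10.0.0.2", 80)
handler = connections.setdefault(key, StateMachineEventHandler())
handler.on_request_packet(1.00)
handler.on_request_packet(1.05)
handler.on_response_packet(1.30)
print(handler.transaction_response_time())  # about 0.30
```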
The embodiments of the invention as described above and the methods disclosed herein will suggest further modifications and alterations to those skilled in the art. Such further modifications and alterations may be made without departing from the spirit and scope of the invention, which is defined by the scope of the following claims.
We Claim:
1. A non intrusive method to monitor on-line transaction processing systems comprising identifying application transactions based on packet data captured on the network, determining the application transaction response time, segregating the said transaction response time into processing time and network response time, determining the time spent by the client in sending a request message to the server, segregating the said time into client and network time and server delay.
2. A method as per claim 1, wherein determining the said network response time consists of calculating the precise time taken up by the application server for processing during the sending of the response.
3. A non intrusive system to perform the method of claim 1 comprising a packet dumper and an application performance analyzer, capable of identifying application transactions from packet data.
4. A system as claimed in claim 3, wherein said packet dumper consists of a packet sniffer to receive data packets, packet writer to store the data packets and a packet communicator to communicate with the said analyzer.
5. A system as claimed in claim 3, wherein said analyzer consists of a Packet dump reader, a Connection State Manager, a State Machine Event handler, a Connection Hash Table and a Transaction writer.
6. A system as claimed in claim 5, wherein said State Machine Event Handler identifies application transactions based on packet data captured on the network.
7. A system as claimed in claim 5, wherein said State Machine Event Handler determines application transaction response time and segregates it into processing time and network response time.
8. A system as claimed in claim 5, wherein said State Machine Event Handler determines network response time accurately.
9. A system as claimed in claim 5, wherein said State Machine Event Handler determines processing time of the application server.
10. A system as claimed in claim 3, wherein the application is any on line transaction processing TCP/IP application.
11. A method as per claims 1 and 2 substantially such as herein described with reference to accompanying drawings.
12. A system as per claims 2 to 10 substantially such as herein described with reference to accompanying drawings.
Dated this 9th day of December, 2006
Anand Deshpande
For DSK LEGAL
(Agent for Applicant)