Abstract: System(s), method(s) and a computer program product to facilitate performance prediction of a multi-threaded application in presence of resource bottlenecks are disclosed. One or more queuing networks are represented for resources employed to run the multi-threaded application. The resources comprise software and hardware resources, and the queuing networks comprise a hardware queuing network and a software queuing network. A performance metrics comprising one or more parameters is computed by using an iterative technique with a predetermined value of service demand. The one or more parameters, in the form of a throughput value and a response time, are determined, thereby determining the performance of the multi-threaded application.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEM AND METHOD FACILITATING PERFORMANCE PREDICTION OF MULTI-THREADED APPLICATION IN PRESENCE OF RESOURCE
BOTTLENECKS
APPLICANT:
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application does not claim priority from any patent application.
TECHNICAL FIELD
[002] The present disclosure in general relates to a method and system for predicting
performance of a multi-threaded application. More particularly, the present disclosure
relates to prediction of performance of the multi-threaded application in presence of
resource bottlenecks.
BACKGROUND
[003] In a typical multi-tier enterprise system, a software application is usually accessed
by a large number of users. Due to the large number of users, the probability of resource bottlenecks has increased drastically. Presence of resource bottlenecks typically hampers performance of the software application. The bottleneck may occur either due to software resources or hardware resources. In order to increase the scalability of the multi-tier enterprise system, the resource bottlenecks should be detected while the software application is being tested.
[004] Most multi-tier enterprise systems have employed various methods to
identify resource bottlenecks, as the resource bottlenecks may limit the overall performance of the multi-tier enterprise system. Resource bottlenecks are identified only if they occur explicitly during performance testing itself or at a very late stage in a production environment. Discrete event simulation modeling is one of the well-known approaches used for predicting performance of the software application. However, the development of such a simulation model is a time-consuming process.
SUMMARY OF THE INVENTION
[005] This summary is provided to introduce aspects related to system(s) and method(s)
facilitating performance prediction of a multi-threaded application in presence of resource bottlenecks and the aspects are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[006] The present disclosure relates to a system to facilitate performance prediction of a
multi-threaded software application in presence of resource bottlenecks. The system comprises a processor and a memory coupled to the processor. The processor is capable of executing a plurality of modules stored in the memory. The plurality of modules comprises a representation module configured to represent one or more queuing networks for resources employed to run the multi-threaded software application. The resources comprise hardware resources and software resources. The queuing networks are one of a hardware queuing network or a software queuing network. The queuing networks are used to detect a concurrency level at which a resource bottleneck is encountered while accessing the multi-threaded software application. The memory stores a computation module configured to compute a performance metrics for the multi-threaded software application. The performance metrics comprises one or more parameters. The computation module is configured to use an iterative technique with a pre-determined value of service demand to develop a relationship between the software queuing network and the hardware queuing network. The computation module is configured to determine the one or more parameters in a form of a throughput value and a response time value for the multi-threaded software application, thereby determining the performance of the multi-threaded software application.
[007] The present disclosure also relates to a method to facilitate performance prediction of a multi-threaded software application in presence of resource bottlenecks. The method comprises representing one or more queuing networks of resources employed to run the multi-threaded software application. The resources comprise hardware resources and software resources. The queuing networks are one of a hardware queuing network or a software queuing network. The queuing networks help to detect a concurrency level at which a resource bottleneck is encountered while accessing the multi-threaded software application. The method further provides computing a performance metrics for the multi-threaded software application. The performance metrics comprises one or more parameters. The computing comprises using an iterative technique with a predetermined value of service demand to develop a relationship between the software queuing network and the hardware queuing network, and determining the one or more parameters in a form of a throughput value and a response time value for the multi-threaded software application, thereby determining the performance of the multi-threaded software application.
[008] The present disclosure also relates to a computer program product having
embodied thereon a computer program to facilitate performance prediction of a multi-threaded software application in presence of resource bottlenecks. The computer program product comprises a program code for representing one or more queuing networks for resources employed to run the multi-threaded software application. The resources comprise hardware resources and software resources, and the queuing networks are one of a hardware queuing network or a software queuing network. The queuing networks are used to detect a concurrency level at which a resource bottleneck is encountered while accessing the multi-threaded software application. The computer program product comprises a program code for computing a performance metrics for the multi-threaded software application. The performance metrics comprises one or more parameters. The program code for computing comprises a program code for using an iterative technique with a predetermined value of service demand to develop a relationship between the software queuing network and the hardware queuing network; and a program code for analyzing the performance metrics to check for an occurrence of the bottlenecks. The computer program product comprises a program code for determining the one or more parameters in a form of a throughput value and a response time value for the multi-threaded software application, thereby determining the performance of the multi-threaded software application.
BRIEF DESCRIPTION OF DRAWINGS
[009] The detailed description is described with reference to the accompanying figures.
In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0010] FIG 1 illustrates a network implementation of a system facilitating performance prediction of a multi-threaded application in presence of resource bottlenecks, in accordance with an embodiment of the present subject matter.
[0011] FIG 2 illustrates the system facilitating performance prediction of a multithreaded application in presence of resource bottlenecks, in accordance with an embodiment of the present subject matter.
[0012] FIG 3 illustrates a method facilitating performance prediction of a multi-threaded application in presence of resource bottlenecks, in accordance with an embodiment of the present subject matter.
[0013] FIG 4 illustrates a performance metrics in a case wherein the critical and non-critical sections are of the same size, in accordance with an embodiment of the present subject matter.
[0014] FIG 5 illustrates a performance metrics in a case wherein the service demand of the non-critical section is more than twice the service demand of the critical section, in accordance with an exemplary embodiment of the present subject matter.
[0015] FIG 6 illustrates a performance metrics for multi class requests in accordance with an exemplary embodiment of the present subject matter.
[0016] FIG 7 illustrates a performance metrics for resource pooling in accordance with an exemplary embodiment of the present subject matter.
DETAILED DESCRIPTION
[0017] While aspects of described system and method to facilitate performance prediction of a multi-threaded application in presence of resource bottlenecks may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system.
[0018] Referring now to Figure 1, a network implementation 100 of a system 102 to facilitate performance prediction of a multi-threaded application in presence of resource bottlenecks is shown. Multiple queuing networks are represented by the system 102. The multiple queuing networks comprise a hardware queuing network and a software queuing network. The queuing networks are used to detect a concurrency level at which resource bottlenecks are encountered while accessing the multi-threaded application. By
using a pre-determined service demand value, a performance metrics is computed. The performance metrics comprises of one or more parameters. The one or more parameters predict the performance of the multi-threaded application.
[0019] Although the present subject matter is explained considering that the system 102 is implemented as an application on a server, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. In one implementation, the system 102 may be implemented in a cloud-based environment. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2... 104-N, collectively referred to as user 104 hereinafter, or applications residing on the user devices 104. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[0020] In one implementation the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0021] Referring now to Figure 2, the system 102 is illustrated in accordance with an embodiment of the present subject matter. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 208. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is
configured to fetch and execute computer-readable instructions stored in the memory 208.
[0022] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with a user directly or through the client devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0023] The memory 208 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 208 may include modules 210 and data 218.
[0024] The modules 210 include routines, programs, objects, components, data structures, etc., which perform particular tasks, functions or implement particular abstract data types. In one implementation, the modules 210 may include a representation module 212 and a computation module 214. The other modules 216 may include programs or coded instructions that supplement applications and functions of the system 102.
[0025] The data 218, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 210. The data 218 may also include a database 220, and other data 224. The other data 224 may include data generated as a result of the execution of one or more modules in the other modules 216.
[0026] The present disclosure relates to a system(s) and method(s) facilitating performance prediction of a multi-threaded application in presence of resource bottlenecks. A software queuing network and a hardware queuing network are used to detect the presence of any bottleneck while accessing the application. A predetermined service demand value is used to compute a performance metrics. One or more parameters of the performance metrics, like the
throughput of the multi-threaded application, response time and utilization of individual resources, predict the performance of the multi-threaded application.
[0027] The queuing networks so represented by the representation module 112 refer to analytical models. Analytical models are mathematical approximations of a real world system and may be used to predict certain behaviors of one or more systems being modeled. The queuing networks (or queuing models) used here predict behavior of the multi-threaded application in terms of different performance related measures (performance metrics). These measures may include but are not limited to an expected amount of waiting time. The expected waiting time refers to the time that a given object will spend within a queue before being processed. The performance related measures further include an expected number of objects within a queue at a particular time, a probability that any queue will be empty, a time to service for a particular type of object, or the like. Analytical models (queuing models or queuing networks) are frequently used to predict whether a particular system (here the multi-threaded application) will be able to meet established quality of service metrics, such as response time.
[0028] The present system and method predicts performance of a multi-threaded application being accessed by multiple users. The multi-threaded application may be divided into a critical section and a non-critical section. A section of code that may be executed by only one of many concurrent threads is referred to as the critical section. In general, access to a critical section is controlled by a lock or semaphore. The remaining section of the multi-threaded application that may be executed simultaneously by any number of concurrent threads is referred to as the non-critical section.
[0029] The resource bottlenecks may occur either due to hardware resources or software resources. The system and method facilitate finding the number of users accessing the multi-threaded application when the first bottleneck is encountered. The first bottleneck may occur either due to a hardware resource limitation or a software resource constraint. The multi-threaded applications may have synchronous or asynchronous message communications.
[0030] The system 102 takes following presumptions while predicting performance of the multi-threaded application:
- The system 102 is in steady state. Analysis of access of the multi-threaded application does not hold good for any transient behavior of the system.
- All waiting for a software resource is implemented only through explicit synchronization constructs.
- The multi-threaded application may have a number of critical sections but it may have only one non-critical section. The non-critical section represents entire portion of the code without any critical sections.
[0031] Non-limiting examples of the multi-threaded application comprise applications written in programming languages with explicit synchronization constructs, such as C, C++ and Java. In such applications, it is possible to know the exact portion of the code that is synchronized.
[0032] The memory 208 stores the representation module 112 configured to represent queuing networks for resources employed to run the multi-threaded application. The queuing networks are formed to model contention for software and hardware resources. The queuing networks are formed to detect a concurrency level at which a resource bottleneck (first resource bottleneck) is encountered while accessing the multi-threaded application.
[0033] The representation module 112 represents the hardware queuing network and the software queuing network. The hardware queuing network represents all hardware resources. The hardware resources comprise a CPU (Central Processing Unit), a memory, a disk or a combination thereof. The software queuing network represents software modules corresponding to critical and non-critical sections. The software resources comprise software modules.
[0034] In the queuing networks, users are represented as customers and referred to as threads. The software queuing network consists of two types of software resources: delay resources and queuing resources. The delay resource corresponds to the non-critical section (NCS) code. In the non-critical section, there is no contention and no queuing happens before executing the software module. The queuing resources correspond to critical sections (CS), which are delimited by synchronization constructs. The multi-threaded application may have any number of critical sections. The software queuing network (SQN) can consist of a non-critical section and a number of critical sections.
[0035] While a thread (i.e. the user) is using a software resource, it also uses physical resources such as CPUs and disks. The queuing network associated with the physical resources is called hardware queuing network (HQN). Thus users in the HQN are users that are using the physical resources because of the execution of software modules.
[0036] Figure 4 illustrates a software queuing network consisting of a non-critical section and two critical section components. All software modules are represented using rectangles, whereas all servers are represented using circles. In the HQN, users can access one CPU server and any of the two disk servers. The two layers of queuing networks lead to the following important observations:
1. The time spent in the non-critical and critical sections depends on the contention at the physical resources, e.g., CPU and disk.
2. The number of users in the hardware queuing network, contending for the physical resources, is equal to the number of concurrent threads that are not blocked waiting to enter a critical section. Since blocked threads are sleeping, they are not present in the HQN queue.
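The two-layer structure described above can be sketched as plain data structures. The sketch below is illustrative only and not part of the disclosed system; the class and field names (Station, QueuingNetwork, kind) and all demand values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Station:
    """A service station in a queuing network (hypothetical sketch)."""
    name: str
    demand: float           # service demand per visit, in seconds
    kind: str = "queueing"  # "queueing" (contention) or "delay" (no contention)

@dataclass
class QueuingNetwork:
    stations: list

# Software queuing network (SQN): one delay resource for the
# non-critical section, and a queuing resource per critical section.
sqn = QueuingNetwork(stations=[
    Station("NCS", demand=0.010, kind="delay"),
    Station("CS1", demand=0.005),
    Station("CS2", demand=0.004),
])

# Hardware queuing network (HQN): physical resources used while
# the software modules execute (one CPU server, two disk servers).
hqn = QueuingNetwork(stations=[
    Station("CPU", demand=0.012),
    Station("Disk1", demand=0.003),
    Station("Disk2", demand=0.003),
])
```

Only the threads that are not blocked at a critical section of the SQN would circulate among the HQN stations, which is the coupling the iterative technique below exploits.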
[0037] Still referring to figure 4, in order to reduce complexity, presumptions have been made. By way of a non limiting example, hardware resources CPU and disk are considered with the assumption that within critical sections, only computation and IO operations are performed and no message communication takes place.
[0038] The computation module 112 is configured to compute a performance metrics for the multi-threaded application by solving the software queuing network and the hardware queuing network by using an iterative technique with a predetermined service demand. The performance metrics comprises one or more parameters. The one or more parameters predict performance of the multi-threaded application in presence of resource bottlenecks.
[0039] The computation module 112 uses the predetermined service demand to develop a relationship between the software queuing network and the hardware queuing network.
[0040] The computation module 112 executes a mean value analysis theorem (methodology) as the iterative technique on the predetermined service demand. The mean value analysis theorem is used to derive performance characteristics in the form of the performance metrics.
[0041] The computation module 112 executes the mean value analysis theorem to estimate the performance of the multi-threaded application in presence of N concurrent users.
[0042] While solving the queuing networks, the following presumptions have been made:
1. Pre-determined service demand of all the hardware resources at each of the tiers of a multi-tiered application is captured. The predetermined service demand of a hardware resource (e.g., CPU) for a software module refers to the amount of CPU time required to execute the specific module.
2. The pre-determined service demand of all the synchronization events is captured. This may be achieved through instrumentation of the synchronization constructs used by the application. The synchronization events include synchronized methods, blocks of code with the synchronized keyword, barriers, acquiring followed by releasing of a lock, and condition variables. A method belonging to a class may be declared synchronized. Similarly, a block of code may be prefixed by the keyword synchronized to mark it as a synchronized section. For a barrier, when a thread arrives at the barrier, it waits till all threads arrive at that point of synchronization. In case of a condition variable, a thread waits until another thread calls signal on the same condition variable.
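The critical/non-critical split described above can be illustrated with a short multi-threaded program. The sketch below uses Python's threading module rather than the Java constructs named in the disclosure; the function name, iteration counts and workload are purely illustrative.

```python
import threading

lock = threading.Lock()  # guards the critical section
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        # Non-critical section: may run concurrently on any thread
        # (and, on a multi-core machine, on any core).
        _ = sum(i * i for i in range(100))
        # Critical section: only one thread at a time may hold the lock.
        # Other threads block here; while blocked (sleeping) they are
        # absent from the hardware queuing network.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: the lock serializes the increments
```

Instrumenting the time spent inside and outside the `with lock:` block would yield the pre-determined service demands of the critical and non-critical sections that the method requires.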
[0043] It is further assumed that there are only two modules in the software layer, a critical section and a non-critical section. The CPU demands of these sections are denoted by D_CS,CPU and D_NCS,CPU respectively. The total CPU service demand is obtained by adding these two values. This demand is used in the hardware queuing network:

D_CPU = D_CS,CPU + D_NCS,CPU (1a)

[0044] In a multi-core machine with C cores, a critical section may execute on only one core at a time, whereas the non-critical section may execute on any of the cores. Hence, the CPU demand for the hardware queuing network is modified as follows:

D_CPU = D_CS,CPU + D_NCS,CPU / C (1b)
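Equations (1a) and (1b) can be composed with a small helper. This is a minimal sketch; the function name and the demand values are hypothetical, chosen only to show how the per-core division affects the total CPU demand.

```python
def total_cpu_demand(d_cs_cpu, d_ncs_cpu, cores=1):
    """Total CPU service demand for the hardware queuing network.

    With one core, Equation (1a): D_CPU = D_CS,CPU + D_NCS,CPU.
    With C cores, Equation (1b): the critical section still runs on only
    one core at a time, while the non-critical section may run on any
    core, so D_CPU = D_CS,CPU + D_NCS,CPU / C.
    """
    return d_cs_cpu + d_ncs_cpu / cores

# Hypothetical demands in seconds:
print(round(total_cpu_demand(0.004, 0.008), 6))           # 0.012 on one core
print(round(total_cpu_demand(0.004, 0.008, cores=4), 6))  # 0.006 on a quad-core
```

Note how adding cores shrinks only the non-critical contribution; the critical-section demand is irreducible, which is exactly why it eventually becomes the bottleneck.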
[0045] Further, during the execution in the critical section, a thread may also access the disk for read and write operations. Thus, the demand for the critical section and the non-critical section includes both the demand due to the CPU and the disk. The total resource demand for the critical section is denoted by D_CS and consists of the demand due to the CPU and the disk:

D_CS = D_CS,CPU + D_CS,Disk (2)

[0046] A similar relationship exists for the service demand of the non-critical section. The computation module 112 also considers a residence time. The residence time is the total time spent by a thread at a physical resource and is denoted by R_CPU (for the CPU) or R_Disk (for the disk).
[0047] The steps performed by the computation module while executing the mean value analysis theorem are explained below:
[0048] The computation module 112 is configured to use the pre-determined service demands of the critical section and the non-critical section, saving the initial values as D^i_CS,CPU and D^i_NCS,CPU. The number of blocked threads at the critical section is denoted by B^i_CS and initialized to zero.
[0049] The computation module 112 is configured to solve the software queuing network with the pre-determined service demands of all critical sections and non-critical sections. The number of customers (users or threads) in the software layer is the same as the number of users in the access system.
[0050] The computation module 112 is configured to obtain the number of threads blocked at the critical section, B_CS. This value is retained for comparison with the number of blocked threads in the next iteration.
[0051] In the next step, the number of customers in the hardware queuing network is taken as the number of threads which are not blocked at the critical section (N - B_CS). The hardware queuing network is solved to obtain the residence time at each of the hardware resources, such as R_CPU or R_Disk.
[0052] The computation module 112 is configured to update the pre-determined service demands of the software modules to reflect the waiting for hardware resources as follows:

a. D_CS,CPU = R_CPU * D^i_CS,CPU / (D^i_CS,CPU + D^i_NCS,CPU)
b. D_NCS,CPU = R_CPU * D^i_NCS,CPU / (D^i_CS,CPU + D^i_NCS,CPU)

[0053] Thus, the service demand of the critical section is iteratively adjusted to account for resource contention at the hardware resources.
[0054] The computation module checks whether D_CS,CPU < D^i_CS,CPU. This step prevents the critical section demand from falling below the initial demand. If it is true, then the final solution is obtained.
[0055] The computation module 112 finds the difference between the number of blocked threads at the critical section in this iteration, B_CS, and in the previous iteration, B^i_CS. If the difference between B_CS and B^i_CS is less than a specific limit (or a predetermined value) (|B_CS - B^i_CS| < ε), where ε is a small number, or B^i_CS > B_CS, then the final solution is obtained; else B^i_CS is assigned the value B_CS and the process goes back to Step 2, i.e., the software queuing network is again solved with the pre-determined service demands of all critical sections and non-critical sections. Epsilon (the predetermined value) is a small decimal number less than 1. Epsilon indicates that the error in the number of blocked threads has stabilized or converged over a number of iterations. The value of epsilon may be taken as 0.005.
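The iterative steps above can be sketched in code. This is a simplified, illustrative reading of the procedure, assuming a single critical section, the CPU as the only hardware resource, and a standard exact mean value analysis (MVA) solver for each closed network; all function names and numeric values are hypothetical, and the treatment of blocked threads is an approximation rather than the disclosed implementation.

```python
def mva(demands, n_customers, think_time=0.0):
    """Exact single-class MVA for a closed network of queueing stations.
    Returns (throughput, per-station residence times)."""
    q = [0.0] * len(demands)  # mean queue length at each station
    resp = list(demands)
    x = 0.0
    for n in range(1, n_customers + 1):
        resp = [d * (1 + qi) for d, qi in zip(demands, q)]
        x = n / (think_time + sum(resp))
        q = [x * r for r in resp]
    return x, resp

def predict_throughput(d_cs_cpu, d_ncs_cpu, n_users, eps=0.005, max_iter=100):
    """Two-layer iteration over the software and hardware queuing networks."""
    d_cs_i, d_ncs_i = d_cs_cpu, d_ncs_cpu  # initial demands (step [0048])
    d_cs, d_ncs = d_cs_cpu, d_ncs_cpu      # software-layer demands, adjusted per pass
    b_prev = 0.0                           # blocked threads in the previous iteration
    x_sw = 0.0
    for _ in range(max_iter):
        # Step [0049]: solve the SQN; the critical section is a queueing
        # station and the non-critical section acts as a delay (think time).
        x_sw, resp = mva([d_cs], n_users, think_time=d_ncs)
        # Step [0050]: blocked threads = CS queue length minus the one in service.
        b_cs = max(0.0, x_sw * resp[0] - 1.0)
        # Step [0051]: solve the HQN with only the non-blocked threads.
        n_hw = max(1, round(n_users - b_cs))
        _, hw_resp = mva([d_cs_i + d_ncs_i], n_hw)
        r_cpu = hw_resp[0]
        # Step [0052]: spread the CPU residence time over the CS and NCS
        # in proportion to their initial demands.
        d_cs = r_cpu * d_cs_i / (d_cs_i + d_ncs_i)
        d_ncs = r_cpu * d_ncs_i / (d_cs_i + d_ncs_i)
        # Steps [0054]-[0055]: convergence checks.
        if d_cs < d_cs_i or abs(b_cs - b_prev) < eps or b_prev > b_cs:
            break
        b_prev = b_cs
    return x_sw

# Hypothetical demands of 4 ms each for the CS and NCS at 100 users.
x = predict_throughput(0.004, 0.004, 100)
```

Under these hypothetical demands the critical section alone would allow up to 250 iterations per second, but the shared CPU (total demand 8 ms per iteration) saturates first, so the sketch converges near the hardware bound of about 125 iterations per second.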
[0056] The computation module 112 computes a final throughput value X_EST of the software queuing network and the hardware queuing network. The average response time of the individual system is obtained using Little's law. Little's law says that if N is the number of customers in the system, X is the throughput, R is the response time and Z is the think-time used for requests, then the following relationship exists among these parameters:

[0057] N = X * (R + Z)

[0058] Then the utilization of the CPU, disk and any other hardware resources is estimated. For a multi-core machine, the average CPU utilization per core is obtained from the estimated throughput and the total CPU service demand as given in Equation (1a):

U_CPU = (D_CPU × X_EST) / C (3)
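Little's law and Equation (3) can be applied directly. The sketch below reuses the worked numbers appearing later in this description (3000 users, a throughput of 1327 iterations per second and an 8-core server); the think time of 1 second and the per-iteration CPU demand of 4 ms are hypothetical values chosen only for illustration.

```python
def response_time(n, x, z):
    """Little's law, N = X * (R + Z), rearranged for the response time R."""
    return n / x - z

def cpu_utilization_per_core(d_cpu, x_est, cores):
    """Equation (3): U_CPU = (D_CPU * X_EST) / C."""
    return d_cpu * x_est / cores

# 3000 users at 1327 iterations/s with a hypothetical 1 s think time:
r = response_time(3000, 1327.0, 1.0)             # about 1.26 s per iteration
u = cpu_utilization_per_core(0.004, 1327.0, 8)   # about 0.66 per core
```

Both quantities fall out of the same throughput estimate X_EST, so no additional measurement is needed once the iteration has converged.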
[0059] Given the pre-determined service demand of all critical sections and non-critical section, the above mean value analysis algorithm is used to obtain performance of the system until the throughput saturates.
[0060] The system 102 is configured to validate with multi-class requests, where requests from users belong to multiple classes and each class of requests has its own software module that may contain a critical section. In multi-class performance prediction, two threads may simultaneously be inside their critical sections. Further, critical sections belonging to different classes may have different service demands. Further, on a multi-core machine, non-critical sections of all the classes may run simultaneously on different cores.
[0061] The system 102 is configured to predict performance of the multi-threaded application where resource pooling is used. Resource pooling refers to cases where a shared resource is a pool of resources. Non limiting examples are DB connection pool or thread pool. Thread pool refers to a number of threads created by an application server to handle requests from concurrent users. At a time a single thread exclusively serves a user. Similarly Database connection pool refers to a cache of database connections created to execute commands on DB more efficiently.
[0062] The pool size indicates the maximum number of shared resources that may be used at a given time. Hence, the maximum number of threads that may use the resource is equal to pool size. While configuring the system for performance prediction of the multithreaded application for pooled resources, resource pooling is modeled as a multi-server in software queuing network and the resource demand corresponds to that of a critical section of a single thread.
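Modeling a pool as a multi-server station, as described above, can be sketched with a standard device. One common way to keep using a single-server MVA solver (Seidmann's approximation) splits an m-server station into a queueing station and a pure delay; the function name and demand value below are hypothetical, and this approximation is offered as one possible reading rather than the disclosed implementation.

```python
def pool_as_stations(d_cs, pool_size):
    """Approximate an m-server pool (e.g., a DB connection pool) for MVA.

    Seidmann's approximation replaces an m-server station of per-thread
    demand D with a single-server queueing station of demand D/m plus a
    pure delay station of demand D*(m-1)/m, so that standard single-server
    MVA still applies.
    """
    queueing_demand = d_cs / pool_size
    delay_demand = d_cs * (pool_size - 1) / pool_size
    return queueing_demand, delay_demand

# A hypothetical 20 ms critical section served by a pool of 10 connections:
q_d, delay_d = pool_as_stations(0.020, pool_size=10)
# queueing demand ~0.002 s, delay ~0.018 s
```

As the pool size grows, the contended portion shrinks toward zero and the pool behaves more and more like a pure delay, matching the intuition that a large enough pool stops being a bottleneck.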
[0063] The system 102 provides a solution for predicting the performance of the multi-threaded application in presence of resource bottlenecks by computing a performance metrics. The solution in terms of the performance metrics converges. The three conditions for which the solution provided by the system 102 may converge are:
1. The predetermined service demand of the critical section may not be less than the initial service demand before going through the iterative steps.
2. After a few initial iterations, the number of blocked threads may not reduce from the previous iterations.
3. The difference in the number of blocked threads between a new iteration and a previous iteration is less than a small number.
[0064] If these conditions are satisfied, the computation module 112 may not perform the iterative steps and may perform a final step of computing the performance metrics.
[0065] The system 102 is configured to predict performance of a multi-threaded Java program used as the multi-threaded application with critical sections. It is assumed that the Java program has a single critical section and a single non-critical section. The pre-determined service demand has been varied for each of the sections and the scalability of the multi-threaded application with the number of users is observed.
[0066] The results obtained for various server configurations are listed below:
Server category       Features
High range server     8-Core CPU 2.66 GHz Xeon with 1 MB L2 cache, 8 GB physical RAM
Middle range server   Quad-Core AMD Opteron CPU 2.19 GHz with 2 MB L2 cache, 4 GB RAM
Low range server      Intel(R) Core Duo CPU 2.33 GHz with 4 MB cache, 2 GB RAM

Table 1: Server categories for the sample application

The performance prediction involved software modules that make use of mainly the CPU as the hardware resource, which means the software modules perform computation and no disk or network activities are performed. Three different combinations of critical and non-critical sections are considered:
Scenario 1: The critical section and the non-critical section perform similar operations, such that their service demands (pre-determined service demands) are almost the same.
Scenario 2: The critical section performs twice the amount of operations as compared to the non-critical section. Hence, the service demand of the critical section is more than twice the service demand of the non-critical section.
Scenario 3: The non-critical section service demand is double the service demand of the critical section.
[0067] In each of the above listed scenarios, the Java program is accessed by concurrent users and the throughput of the program is observed as the concurrency is varied. Throughput is expressed as the number of iterations completed per unit time (per second). In each iteration, the program executes the critical section and the non-critical section once. In each of the cases, the CPU is the only hardware resource that is considered. During the test interval, the average throughput, the response time of the application and the utilization of the CPU are measured. Throughput is measured over the entire test interval and response time is taken as the average time taken to complete an iteration of the program consisting of a critical section and a non-critical section.
[0068] By way of a non-limiting example, Figure 5 illustrates performance prediction for a case where the critical and non-critical sections are of the same size. The computation module computes the performance metrics by calculating a throughput value. The throughput value saturates due to the critical section bottleneck at 2000 users and the average CPU utilization does not go beyond 67%. At 3000 users, the predicted throughput value is 1327 iterations per second whereas the actual throughput value is 1360 iterations per second.
[0069] By way of another non-limiting example, Figure 6 illustrates performance prediction of the multi-threaded application where the critical and non-critical sections are of different sizes. Here, the threads (users or customers) spend different amounts of time in the critical section and the non-critical section. Scenario 3 (as discussed above) is considered on a mid-range server, where the service demand of the non-critical section is twice that of the critical section. Figure 6 illustrates the predicted and actual throughput values of the multi-threaded application and the utilization of the CPU.
[0070] It is observed that the predicted throughput is within 10-15% of the actual throughput. For 2000 users, the throughput saturates at 890 iterations/sec and the CPU utilization is 73%. The predicted values are 825 iterations/sec and 67%. The difference in the observed and predicted values is attributed to the high service demand estimated using a single user test of the application. The system is also configured to predict performance of the multi-threaded application in situations where operations within the critical section not only use the CPU but also perform I/O operations on disk.
[0071] By way of another non-limiting example, Figure 7 illustrates performance prediction of the multi-threaded application where critical sections serve multi-class requests. In this case, requests from users belong to multiple classes and each class of requests has its own software module that may contain a critical section. Only two classes of requests are considered and the critical sections belonging to different classes have different demands. Figure 7 illustrates the predicted throughput for the individual classes and the overall CPU utilization from the different requests.
[0072] In this scenario, both the request classes use the same number of users. For example, when 500 users are accessing requests of class 1, another 500 users are accessing requests of class 2. The class 1 critical section CS1 has a service demand of 0.50 ms and the class 2 critical section CS2 has a service demand of 1.9 ms. Since the critical section of class 2 has the higher demand, it reaches saturation earlier, at 500 users itself, whereas the class 1 critical section continues to have higher throughput until 1200 users. From Figure 7, it may be verified that in the multi-class scenario as well, the system 102 is able to predict the throughput for the individual classes and the overall CPU utilization. However, the predicted throughput and utilization for class 2 are about 10% higher than the actual throughput and utilization. The system 102 attributes this to the higher service demand computed for critical sections using a timer routine around a critical section. The service demand is obtained through instrumentation of the program code corresponding to the synchronization block. The one or more modules capture the start-time and end-time of the synchronization block to obtain the service demand.
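The instrumentation described above, capturing the start-time and end-time of a synchronization block to obtain its service demand, might be sketched in Java as follows. The class name and the accounting scheme are illustrative assumptions of this sketch, not the specification's code:

```java
// Hypothetical instrumentation of a synchronized block: the time spent
// inside the block is accumulated so that the average service demand of
// the critical section can be reported.
public class CsTimer {
    private final Object lock = new Object();
    private long totalNanos = 0;
    private long entries = 0;

    // Runs the supplied work inside the critical section while recording
    // its start-time and end-time.
    void instrumentedCriticalSection(Runnable work) {
        synchronized (lock) {
            long start = System.nanoTime();   // start-time of the block
            work.run();
            long end = System.nanoTime();     // end-time of the block
            totalNanos += end - start;
            entries++;
        }
    }

    // Average service demand of the critical section in milliseconds.
    double serviceDemandMillis() {
        return entries == 0 ? 0.0 : (totalNanos / 1e6) / entries;
    }
}
```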
[0073] By way of another non-limiting example, Figure 8 illustrates performance prediction of the multi-threaded application where critical sections use resource pooling. Resource pool sizes of 50 and 100 are used and the number of users is varied. The service demand for resource pooling is taken as 13 ms, which is relatively high compared to the earlier experiments. This is to restrict the number of users to which the application scales up.
[0074] Figure 8 shows the total throughput of the entire resource pool and the per-core CPU utilization as the number of users is varied on a mid-range server. It may be observed that the throughput and utilization predicted by the system 102 are indicative of the actual values. Due to the multiple resources in the pool, the throughput is found to be much higher in this case and the application also scales to a higher number of users.
[0075] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 may be considered to be implemented in the above described system 102.
[0076] At block 302, one or more queuing networks are represented for resources employed to run the multi-threaded application to detect a concurrency level at which a resource bottleneck is encountered while accessing the multi-threaded software application.
[0077] At block 304, a performance metrics is computed for the multi-threaded software application. The performance metrics comprises of one or more parameters.
[0078] At block 306, an iterative technique with a pre-determined value of service demand is used to identify a relationship between the software queuing network and the hardware queuing network.
[0079] At block 308, value of the one or more parameters is determined in a form of a throughput value and a response time value for the multi-threaded software application, thereby determining the performance of the multi-threaded software application.
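As a rough, non-authoritative sketch of blocks 302-308, the following Java fragment solves each queuing network with exact mean value analysis (the iterative technique named in the claims) and iterates until the blocked-thread count stabilizes. The demand-update rule, the single-CPU hardware model, and all identifiers are simplifying assumptions of this sketch rather than the specification's method:

```java
import java.util.Arrays;

public class IterativeMva {

    // Exact MVA for a closed single-class network: fills q[] with the
    // per-station mean queue lengths for n customers and returns the
    // network throughput; demands[k] is the service demand of station k.
    static double mva(double[] demands, int n, double[] q) {
        Arrays.fill(q, 0.0);
        double x = 0.0;
        for (int users = 1; users <= n; users++) {
            double[] r = new double[demands.length];
            double totalR = 0.0;
            for (int k = 0; k < demands.length; k++) {
                r[k] = demands[k] * (1.0 + q[k]); // residence time at station k
                totalR += r[k];
            }
            x = users / totalR;                   // throughput with this population
            for (int k = 0; k < demands.length; k++) {
                q[k] = x * r[k];                  // Little's law per station
            }
        }
        return x;
    }

    // Outer loop: solve the software network, count the threads blocked at
    // the critical section, solve the hardware network with the non-blocked
    // population, inflate the critical-section demand by the CPU contention
    // factor, and stop once the blocked-thread count stabilizes.
    static double predictThroughput(double dCs, double dNcs, int users, double eps) {
        double cpuDemand = dCs + dNcs;  // total CPU work per program iteration
        double effCs = dCs;             // effective critical-section demand
        double prevBlocked = -1.0;
        double x = 0.0;
        for (int iter = 0; iter < 100; iter++) {
            // software queuing network: critical and non-critical sections
            double[] qSw = new double[2];
            x = mva(new double[]{effCs, dNcs}, users, qSw);
            double blocked = qSw[0];
            // hardware queuing network: only non-blocked threads use the CPU
            int active = Math.max(1, users - (int) blocked);
            double[] qHw = new double[1];
            double xHw = mva(new double[]{cpuDemand}, active, qHw);
            double rCpu = qHw[0] / xHw;  // CPU residence time (Little's law)
            // illustrative update rule: never let the updated demand drop
            // below the initial one (convergence condition 1)
            effCs = Math.max(effCs, dCs * rCpu / cpuDemand);
            if (prevBlocked >= 0.0 && Math.abs(blocked - prevBlocked) < eps) {
                break;                   // blocked-thread count has stabilized
            }
            prevBlocked = blocked;
        }
        return x;
    }
}
```

The predicted throughput can never exceed the reciprocal of the critical-section demand, which is the software bottleneck behaviour the examples above describe.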
[0080] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments of the invention. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
WE CLAIM:
1. A system to facilitate performance prediction of a multi-threaded software application in presence of resource bottlenecks, the system comprising: a processor; and a memory coupled to the processor, wherein the processor is capable of executing a plurality of modules stored in the memory, and wherein the plurality of modules comprises:
a representation module configured to represent one or more queuing networks for resources employed to run the multi-threaded software application, wherein the resources comprise hardware resources and software resources, wherein the queuing networks is one of a hardware queuing network or a software queuing network, wherein the queuing networks are used to detect a concurrency level at which first resource bottleneck is encountered while accessing the multithreaded software application;
a computation module configured to compute a performance metrics for the multi-threaded software application, wherein the performance metrics comprises of one or more parameters, the computation module is configured to:
use an iterative technique with a pre-determined value of service demand to develop a relationship between the software queuing network and the hardware queuing network; and
determine the one or more parameters of the performance metrics in a form of a throughput value and a response time value for the multithreaded software application, thereby determining the performance of the multi-threaded software application.
2. The system of claim 1, wherein the queuing network represents software and hardware resources contention.
3. The system of claim 1, wherein the software resources comprises of software modules and hardware resources comprises of a CPU (Central Processing Unit), a memory, a disk or a combination thereof.
4. The system of claim 1, wherein the software queuing network represents software modules corresponding to critical and non-critical sections, and the hardware queuing network represents all hardware resources.
5. The system of claim 1, wherein the critical section corresponds to a single shared resource and the non-critical section corresponds to a portion of code that may be executed simultaneously by any number of users.
6. The system of claim 1, wherein the resource bottlenecks comprise software bottlenecks, hardware bottlenecks, and bottlenecks due to any one of a thread pool, a database connection pool, sockets or locks on data item, or a combination thereof.
7. The system of claim 1, wherein the predetermined value of service demand comprises of:
a service demand of all the hardware resources at each of tiers of the software application; and
a service demand of all synchronization events.
8. The system of claim 1, wherein the computation module uses a mean value analysis
methodology as the iterative technique on the pre-determined value of service demand,
wherein the computation module is configured to:
obtain one or more blocked threads at a critical section by solving the software queuing network, the blocked threads being retained to provide comparison with iterative blocked threads;
obtain a residence time at each of the hardware resources by solving the hardware queuing network;
update an initial critical section service demand to an updated critical section service demand to reflect a waiting for hardware resources by using the residence time;
compare the blocked threads with the iterative blocked threads and the residence time;
check if the updated critical section service demand is lower than the initial critical section service demand;
identify a difference between the number of blocked threads for the initial critical section demand and the number of blocked threads due to the updated critical section demand, to check if the number of blocked threads due to the updated critical section demand is lower than that for the initial critical section demand; and
repeat the solving of software and hardware queuing networks in case the difference in the number of blocked threads is larger than a predefined value of blocked threads; and
obtain the performance metrics with the updated critical section service demand for the software application in case the difference in the number of blocked threads is lower than a predefined value of blocked threads, wherein the predefined value indicates that the error in the number of blocked threads has stabilized or converged over a number of iterations.
9. The system of claim 1, wherein the computation module computes the performance metrics for the software application for any number of users.
10. The system of claim 1, wherein the one or more parameters of performance metrics comprises of an average CPU (Central Processing Unit) utilization per core from the estimated throughput and total CPU service demand.
11. The system of claim 1, wherein the performance of the software application is predicted until the throughput saturates.
12. The system of claim 1, wherein the computing of the performance metrics converges when the error in the number of blocked threads stabilizes.
13. A method to facilitate performance prediction of a multi-threaded software application in presence of resource bottlenecks, the method comprising:
representing one or more queuing networks for resources employed to run the multi-threaded software application, wherein the resources comprise hardware resources and software resources, wherein the queuing networks is one of a hardware queuing network or a software queuing network, wherein the queuing networks are used to detect a concurrency level at which resource bottleneck is encountered while accessing the multi-threaded software application;
computing a performance metrics for the multi-threaded software application, wherein the performance metrics comprises of one or more parameters, the computing comprising:
using an iterative technique with a predetermined value of service demand to develop a relationship between the software queuing network and the hardware queuing network; and
analyzing the performance metrics to check for an occurrence of the bottlenecks; and
determining the one or more parameters in a form of a throughput value and a response time value for the multi-threaded software application, thereby determining the performance of the multi-threaded software application.
14. The method of claim 13, wherein the queuing network is indicative of software and hardware resources contention.
15. The method of claim 13, wherein the software resources comprises of software modules and hardware resources comprises of a CPU (Central Processing Unit), a memory, a disk or a combination thereof.
16. The method of claim 13, wherein the software queuing network represents software modules corresponding to critical and non-critical sections, and the hardware queuing network represents all hardware resources.
17. The method of claim 13, wherein the critical section corresponds to a single shared resource and the non-critical section corresponds to a portion of code that may be executed simultaneously by any number of users.
18. The method of claim 13, wherein the resource bottlenecks comprise software bottlenecks, hardware bottlenecks, and bottlenecks due to any one of a thread pool, a database connection pool, sockets or locks on data item, or a combination thereof.
19. The method of claim 13, wherein the predetermined value of the service demand comprises of:
a service demand of all the hardware resources at each of tiers of the software application; and
a service demand of all synchronization events.
20. The method of claim 13, wherein the performance metrics is computed by executing a mean value analysis methodology as the iterative technique on the predetermined value of the service demand, wherein the execution of the mean value analysis methodology comprises:
obtaining one or more blocked threads at a critical section by solving the software queuing network, the blocked threads being retained to provide comparison with iterative blocked threads;
obtaining a residence time at each of the hardware resources by solving the hardware queuing network;
updating an initial critical section service demand to an updated critical section service demand to reflect a waiting for hardware resources by using the residence time;
comparing the blocked threads with the iterative blocked threads and the residence time;
checking if the updated critical section service demand is lower than the initial critical section service demand;
identifying a difference between the number of blocked threads for the earlier critical section demand and the number of blocked threads due to the updated critical section demand, to check if the number of blocked threads due to the updated critical section demand is lower than that in the earlier iteration;
repeating the solving of the software and hardware queuing networks in case the difference in the number of blocked threads is larger than a predefined value of blocked threads; and
obtaining the performance metrics with the updated critical section service demand for the software application in case the difference in the number of blocked threads is lower than a predefined value of blocked threads, wherein the predefined value
indicates that the error in number of blocked threads has stabilized or converged over a number of iterations.
21. The method of claim 13, wherein the performance prediction estimates the performance metrics for the software application for any number of users.
22. The method of claim 13, wherein the one or more parameters of performance metrics comprises of an average CPU (Central Processing Unit) utilization per core from the estimated throughput and total CPU service demand.
23. The method of claim 13, wherein the performance of the software application is predicted until the throughput saturates.
24. The method of claim 13, wherein the performance metrics is computed for a critical section and non-critical section of the same size, critical and non-critical sections of different sizes, a critical section with multi-class requests, a critical section with resource pooling, or a combination thereof.
25. A computer program product having embodied thereon a computer program to facilitate performance prediction of a multi-threaded software application in presence of resource bottlenecks, the computer program product comprising:
a program code for representing one or more queuing networks for resources employed to run the multi-threaded software application, wherein the resources comprise hardware resources and software resources, wherein the queuing networks is one of a hardware queuing network or a software queuing network, wherein the queuing networks are used to detect a concurrency level at which resource bottleneck is encountered while accessing the multi-threaded software application;
a program code for computing a performance metrics for the multi-threaded software application, wherein the performance metrics comprises of one or more parameters, the program code for computing comprising:
a program code for using an iterative technique with a predetermined
value of service demand to develop a relationship between the software queuing
network and the hardware queuing network; and
a program code for analyzing the performance metrics to check for an
occurrence of the bottlenecks; and
a program code for determining the one or more parameters in a form of a throughput value and a response time value for the multi-threaded software application, thereby determining the performance of the multi-threaded software application.