
A System And Method For Network Bandwidth Sizing

Abstract: The present invention relates to a system and method for network bandwidth sizing at the branch level rather than the server level. Further, the invention provides network bandwidth sizing at the branches of an enterprise by taking into account both throughput and response time requirements. The object of the invention is to provide a system and method for bandwidth allocation through network capacity planning that can be performed accurately for large enterprises with heavy workloads and applications, by applying a model based upon an Approximate Mean Value Analysis (AMVA) algorithm.


Patent Information

Application #
Filing Date
04 March 2010
Publication Number
45/2012
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2020-02-12
Renewal Date

Applicants

TATA CONSULTANCY SERVICES LIMITED
NIRMAL BUILDING, 9TH FLOOR, NARIMAN POINT, MUMBAI 400021, MAHARASHTRA, INDIA

Inventors

1. MANSHARAMANI RAJESH
TATA CONSULTANCY SERVICES, PERFORMANCE ENGINEERING LAB, GATEWAY PARK, AKRUTI BUSINESS PORT, MIDC ROAD NO 13, ANDHERI(E), MUMBAI 400 093, MAHARASHTRA, INDIA

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)

Title of invention: A SYSTEM AND METHOD FOR NETWORK BANDWIDTH SIZING

APPLICANT: TATA Consultancy Services Limited, a company incorporated in India under The Companies Act, 1956, having address: Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India.

The following specification particularly describes the invention and the manner in which it is to be performed.

FIELD OF THE INVENTION: This invention relates to the field of network bandwidth sizing. More particularly, the invention relates to a system and method for network bandwidth sizing at a branch level that considers both sizing for throughput and sizing for response times.

PRIOR-ART REFERENCES
1. Lazowska, Zahorjan, Graham, Sevcik. Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984.
2. J. D. Little. A Proof of the Queueing Formula L = λW. Operations Research 9, pp. 383-387, 1961.

BACKGROUND OF THE INVENTION: Nowadays many users and administrators access computer-network-based applications, such as on-line shopping, airline reservations, rental car reservations, hotel reservations, on-line auctions, on-line banking, and stock market trading, via the communication network. Network bandwidth management thus plays a crucial role in ensuring hassle-free data transfer for such applications and users. Accordingly, it is important for a service provider (sometimes referred to as a "content provider") to provide high-quality uninterrupted services. In order to do so, it has become desirable for such service providers to perform appropriate network bandwidth capacity planning to ensure that they can adequately service the demands placed on their systems by their clients in a desired manner.
Thus there arises a need for solutions that ensure that capacity planning of network bandwidth is done based upon several factors, including workload, priority, etc. Network capacity planning is always a challenge for any enterprise due to changing workloads and applications. For example, a banking system may have 30% deposit transactions, 40% withdrawal transactions, and 30% inquiry transactions. First, we need to identify which transaction has more weightage or priority, and accordingly one has to plan the capacity of the networks as workloads, applications, or transaction mixes change.

The conventional way of network bandwidth sizing is to look at the overall payload requirement per type of transaction and then budget the bandwidth to manage the transaction throughput. For example, with an average of 2KB of data per transaction and 100 transactions per second, one needs to size for 200KB/sec, or 1.6Mbps of network bandwidth, excluding network overheads. We refer to this method of sizing as 'Sizing for Throughput'. This method of network bandwidth sizing usually works for data centres that support thousands of concurrent users. However, one also has to plan bandwidth for the user side of the network; in particular, in scenarios such as banking, financial services, and insurance, the organization has a number of branches all over the country or all over the world. The number of users per branch may be of the order of 10 to 50. In such cases the conventional method of sizing for throughput may not work for network bandwidth sizing. Consider, for example, the same payload of 2KB per transaction and a branch with 10 users, each on average submitting a banking transaction once every 50 seconds. Thus we have 10 transactions per 50 seconds, i.e., 0.2 transactions/sec. Going by sizing for throughput, we need 0.2 x 2KB = 0.4KB/sec = 3.2Kbps of network bandwidth at the branch.
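The throughput-only arithmetic in the example above can be sketched directly. This is an illustrative helper (the function name is an assumption, not from the specification):

```python
def size_for_throughput(tps, payload_kb):
    """Bandwidth in Kbps needed to carry the payload at the given
    transaction rate (1 KB/sec = 8 Kbps), ignoring protocol overheads."""
    return tps * payload_kb * 8

# Data centre: 100 transactions/sec at 2KB each -> 1600 Kbps = 1.6 Mbps
print(size_for_throughput(100, 2))       # prints 1600
# Branch: 10 users, one transaction every 50 sec -> 0.2 tps -> 3.2 Kbps
print(size_for_throughput(10 / 50, 2))   # prints 3.2
```

As the text notes, this calculation scales with aggregate load only, which is why it becomes misleading for a lightly loaded branch link.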
However, if the service level agreement is an average of 1 second in the network, then even a single request needs 2KB/sec = 16Kbps of network bandwidth. In other words, the methodology of sizing for throughput breaks down when we wish to size for response times as well.

Several inventions have been made in this domain; some of them known to us are described below. US Patent 6765873 describes a connection bandwidth management process and system for use in a high-speed packet switching network. One of the main objects of that invention is to provide a connection bandwidth management process and system which rely on efficient monitoring of network resource occupancy to re-compute the bandwidth allocated to connections carried on a given link, so that the overall bandwidth capacity of the link is not exceeded, while solving the shortcomings of the conventional oversubscription technique. This patent also describes network bandwidth allocation based on throughput or traffic requirements. However, it fails to present a system/process which performs network bandwidth planning by considering both throughput requirements and response time requirements.

US Patent 7072295 describes the allocation of bandwidth to data flows passing through a network device. Network bandwidth is allocated to committed data traffic based on a guaranteed data transfer rate and a queue size of the network device, and bandwidth is allocated to uncommitted data traffic using a weighted maximum/minimum process, which allocates bandwidth to the uncommitted data traffic in proportion to a weight associated with that traffic. However, the invention does not disclose a process/method which performs bandwidth planning; rather, it proposes a method of bandwidth allocation for the network. Also, the '295 patent does not disclose use of the inverse of the MVA or AMVA algorithm.
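The response-time floor in the example above can be stated the same way: a single request's payload must cross the link within the network SLA, no matter how low the aggregate throughput is. A minimal sketch (function name illustrative):

```python
def min_bandwidth_for_response(payload_kb, sla_sec):
    """Minimum Kbps so that one request's payload fits within the SLA."""
    return payload_kb / sla_sec * 8

# 2KB payload with a 1-second network SLA needs 16 Kbps -- five times the
# 3.2 Kbps that sizing for throughput alone suggested for the branch.
print(min_bandwidth_for_response(2, 1.0))   # prints 16.0
```

The gap between the two numbers (16 Kbps vs. 3.2 Kbps) is exactly the breakdown the specification describes.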
US Patent 7336967 describes a method and system for providing load-sensitive bandwidth allocation. The system, according to one embodiment, supports a bandwidth allocation scheme in which network elements are dynamically assigned bandwidth over the WAN based upon the amount of traffic that the respective network elements have to transmit. The specific bandwidth allocation scheme is designed to ensure maximum bandwidth efficiency (i.e., minimal waste due to unused allocated bandwidth) and minimum delay of return channel data. The scheme is tunable according to the mixture, frequency, and size of user traffic, and the system ensures that throughput and bandwidth efficiency are preserved during all periods of operation. However, the invention fails to present a system/process which performs network bandwidth planning by considering both throughput requirements and response time requirements.

US publication 20080221941 describes a system and method for capacity planning for computing systems. The capacity planning framework includes a capacity analyzer that receives a determined workload profile and determines a maximum capacity of the computing system under analysis for serving the workload profile while satisfying a defined quality of service (QoS) target. However, this document assumes that sufficient network bandwidth is present for allocation to the network, and it does not disclose sizing for network bandwidth.

Suri et al. (Rajan Suri, Sushanta Sahu, Mary Vernon, "Approximate Mean Value Analysis for Closed Queuing Networks with Multiple-Server Stations," Proceedings of the 2007 Industrial Engineering Research Conference) describe the use of closed queuing networks in modeling various systems such as FMS, CONWIP material control, computer/communication systems, and health care.
Suri et al. (2007) explain how to perform efficient and accurate approximate MVA computations for multi-server stations. In the proposed invention, by contrast, the inventors model the network link as a single-server station and can accurately use the inverse of MVA to arrive at bandwidth given response time requirements. All the above-mentioned prior arts fail to recognize the significance of bandwidth planning; rather, they focus on bandwidth allocation, and do not address planning at the branch level for enterprises which have heavy workloads and applications. Also, none of the existing techniques, methods, and processes considers response time constraints.

In order to solve the above-mentioned problems, the present invention proposes a system and method for network bandwidth sizing at the branch level rather than the server level. It considers both sizing for throughput and sizing for response times for accurate planning of network bandwidth allocation for large enterprises which have heavy workloads and applications. Other features and advantages of the present invention will be explained in the following description of the invention with reference to the appended drawings.

OBJECTS OF THE INVENTION: The primary object of the invention is to provide a system and method for bandwidth allocation by performing network capacity planning by applying a model based on the Approximate Mean Value Analysis (AMVA) algorithm. Another object of the invention is to provide a system and method for bandwidth allocation by performing network capacity planning, especially for large enterprises having heavy workloads and diverse applications, by applying a model based on the AMVA algorithm. Yet another object of the invention is to provide a system and method for network bandwidth sizing at the branch level rather than the server level.
Yet another object of the invention is to provide a system and method for accurate bandwidth planning by considering both "sizing for throughput" and "sizing for response times". Yet another object of the invention is to provide a system and method for obtaining an accurate estimate of bandwidth under response time constraints using an inverse of the AMVA algorithm. Yet another object of the invention is to allocate the required bandwidth depending upon the class of transactions and applications after network bandwidth planning has been performed.

SUMMARY OF THE INVENTION: The present invention proposes a system and a method for network bandwidth sizing at the branch level rather than the server level. Further, the invention provides a methodology for network bandwidth sizing at the branch of an enterprise by taking into account both throughput and response time requirements. According to one embodiment of the invention, the system and method obtain an accurate estimate of bandwidth under response time constraints using an inverse of the AMVA algorithm.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS: The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings example constructions of the invention; however, the invention is not limited to the specific system and method disclosed in the drawings.
Figure 1 shows a typical IT system architecture wherein the various branches are connected to different servers through a Wide Area Network (WAN).
Figure 2 shows a closed Queuing Network Model which is useful for sizing of network bandwidth at a given branch level.
Figure 3 is a flow chart illustrating the workflow of the invention according to one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION: Some embodiments of this invention, illustrating its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and open ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any methods and systems similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred methods and systems are now described.

Accordingly, the invention provides a system comprising at least one branch having two or more client computers connected to a datacenter, to send input to the datacenter (uplink) and to receive output from the datacenter (downlink); characterized by the datacenter having one central server with a database server, web server, or application server for receiving, storing, and processing the input from the user and sending the output to the user through the communication network; wherein the system allocates the bandwidth per branch of an enterprise's network taking into account both throughput and response time requirements, comprising the computer-implemented steps of:
a. Computing a throughput and a think time per class for the branch of the enterprise's network, wherein the user specifies a think time per class or a throughput per class as an input;
b. Computing an initial value for the bandwidth per branch of the enterprise's network which satisfies the throughput requirement, from the throughput known from step (a);
c.
Computing a feasible value for the bandwidth per branch of the enterprise's network which satisfies the response time requirement, using Approximate Mean Value Analysis (AMVA); and
d. Computing a final bandwidth per branch of the enterprise's network which satisfies the response time requirement, using the algorithm.

The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. The present invention requires the following inputs for network bandwidth sizing:

Uplink: the network bytes sent by each transaction from the user to the data centre ('bytes sent to host').
Downlink: the network bytes received per transaction from the data centre at the user terminal ('bytes received from host').
Round trip latency time (T): the propagation delay on the network link from the user to the data centre and from the data centre back to the user. Note that the round trip latency has nothing to do with network bandwidth, but it affects response times.
Maximum network link utilization (U): the network utilization should not exceed U.
Number of classes (C): the number of types of transactions or types of users.
Number of concurrent users in the branch, per class (Ni): for each class we need to know the number of concurrent users. A concurrent user is a user who has a session with the data centre; even if the session is idle, we refer to the user as concurrent.
Average think time per class (Zi): the time spent at the user terminal, outside of the user waiting for a response. It can be data entry time, idle time, and time spent viewing output.
Throughput per class (Xi): either the average think time or the throughput must be specified, depending on what is most convenient for the user of this methodology. Throughput is specified in user transactions per second.
Average number of bytes sent to host, per class (Pto,i): also referred to as the payload of traffic to the data centre.
Average number of bytes received from host, per class (Pfrom,i): also referred to as the payload of traffic from the data centre.
Message overhead, per class (Oi): the number of bytes outside of the application transaction payload, for example for TCP/IP headers and HTTP headers. We refer to Oi as a percentage overhead compared to the payload size. For the sake of simplicity the overhead percentage is taken to be the same for the Pto,i and Pfrom,i payloads.
Average number of round trips per class (ri): for each transaction class, ri refers to the number of network round trips. This is a function of the application architecture, the underlying protocol, and the network fragmentation.
Average response time target per class (Rtarget,i): for each class of transactions, the average response time target needs to be specified in seconds.

Given all these inputs, the proposed methodology computes the overall network bandwidth required on the link from the branch to the data centre. The outputs provided by the methodology are:

Total network bandwidth required (B): the overall bandwidth on the link from branch to data centre, taking into account application payload, message overhead, and the maximum network utilization target. Note that B is the minimum bandwidth required to meet the input criteria specified in the list above.
Average response time per class (Ri): the actual average response time estimate that needs to be compared against the target, Rtarget,i, for each class i. Note that we should have Ri ≤ Rtarget,i.
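The computation described in steps (a)-(d) can be sketched in code. The specification names AMVA and an "inverse of AMVA" but this extract does not reproduce the algorithm itself, so the sketch below is an assumption: it uses the standard Bard-Schweitzer approximate MVA for a multiclass closed network with the branch link modelled as a single-server station (as the text states), and inverts it by a doubling-then-bisection search over the bandwidth B. All function names and the search strategy are illustrative, not the patented method.

```python
def amva(N, Z, D, tol=1e-6, max_iter=10000):
    """Average response time per class at a single-server station.

    N[i]: concurrent users, Z[i]: think time (s), D[i]: service demand (s).
    Bard-Schweitzer approximate MVA iteration.
    """
    C = range(len(N))
    Q = [N[i] / 2.0 for i in C]                  # initial queue-length guess
    for _ in range(max_iter):
        # Arrival-theorem approximation: an arriving class-i job sees all
        # queued jobs except a 1/N[i] share of its own class.
        R = [D[i] * (1 + sum(Q) - Q[i] / N[i]) for i in C]
        X = [N[i] / (R[i] + Z[i]) for i in C]    # Little's law on the cycle
        Q_new = [X[i] * R[i] for i in C]         # Little's law at the link
        if max(abs(Q_new[i] - Q[i]) for i in C) < tol:
            break
        Q = Q_new
    return R

def size_bandwidth(N, Z, payload, overhead, T, r, R_target, U=0.7):
    """Smallest link bandwidth (bytes/s) meeting the utilization cap U and
    per-class response-time targets; illustrative inverse-MVA search."""
    C = range(len(N))

    def feasible(B):
        # Per-class link demand in seconds: total payload (with overhead) / B.
        D = [payload[i] * (1 + overhead[i]) / B for i in C]
        R = amva(N, Z, D)
        X = [N[i] / (R[i] + Z[i]) for i in C]
        util = sum(X[i] * D[i] for i in C)
        # Round-trip latency r[i] * T adds to response time but not to load.
        return util <= U and all(R[i] + r[i] * T <= R_target[i] for i in C)

    B = 1.0
    while not feasible(B):                       # find a feasible upper bound
        B *= 2
    lo, hi = B / 2, B
    for _ in range(60):                          # bisect down to the minimum
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

The search relies on response time and utilization both decreasing as B grows, so the smallest feasible B is well defined. For example, a branch with 10 users, 50 s think time, 2KB total payload, and a 1 s target with 50 ms round-trip latency yields a bandwidth noticeably above the throughput-only figure, matching the specification's motivating example.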

Documents

Application Documents

# Name Date
1 574-MUM-2010-CORRESPONDENCE(IPO)-(27-10-2010).pdf 2010-10-27
2 OTHERS [06-06-2016(online)].pdf 2016-06-06
3 Examination Report Reply Recieved [06-06-2016(online)].pdf 2016-06-06
4 Description(Complete) [06-06-2016(online)].pdf 2016-06-06
5 Claims [06-06-2016(online)].pdf 2016-06-06
6 Abstract [06-06-2016(online)].pdf 2016-06-06
7 abstract1.jpg 2018-08-10
8 574-MUM-2010_EXAMREPORT.pdf 2018-08-10
9 574-mum-2010-form 3.pdf 2018-08-10
10 574-MUM-2010-FORM 3(29-2-2012).pdf 2018-08-10
11 574-mum-2010-claims.pdf 2018-08-10
12 574-MUM-2010-FORM 3(24-3-2011).pdf 2018-08-10
13 574-MUM-2010-FORM 3(14-8-2012).pdf 2018-08-10
14 574-MUM-2010-FORM 26(16-4-2010).pdf 2018-08-10
15 574-mum-2010-form 2.pdf 2018-08-10
16 574-mum-2010-form 2(title page).pdf 2018-08-10
17 574-MUM-2010-CORRESPONDENCE(17-3-2010).pdf 2018-08-10
18 574-MUM-2010-FORM 18(17-3-2010).pdf 2018-08-10
19 574-mum-2010-form 1.pdf 2018-08-10
20 574-MUM-2010-FORM 1(15-4-2010).pdf 2018-08-10
21 574-mum-2010-drawing.pdf 2018-08-10
22 574-mum-2010-description(complete).pdf 2018-08-10
23 574-mum-2010-correspondence.pdf 2018-08-10
24 574-MUM-2010-CORRESPONDENCE(IPO)-(FER)-(4-6-2015).pdf 2018-08-10
25 574-MUM-2010-CORRESPONDENCE(29-2-2012).pdf 2018-08-10
26 574-MUM-2010-CORRESPONDENCE(24-3-2011).pdf 2018-08-10
27 574-MUM-2010-CORRESPONDENCE(17-3-2010).pdf 2018-08-10
28 574-MUM-2010-CORRESPONDENCE(16-4-2010).pdf 2018-08-10
29 574-MUM-2010-CORRESPONDENCE(15-4-2010).pdf 2018-08-10
30 574-MUM-2010-CORRESPONDENCE(14-8-2012).pdf 2018-08-10
31 574-mum-2010-claims.pdf 2018-08-10
32 574-mum-2010-abstract.pdf 2018-08-10
33 574-MUM-2010-HearingNoticeLetter20-09-2019.pdf 2019-09-20
34 574-MUM-2010-Written submissions and relevant documents (MANDATORY) [04-10-2019(online)].pdf 2019-10-04
35 574-MUM-2010-RELEVANT DOCUMENTS [04-10-2019(online)].pdf 2019-10-04
36 574-MUM-2010-PETITION UNDER RULE 137 [04-10-2019(online)].pdf 2019-10-04
37 574-MUM-2010-PatentCertificate12-02-2020.pdf 2020-02-12
38 574-MUM-2010-IntimationOfGrant12-02-2020.pdf 2020-02-12
39 574-MUM-2010-RELEVANT DOCUMENTS [25-09-2021(online)].pdf 2021-09-25
40 574-MUM-2010-RELEVANT DOCUMENTS [30-09-2022(online)].pdf 2022-09-30
41 574-MUM-2010-RELEVANT DOCUMENTS [28-09-2023(online)].pdf 2023-09-28

E-Register / Renewals

3rd: 04 Mar 2020

From 04/03/2012 - To 04/03/2013

4th: 04 Mar 2020

From 04/03/2013 - To 04/03/2014

5th: 04 Mar 2020

From 04/03/2014 - To 04/03/2015

6th: 04 Mar 2020

From 04/03/2015 - To 04/03/2016

7th: 04 Mar 2020

From 04/03/2016 - To 04/03/2017

8th: 04 Mar 2020

From 04/03/2017 - To 04/03/2018

9th: 04 Mar 2020

From 04/03/2018 - To 04/03/2019

10th: 04 Mar 2020

From 04/03/2019 - To 04/03/2020

11th: 04 Mar 2020

From 04/03/2020 - To 04/03/2021

12th: 04 Mar 2021

From 04/03/2021 - To 04/03/2022

13th: 18 Feb 2022

From 04/03/2022 - To 04/03/2023

14th: 03 Mar 2023

From 04/03/2023 - To 04/03/2024

15th: 09 Mar 2024

From 04/03/2024 - To 04/03/2025

16th: 27 Feb 2025

From 04/03/2025 - To 04/03/2026