
Co Operative Caching With Reduced Costs

Abstract: The present invention relates to content delivery networks and, more particularly, to caching of content in content delivery networks. Disclosed are a method and system for enabling transfer of content in a network. A user requests content from a remote source, and a coordinating module checks whether the content is present in any storage device in the network. The coordinating module checks whether constraints related to the storage device affect transfer of the content to the user, and the content is transferred from the storage device to the user if the constraints do not affect the transfer. The constraints related to the storage devices are obtained, requests for content made by users are predicted at periodic intervals of time, and content is placed in the storage devices based on the prediction. FIG. 2


Patent Information

Filing Date: 24 September 2010
Publication Number: 13/2013
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

Alcatel Lucent
3, avenue Octave Gréard, 75007 Paris, France.

Inventors

1. Sharad Jaiswal
44/45, Shringar Apts., 4th Main, 18th Cross, Malleswaram, Bangalore, India 560055
2. Anirban Majumder
Alcatel-Lucent India, Nagawara Village, Kasaba Taluk, Outer Ring Road, Manyata Embassy Business Park, Bangalore 560045, Karnataka
3. K V M Naidu
Alcatel-Lucent India, Nagawara Village, Kasaba Taluk, Outer Ring Road, Manyata Embassy Business Park, Bangalore 560045, Karnataka
4. Nisheeth Shrivastava
Alcatel-Lucent India, Nagawara Village, Kasaba Taluk, Outer Ring Road, Manyata Embassy Business Park, Bangalore 560045, Karnataka

Specification

FORM 2
The Patents Act, 1970
(39 of 1970)
&
The Patents Rules, 2005

COMPLETE SPECIFICATION
(SEE SECTION 10 AND RULE 13)

TITLE OF THE INVENTION

CO-OPERATIVE CACHING WITH REDUCED COSTS

Name: Alcatel Lucent
Nationality: France
Address: 3, avenue Octave Gréard, 75007 Paris, France.

The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed:-

TECHNICAL FIELD
1. The present invention relates to content delivery networks and, more particularly, to caching of content in content delivery networks.

BACKGROUND
2. Co-operative caching is a strategy in which caches co-operate with each other to bring down the network delivery cost. Content downloaded by a cache serves not only its own users but also the users connected to other caches in the network. Co-operative caching can reduce the load on a web-site serving content to users. For example, consider a web-site from which users download and rent or purchase full-length movies. Each movie file will have a size of tens of GBs and, assuming hundreds of thousands of active users, one can easily envision the web-site having to serve petabytes of data over a fat, expensive pipe. To reduce the load, users can be made to act as content relays, i.e. requests can be redirected by the web-site to be served by a user who has a cached copy of the requested content.
3. In networks used by the users, the users use a retail broadband connection in a home or enterprise setting, and the network costs will usually be defined by a tiered broadband cost model, i.e. the user pays a fixed amount every month for usage within an allowed limit. When the limit is exceeded, the user pays a per-byte charge for usage beyond the allowed limit. For example, the user may have to pay a pre-determined price for usage up to 2 GB, and for upload/download of content beyond 2 GB, the user may have to pay extra charges for every unit (which may be a kilobyte, a megabyte, etc.). Networks used by the user may have a dedicated network back-haul link, such as a fixed wire-line connection (e.g. DSL) or a wireless data connection (e.g. 3G). The systems present in the networks may also have a radio interface, such as Bluetooth or Wi-Fi, that is used to transfer content to users.
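The tiered broadband cost model described above can be sketched as a simple cost function; the fixed fee, the 2 GB allowance and the per-byte price below are illustrative values only, not figures from the specification.

```python
def monthly_cost(usage_bytes, fixed_fee, allowance_bytes, price_per_byte):
    """Tiered broadband cost: a fixed fee covers usage up to the
    allowance; every byte beyond it is charged per unit."""
    overage = max(0, usage_bytes - allowance_bytes)
    return fixed_fee + overage * price_per_byte

GB = 2 ** 30
# Illustrative tariff: a flat fee for the first 2 GB, then a per-byte charge.
print(monthly_cost(int(1.5 * GB), fixed_fee=500.0,
                   allowance_bytes=2 * GB, price_per_byte=1e-7))  # within the allowance
print(monthly_cost(3 * GB, fixed_fee=500.0,
                   allowance_bytes=2 * GB, price_per_byte=1e-7))  # 1 GB of overage
```

A co-operative caching system that redirects a request to a cache whose allowance is already exhausted would make this overage term grow, which is exactly the cost the invention seeks to avoid.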
4. Current systems employing co-operative caching, such as co-operative web proxies and peer-to-peer networks, may increase costs for the user if the bandwidth limit for the user has been exceeded.

SUMMARY
5. In view of the foregoing, an embodiment herein provides a method for enabling transfer of content in a network, the method comprising steps of a user requesting content from a remote source; a coordinating module checking if the content is present in any storage device in the network; the coordinating module checking if constraints related to the storage device affect transfer of the content to the user; and transferring the content from the storage device to the user if constraints related to the storage device do not affect transfer of the content to the user. The method further comprises steps of obtaining the constraints related to the storage devices; predicting requests for content made by users at periodic intervals of time; and placing content in the storage device based on the prediction if constraints related to the storage device do not affect transfer of the content to the user. The content to be placed is obtained by performing a rounding technique, done by the algorithm Round, on parameters, and obtaining the set of content to be placed at each of the content caches. The parameters include at least one of the storage limit of the storage devices, the upload limit of the storage devices, the download limit of the storage devices, the price per unit of upload exceeding the upload limit of the storage devices, the price per unit of download exceeding the download limit of the storage devices, the number of the storage devices, the predicted requests at the storage devices and the optimum fractional solution of the Integer Linear Program (ILP). The content to be placed may also be obtained by assigning a utility value to placement of the content at each of the storage devices and placing a copy of the content in the storage devices based on the utility value. The method further comprises steps of determining similarity between pairs of the storage devices; combining the storage devices into a cluster based on the similarity; and transferring the content between the storage devices in the cluster.
The method further comprises steps of obtaining the constraints related to the storage devices; downloading a single copy of the content from a content server; placing the downloaded copy of the content in the storage devices based on the constraints; and serving the requests for the downloaded copy from the storage devices based on the constraints. The constraints include at least one of the storage capacity of the storage devices; the upload limit of the storage devices; the download limit of the storage devices; the price per unit of upload exceeding the upload limit of the storage devices; the price per unit of download exceeding the download limit of the storage devices; and the number of local requests that can be served from a local copy of the content. The storage device is at least one of a computer; a mobile phone; Wireless Fidelity (Wi-Fi) access points; Worldwide Interoperability for Microwave Access (WiMAX) access points; and public info-stations. The storage capacity at each of the storage devices is made at least double the memory size required by the content.
6. Embodiments further disclose a Coordinator module enabling transfer of content in a network, the Coordinator module comprising at least one means adapted for allowing a user to request content from a remote source; checking if the content is present in any storage device in the network; checking if constraints related to the storage device affect transfer of the content to the user; and transferring the content from the storage device to the user if constraints related to the storage device do not affect transfer of the content to the user. The Coordinator module is further adapted to obtain the constraints related to the storage devices; predict requests for content made by users at periodic intervals of time; and coordinate placing of content in the storage device based on the prediction if constraints related to the storage device do not affect transfer of the content to the user. The Coordinator module is further adapted to determine similarity between pairs of the storage devices; combine the storage devices into a cluster based on the similarity; and transfer the content between the storage devices in the cluster. The Coordinator module is further adapted to obtain the constraints of end-nodes; download a single copy of the content from a content server; coordinate placing of the downloaded copy of the content in the storage devices based on the constraints; and serve the requests for the downloaded copy from the storage devices based on the constraints.
7. These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings.

BRIEF DESCRIPTION OF THE FIGURES
8. The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
9. FIG. 1 illustrates a block diagram of users sharing content through co-operative caching, according to an embodiment herein;
10. FIG. 2 is a flowchart depicting a method for placing data in each content cache, according to an embodiment herein;
11. FIG. 3 is a flowchart depicting the Dataplacement algorithm, according to an embodiment herein;
12. FIG. 4 is a flowchart depicting the algorithm Round, according to an embodiment herein;
13. FIG. 5 is a flowchart depicting the Greedy algorithm, according to an embodiment herein;
14. FIG. 6 is a flowchart depicting a method for clustering caches, according to an embodiment herein;
15. FIG. 7 is a flowchart depicting a method for initial placement of content, according to an embodiment herein.

DETAILED DESCRIPTION OF EMBODIMENTS
16. The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
17. The embodiments herein disclose a system and method for reducing the cost of content delivery through a managed network of co-operative caches in the network, while keeping in consideration the limitations of end-node resources. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown embodiments.
18. FIG. 1 illustrates a block diagram of users sharing content through co-operative caching. In a network having content caches 108, content may be obtained from a content server 105 and cached at multiple content caches 108 in the network. The content in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Content caches 108 are end-nodes in the system that are capable of storing content. The content cache 108 may be a computer, a mobile phone, any device with storage space, a Wireless Fidelity (Wi-Fi) access point, a Worldwide Interoperability for Microwave Access (WiMAX) access point or a public info-station. The content may be any data content, such as movies, songs, software updates, or any other information content. If a user 101 wants to download a movie from a web-site, then the user 101 may have to download the movie from the content server 105. Once the movie has been downloaded, the movie would be cached in the content cache 108 associated with the user 101, and the movie can then be downloaded by another user, user A 101, from the content cache 108 associated with the user 101. User 101 is at the end node of the network; the user 101 may be associated with one or multiple content caches 108, and the user 101 may access the content cache 108 directly through wired means or through wireless means. For example, the content cache 108 may be a memory on a computing device used by the user 101. The content cache 108 may be accessed through an Ethernet Local Area Network (LAN), Wi-Fi, WiMAX or Bluetooth. The content server 105 is a repository of all items requested, uploaded and shared by users 101. Any request for content is served locally from a content cache 108, from the content server 105 or from any other content cache 108. The content server 105 obtains content from the World Wide Web 107 and caches the content. The World Wide Web 107 comprises a plurality of web-sites from which data can be downloaded or onto which data can be uploaded.
The World Wide Web 107 may also be a server serving content in the network or any source from which content can be downloaded by a user 101. For example, the World Wide Web 107 may include the sites YouTube and Flickr. A content aggregator 106 periodically collects new contents from the World Wide Web 107 and makes them available to the content server 105. For example, if there is a new video uploaded onto YouTube, then the content aggregator 106 obtains the video and makes the video available to the content server 105. An analyzer 104 keeps track of the popularity of contents across the network. The analyzer 104 estimates the demands for content at each content cache 108 based on the usage statistics, sharing and mobility patterns of users 101. The analyzer 104 also predicts future demand for new content available at the content server 105 and supplies the prediction to a coordinator 103. The analyzer 104 keeps track of user behavior and the state of each cache. For example, the analyzer 104 may keep track of items stored, spare storage and capacity on the back-haul links. Based on the information obtained from the analyzer 104, the coordinator 103 computes the set of contents to be stored at each content cache 108 and provides the computed information to a placement and routing module 102. For example, the coordinator 103 may run an algorithm to compute the set of contents to be stored at each content cache 108. The placement and routing module 102 places the contents on the content caches 108. If a request for content cannot be served locally, then the placement and routing module 102 redirects the request to the appropriate cache location. While computing the set of contents, the coordinator 103 also considers the resources available to each user 101.
For example, the resources available to the user 101 may be the storage capacity at the content cache 108 associated with user 101, the upload/download limits of the content cache 108 and any other resource available to the user 101. The coordinator 103 computes the set of contents to be placed at each content cache 108 in such a manner as to reduce the cost of content delivery to each user 101 and at the same time reduce the load on the web-site that hosts the content. For example, if user B 101 has reached the upload limit and if further uploads from the content cache 108 associated with user B 101 would be charged, then the coordinator 103 obtains the required content from the content server 105 or from any other content cache 108 instead of obtaining the content from the content cache 108 associated with user B 101.
19. If the user 101 requires some content, then the content would be searched for in the local content cache 108 of the user 101. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the value of the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request for the content would be sent to the placement and routing module 102. The placement and routing module 102 routes the request to the appropriate content cache 108. For example, if the required content is present in the content cache 108 associated with user C 101, then the request would be routed to the content cache 108 associated with user C 101. If the content cache 108 associated with user C 101 has not exceeded its upload limit, then the content would be obtained from the content cache 108 and sent to the requesting user 101. If the required content is not present in any of the content caches 108, then the required content would be obtained from the content server 105 and sent to the requesting user 101. Once the content has been downloaded by the requesting user 101, the user 101 can cache the content in the content cache 108 and allow requests for the content from other users 101 in the network to be serviced from his content cache 108. The user 101 may be given an incentive for allowing requests to be serviced from his content cache 108, and the user may subscribe to the service of co-operative caching, where users 101 in the network allow their requests for content to be serviced from their content caches 108. For example, the Internet Service Provider (ISP) may be the service provider, and the user 101 may be given a discount on the next download for allowing requests to be serviced from his content cache 108. The service provider may act as an intermediary between the web-sites and the users 101.
The web-site can give the service provider information about the content that can potentially be downloaded by users 101 and statistics of usage of the content. The service provider can incentivize users 101 to set aside some local storage and spare network resources for co-operative caching. Users 101 can also upload content onto the World Wide Web 107. When users 101 upload or share content, the content caches 108 push the content to the content server 105. The content caches 108 are also responsible for providing the usage details and other statistics (for example, mobility of users) to the analyzer 104 component.
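The request routing behaviour described above (serve locally, otherwise from a peer cache that holds the content and is still within its upload limit, otherwise from the content server) can be sketched as follows; the cache names, dictionary fields and limits are illustrative assumptions, not part of the specification.

```python
def route_request(item, requester, caches, server="content_server"):
    """Pick a source for `item`: the requester's own cache first, then a
    peer cache that holds the item and is still within its upload limit,
    and finally the content server as the fallback.

    `caches` maps a cache name to a dict with the items it `stores`, the
    bytes it has `uploaded` so far and its free-upload `limit`."""
    own = caches[requester]
    if item in own["stores"]:
        return requester                      # served locally, zero cost
    for name, cache in caches.items():
        if name == requester:
            continue
        if item in cache["stores"] and cache["uploaded"] < cache["limit"]:
            return name                       # peer cache with spare upload
    return server                             # fall back to the origin server

caches = {
    "A": {"stores": {"movie1"}, "uploaded": 0, "limit": 100},
    "B": {"stores": {"movie2"}, "uploaded": 100, "limit": 100},
    "C": {"stores": set(), "uploaded": 0, "limit": 100},
}
print(route_request("movie1", "C", caches))  # peer A holds it with spare upload
print(route_request("movie2", "C", caches))  # B exhausted its limit, so the server serves
```

Note how exceeding the upload limit at cache B diverts the request to the server rather than charging user B, which mirrors the coordinator's behaviour described for user B 101 above.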
20. The content server 105 obtains the prediction of requests at each content cache 108 and computes the set of content to be placed at each content cache 108 in order to service the requests. The set of content can be placed at each content cache 108 in such a manner as to reduce the cost of content delivery to each content cache 108. The set of content is computed by taking into consideration the resources available to each content cache 108 and by trying to reduce the load on each web-site. The placement of content can be computed by considering the network to have n content caches 108, C = {c_1, c_2, ..., c_n}, with the content server 105 being c_0. For a content cache 108 c_i, U_i and D_i specify the limits on data that can be uploaded and downloaded, respectively, at the content cache 108 without incurring any extra charge for the upload/download. If the upload/download at the content cache 108 goes beyond U_i/D_i, then the user 101 may have to pay extra charges for the upload/download. For any data upload beyond the allowed limit, let p_i^u denote the price per byte paid by the user 101 for exceeding the upload limit, and for any data download beyond the allowed limit, let p_i^d denote the price per byte paid by the user 101 for exceeding the download limit. Let S_i denote the amount of storage at each content cache 108 c_i. Let there be m objects, with the set of m data objects O = {o_1, o_2, ..., o_m}, and assume knowledge of r_ij, the aggregated number of requests for object o_j at cache c_i. The upload limit on a content cache 108 is determined by the broadband plan at each node and determines the amount of data that can be served by a particular content cache 108 to other nodes. The storage space is kept greater than the upload limit, that is S_i > U_i, with the additional storage quantity to be used for serving repetitive local requests. We simply assume that the storage is proportional to the upload limit, and therefore S_i = α·U_i, where α is a constant across all content caches 108 in the network. The set of objects, O = {o_1, o_2, ..., o_m}, are the content requests at the content caches 108 at periodic intervals of time.
The notation o_j refers to both the object and its size. The periodic interval, for example, could be of the order of a day or a few hours. r_ij is the aggregated number of requests for object o_j at cache c_i. Any request for o_j at c_i will incur zero cost if the request is served locally. If the request is not served locally, then o_j can be fetched from another cache which has a copy of o_j. If o_j is fetched from another cache, cache c_k, then both d_i and u_k would be increased by the amount o_j. If no content cache 108 has a copy of o_j, then o_j will be downloaded from the content server 105. When all the requests have been served, any content cache 108, c_i, incurs a cost of p_i^d·max(0, d_i − D_i) + p_i^u·max(0, u_i − U_i). Here d_i is the amount of content downloaded into cache c_i and u_i is the amount of content uploaded from cache c_i. The overall cost of content delivery is thus Σ_i [p_i^d·max(0, d_i − D_i) + p_i^u·max(0, u_i − U_i)]. The content server 105, c_0, does not download anything, as the content server 105 does not have any local requests to serve, and consequently only the values of u_0 and p_0^u are used while computing the set of content to be placed at each content cache 108. The content server 105 typically has a high data-rate connection and hence typically has much cheaper per-byte transmission costs as compared to the content caches 108, that is p_0^u < p_i^u. Once a node's upload limit is reached, instead of further using the back-haul link of the node to serve another content cache 108, it will be cheaper to serve the request directly from the content server 105. In order to reduce the load on the content server 105, only the spare upload and download capacities of the content caches 108 are used. Once the upload and download capacities of the content caches 108 have been exhausted, further requests for content would be serviced from the content server 105. The problem of computing the set of content to be placed at each content cache 108 can be formulated as an Integer Linear Program (ILP) and can be written in equation form as

minimize   Σ_j Σ_i o_j·y_0ij·p_0^u + Σ_i p_i^d·max(0, d_i − D_i)        --------------equation 1

subject to
   Σ_j Σ_{i≠k} o_j·y_kij ≤ U_k        for every cache c_k
   Σ_j o_j·x_kj ≤ S_k                 for every cache c_k
   Σ_k y_kij = r_ij                   for every i, j
   y_kij ≤ r_ij·x_kj                  for every i, j, k
   x_kj ∈ {0, 1}

where d_i = Σ_j Σ_{k≠i} o_j·y_kij.

Here x_kj is an indicator variable denoting whether the object o_j is stored at cache c_k, and y_kij is the number of requests for object o_j at cache c_i that are served from cache c_k. The first constraint expresses the upload limits, the second constraint expresses the storage limits, the third constraint specifies that all the requests must be satisfied, while the fourth makes sure that content is only served from a content cache 108 that stores the required content. The cost function, represented by equation 1, has two terms: the first term is the cost of serving requests from the content server 105, c_0, and the second term is the cost paid by the content caches 108 when their data download crosses the allowed limit. The problem of computing the set of content to be placed at each content cache 108 can also be referred to as the data placement problem and can be solved by running an algorithm. The data placement problem can be solved by running the Dataplacement algorithm or the Greedy algorithm. The algorithms solve the data placement problem and at the same time reduce the cost of content delivery to users 101.
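The per-cache overage cost described above, where each cache pays only for uploads and downloads beyond its free limits, can be illustrated with a small sketch; the limits, prices and traffic figures below are illustrative values, not taken from the specification.

```python
def delivery_cost(uploads, downloads, U, D, pu, pd):
    """Overage portion of the content delivery cost: cache i pays for
    the uploads and downloads exceeding its free limits U[i] and D[i],
    at per-byte prices pu[i] and pd[i]; traffic within the limits is free."""
    total = 0.0
    for i in range(len(uploads)):
        total += pu[i] * max(0, uploads[i] - U[i])
        total += pd[i] * max(0, downloads[i] - D[i])
    return total

# Two caches: cache 0 exceeds its upload limit by 5 units; cache 1 stays within limits.
print(delivery_cost(uploads=[15, 2], downloads=[3, 4],
                    U=[10, 10], D=[10, 10], pu=[1.0, 1.0], pd=[0.5, 0.5]))  # → 5.0
```

A placement that shifts those 5 excess units onto a cache with spare upload capacity (or onto the server, whose per-byte price p_0^u is lower) would drive this overage term toward zero, which is what the ILP's objective captures.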
21. FIG. 2 is a flowchart depicting a method for placing data in each content cache. In a network having content caches 108, content may be obtained from a content server 105 and cached at various content caches 108 in the network. The content that is cached in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Any request for content is served locally from a content cache 108, from the content server 105 or from any other content cache 108. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the value of the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request would be served from any other content cache 108 or from the content server 105, and there would be a corresponding increase in the value of the amount of content downloaded onto the requesting content cache 108. As long as the amount of content uploaded or downloaded is within the allowed limits, there would not be any extra charge incurred by the user 101 of the content cache 108. In order to keep the cost of content delivery to content caches 108 at a minimum value, the data placement problem is solved and content is placed in each content cache 108 in such a manner as to service requests from content caches 108 and minimize the cost incurred by users 101 while the requests are serviced. In order to place data in each content cache 108, the requirements are obtained (201). For example, the requirements may be the number of caches, the number of objects, the set of caches, the set of objects, the storage capacity at each cache, the upload and download limits of each cache, the price per unit of upload exceeding the allowed limit, the price per unit of download exceeding the allowed limit, the demand for an object at a particular cache and any other constraint.
The term cache refers to both the content server 105 and the content caches 108. The contents across the caches are then analyzed (202). For example, the contents may be analyzed to determine the popularity of contents across the network, determine usage statistics of contents and sharing and mobility patterns of users 101. Based on the analysis, the demand for content across the network is then predicted (203). The data placement problem is then solved (204) and content is placed at each content cache 108 in such a manner as to service the predicted future demand for content across the network. The data placement problem is solved keeping in mind the resources available to each content cache 108 and to reduce the cost incurred in delivering content to the caches. For example, the data placement problem may be solved by running the Dataplacement algorithm or the Greedy algorithm. Once the data placement problem is solved, the set of content is placed (205) at the caches as computed by the algorithm. The various actions in method 200 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 2 may be omitted.
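On a very small instance, the data placement problem of step 204 can be solved by brute force; the sketch below ignores upload limits and per-byte prices and only minimizes the load on the content server, so it illustrates the placement trade-off rather than the Dataplacement or Greedy algorithm of the specification.

```python
from itertools import combinations, product

def best_placement(requests, storage_slots):
    """Brute-force a toy data placement problem: every cache stores at
    most `storage_slots` unit-size objects, and any request for an
    object that no cache stores must be served by the content server.
    Returns the (placement, server_load) pair minimizing the server's
    load; `requests[i][j]` is the demand for object j at cache i."""
    n, m = len(requests), len(requests[0])
    objects = range(m)
    # Every subset of objects of size <= storage_slots is a valid cache content.
    choices = [c for k in range(storage_slots + 1)
               for c in combinations(objects, k)]
    best = None
    for placement in product(choices, repeat=n):
        cached = set()
        for p in placement:
            cached.update(p)
        # Demand for objects no cache stores falls back on the server.
        server_load = sum(requests[i][j]
                          for i in range(n) for j in objects
                          if j not in cached)
        if best is None or server_load < best[1]:
            best = (placement, server_load)
    return best

# Two caches, three objects; total demand per object is 7, 9 and 18.
requests = [[5, 1, 9],
            [2, 8, 9]]
placement, load = best_placement(requests, storage_slots=1)
print(placement, load)
```

With one slot per cache, the optimum caches the two most demanded objects and leaves only the demand for object 0 (7 requests) on the server; the real problem adds upload limits, object sizes and overage prices, which is why the specification resorts to the ILP and approximation algorithms instead of enumeration.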
22. FIG. 3 is a flowchart depicting the Dataplacement algorithm. In a network having content caches 108, content may be obtained from a content server 105 and cached at various content caches 108 in the network. The content that is cached in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Any request for content is served locally from a content cache 108, from the content server 105 or from any other content cache 108. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the value of the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request would be served from any other content cache 108 or from the content server 105, and there would be a corresponding increase in the value of the amount of content downloaded onto the requesting content cache 108. As long as the amount of content uploaded or downloaded is within the allowed limits, there would not be any extra charge incurred by the user 101 of the content cache 108. In order to keep the cost of content delivery to content caches 108 at a minimum value, the data placement problem is solved and content is placed in each content cache 108 in such a manner as to service requests from content caches 108 and minimize the cost incurred by users 101 while the requests are serviced. The data placement problem can be solved by running the algorithm Dataplacement. In order to run the algorithm Dataplacement, the requirements are first obtained (301) and then the download limits are overcome (302). To overcome the download limits, for any content cache 108 c_i, let T_i be the set of objects that has the maximum total request (Σ_{o_j ∈ T_i} r_ij) at c_i. Storing the items in T_i minimizes the amount of requests going out of c_i, which may at most be the number of requests going out of c_i in the optimal solution of the ILP.
The storage space at c_i is doubled and the extra storage space is used to store the item set T_i. The second term in the cost function, equation 1, would then be bounded by the corresponding term in the optimum solution of the ILP. Thus the Dataplacement algorithm seeks to minimize the first term in the cost function, where the cost function is represented by equation 1. In a variant of the ILP, any copy of an item serves at most Δ requests. The solution to this problem has identical cost to the original problem. If any solution to the problem stays within the storage limits, the solution is also guaranteed to be within the upload limits as well, or the upload increases by the same factor as the storage. Based on these observations the ILP can be expressed in equation form as

minimize   Σ_j Σ_i o_j·y_0ij·p_0^u + Σ_i p_i^d·max(0, d_i − D_i)

subject to
   Σ_j o_j·x_kj ≤ 2·S_k               for every cache c_k
   Σ_k y_kij = r_ij                   for every i, j
   y_kij ≤ min(r_ij, Δ)·x_kj          for every i, j, k
   x_kj ∈ {0, 1, 2, ...}
23. The term min(r_ij, Δ) is the constraint that specifies that each copy of an item can serve only Δ requests. The last constraint, x_kj ∈ {0, 1, 2, ...}, gives the freedom to open multiple copies at a cache. The restriction of Δ is only on the requests served by a content cache 108 to other content caches 108 in the network. A single copy of an item can serve any number of local requests, but no more than Δ external requests. For example, if a content cache 108 has a copy of a movie, then the user 101 associated with the content cache 108 can view the movie any number of times from the content cache 108; however, the movie can be copied onto other content caches 108 in the network only Δ number of times.
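The effect of the per-copy limit on externally served requests (written Δ here) can be illustrated with a short sketch; the request counts and the value of the limit are illustrative numbers only.

```python
import math

def servable_by_one_copy(local_requests, external_requests, delta):
    """A single copy of an item serves any number of local requests but
    at most `delta` external ones; the remaining external requests must
    come from other copies or from the content server."""
    return local_requests + min(external_requests, delta)

def copies_needed(external_requests, delta):
    """Copies required across the network so that all external requests
    can be served when each copy serves at most `delta` of them."""
    return math.ceil(external_requests / delta)

print(servable_by_one_copy(local_requests=50, external_requests=12, delta=5))  # → 55
print(copies_needed(external_requests=12, delta=5))                            # → 3
```

The `min(external_requests, delta)` term plays the same role as the min(r_ij, Δ) constraint in the variant ILP: local viewing is unlimited, but serving peers is capped per copy.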
24. A rounding technique is performed (303) by using the available constraints and parameters to obtain a solution for the ILP within a bounded approximation factor. The rounding technique may be performed by using the algorithm Round. After performing the rounding procedure, the set of content to be placed at each content cache 108 would be obtained (304); that is, the values for x_kj would be obtained. The set of content is then placed (305) in the specific content caches 108 as computed by the algorithm. The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
25. FIG. 4 is a flowchart depicting the algorithm Round. In a network having content caches 108, content may be obtained from a content server 105 and cached at various content caches 108 in the network. The content that is cached in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Any request for content is served locally from the content cache 108, from the content server 105 or from any other content cache 108 in the network. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the value of the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request would be served from any other content cache 108 or from the content server 105, and there would be a corresponding increase in the value of the amount of content downloaded onto the requesting content cache 108. As long as the amount of content uploaded or downloaded is within the allowed limits, there would not be any extra charge incurred by the user 101 of the content cache 108. In order to keep the cost of content delivery to content caches 108 at a minimum value, the data placement problem is solved and content is placed in each content cache 108 in such a manner as to service requests from content caches 108 and minimize the cost incurred by users 101 while the requests are serviced. The data placement problem can be solved by running the Dataplacement algorithm along with the algorithm Round. In order to run the algorithm Round, the requirements are first obtained (401). In the algorithm Round, the Generalized Assignment Problem (GAP) can be solved to find a feasible solution to the following equations

minimize   Σ_k Σ_j c_kj·x_kj

subject to
   Σ_j o_j·x_kj ≤ S_k        for every cache c_k
   Σ_k x_kj = 1              for every object o_j
   x_kj ≥ 0
26. Given any fractional feasible solution, an integral solution can be found which increases the values by at most 1. The above rounding algorithm takes the set of parameters for GAP and a fractional feasible solution, and returns the rounded integral solution. Consider the fractional version of the ILP that relaxes the last, integrality constraints. The requests satisfied locally do not add to the cost or the upload capacities, but due to reassignment of objects to caches the requests satisfied locally would also start contributing to the cost and capacities of the caches. Thus, it is ensured that the items serving large local demands are not moved to other caches. If large copies of any object are rounded to the closest bigger integer, the extra requests are also bounded. To round the fractional solution, the fractional copy values are first rounded (402). If it is determined (403) that there is some cache ‘i’ holding a large fractional copy, then that copy is assigned (404) an integral value. If it is determined (406) that the upload limit of the cache is exceeded, then every demand contributing to its upload is scaled (408) by a corresponding factor. If it is determined (406) that the upload limit is not exceeded, then all the requests coming to closed copies are re-routed (4011) to any open copy of the object. If it is determined (403) that there is no cache ‘i’ holding a large fractional copy, then the integral placements of all objects are found (405) by solving the GAP, taking into account the amount of storage occupied by the objects. All the requests coming to closed copies are re-routed (407) to the copies opened by GAP. All the remaining demands are proportionally assigned (409) to the caches that are already serving the rest of the demands for the object. For each item, the GAP is then solved to obtain (4010) the rounded values. The various actions in method 400 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
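The exact Round procedure relies on symbols that were lost from the original text, so the following is only a simplified, illustrative sketch of the general idea under stated assumptions: open integral copies of one object where the fractional solution is large, and re-route the demand served by the remaining fractional copies to the opened copies. The function name, the threshold value and the even split of orphaned demand are assumptions for illustration, not the patent's exact procedure.

```python
# Illustrative threshold-rounding sketch (not the patent's exact Round
# algorithm): all names and the 0.5 threshold are assumptions.

def threshold_round(y, demand, threshold=0.5):
    """Round fractional copy indicators y[i] of one object.

    y[i]      -- fractional amount of the object placed at cache i
    demand[i] -- requests for the object currently routed to cache i
    Returns (open_caches, served) where served maps each open cache
    to the demand it handles after rounding.
    """
    opened = [i for i in y if y[i] >= threshold]
    if not opened:
        # Keep at least one copy: the largest fractional value.
        opened = [max(y, key=y.get)]
    served = {i: demand.get(i, 0) for i in opened}
    # Demand at closed copies is re-routed to the open copies; the
    # patent assigns it proportionally, here it is split evenly.
    orphan = sum(d for i, d in demand.items() if i not in opened)
    for i in opened:
        served[i] += orphan / len(opened)
    return opened, served
```

For example, a cache holding 0.9 of a copy would be rounded to a full copy and absorb the demand previously served by a cache holding only 0.1 of a copy.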
27. FIG. 5 is a flowchart depicting the Greedy algorithm. In a network having content caches 108, content may be obtained from a content server 105 and cached at various content caches 108 in the network. The content that is cached in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Any request for content is served either locally from a content cache 108, from the content server 105 or from any other content cache 108. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request would be served from any other content cache 108 or from the content server 105 and there would be a corresponding increase in the amount of content downloaded onto the requesting content cache 108. As long as the amount of content uploaded or downloaded is within allowed limits, there would not be any extra charge incurred by the user 101 of the content cache 108. In order to keep the cost of content delivery to content caches 108 at a minimum value, the data placement problem is solved and content is placed in each content cache 108 in such a manner as to service requests from content caches 108 and minimize the cost incurred by users 101 while the requests are serviced. The data placement problem can be solved by running the algorithm Greedy. In order to run the algorithm Greedy, the requirements are first obtained (501). A ‘utility’ value is assigned (502) to placements of objects to cache locations. If an object is placed at a cache, then all of the local requests for the object are served for free and there is no increase in the amount of content downloaded onto the cache. If the cache is within its upload limit, then the cache can potentially serve the demands for the object at other locations as well.
The utility of a placement of an object at a cache consists of two parts, namely a local part and a cooperative part. The local part comes from serving the requests locally from the cache. Consider the total demand for the object at other caches and the total upload limit of the cache. By placing the object at the cache, at least the smaller of these two amounts of demand can be served from the cache. The requests served from one cache to other caches contribute to the cooperative part of the utility. At each iteration a cache-object pair is selected and checked (503) to determine if the cache-object pair gives the maximum value of utility. If the selected cache-object pair does not have the maximum value of utility, then a different cache-object pair is taken (504). Once a cache-object pair with the maximum value of utility is obtained, a copy of the object is placed (505) at the cache. The local copy of the object placed at the cache serves (506) all local requests at the cache. Requests for the object from other caches would be served as far as possible from the cache as long as the upload limit of the cache is within allowed limits. The steps are repeated until there are no more cache-object pairs remaining to be considered. If there are no more cache-object pairs to be considered (507), then the iteration is stopped (508). The iterations solve the data placement problem and at the same time minimize the cost of content delivery to caches. The various actions in method 500 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 5 may be omitted.
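The greedy iteration above can be sketched as follows. This is a minimal illustration, assuming simple dictionaries for demand, upload limits, storage capacity and object sizes; all names are assumptions introduced for the sketch, not symbols from the original specification.

```python
# Sketch of the Greedy placement idea: repeatedly place the
# cache-object pair of maximum utility (local demand plus the
# cooperative demand the cache can upload), subject to storage.
# All parameter names are illustrative assumptions.

def greedy_placement(caches, objects, demand, upload_limit, capacity, size):
    """demand[(i, j)]   -- predicted requests for object j at cache i
    upload_limit[i]  -- upload allowance of cache i
    capacity[i]      -- storage available at cache i
    size[j]          -- storage occupied by object j
    Returns the set of (cache, object) placements chosen."""
    placement = set()
    spare_upload = dict(upload_limit)
    free_space = dict(capacity)
    candidates = {(i, j) for i in caches for j in objects}

    def utility(i, j):
        # Local part: requests at i served for free once j is local.
        local = demand.get((i, j), 0)
        # Cooperative part: demand for j elsewhere that i can upload.
        remote = sum(demand.get((k, j), 0) for k in caches if k != i)
        return local + min(spare_upload[i], remote)

    while candidates:
        i, j = max(candidates, key=lambda p: utility(*p))
        candidates.discard((i, j))
        if utility(i, j) <= 0 or size[j] > free_space[i]:
            continue  # pair gives no benefit or does not fit
        placement.add((i, j))
        free_space[i] -= size[j]
        # The placed copy serves remote demand up to the upload limit.
        remote = sum(demand.get((k, j), 0) for k in caches if k != i)
        spare_upload[i] = max(0, spare_upload[i] - remote)
    return placement
```

A cache with a high upload allowance thus attracts copies of popular objects, since its cooperative utility term is larger.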
28. FIG. 6 is a flowchart depicting a method for clustering caches. In a network having content caches 108, content may be obtained from a content server 105 and cached at various content caches 108 in the network. The content that is cached in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Any request for content is served either locally from a content cache 108, from the content server 105 or from any other content cache 108. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request would be served from any other content cache 108 or from the content server 105 and there would be a corresponding increase in the amount of content downloaded onto the requesting content cache 108. As long as the amount of content uploaded or downloaded is within allowed limits, there would not be any extra charge incurred by the user 101 of the content cache 108. In order to keep the cost of content delivery to content caches 108 at a minimum value, the data placement problem is solved and content is placed in each content cache 108 in such a manner as to service requests from content caches 108 and minimize the cost incurred by users 101 while the requests are serviced. The computation of the ILP to solve the data placement problem can be sped up by clustering content caches 108 before solving the ILP. Content caches 108 are grouped into clusters, the ILP is solved for each cluster separately and the rounding procedure is applied on the combined solution of the clusters. While clustering the content caches 108, attempts are made to maximize the intra-cluster cooperation by grouping content caches 108 based on the similarity of the items requested by the caches.
For example, if two content caches 108 request the same content, then both the content caches 108 are placed in the same cluster. If content caches 108 with similar requests are put into the same cluster, the content caches 108 will benefit more from cooperating with each other in the cluster. For example, content caches 108 may cooperate with each other in serving common requests.
29. In order to determine the clusters, a pair of content caches 108 is taken (601) in each iteration. The pair of content caches 108 is checked (602) to determine if they have maximum similarity. For example, the similarity may be in the requests made by the two content caches 108, and the similarity between two content caches 108 can be defined as the ‘Jaccard’s coefficient’ of their request sets: if the request sets of the two content caches 108 are denoted R_i and R_j, then sim(i, j) = |R_i ∩ R_j| / |R_i ∪ R_j|.
A parameter ‘k’ is considered while clustering the content caches 108, where ‘k’ denotes the maximum size of any cluster. The pair of content caches 108 having the maximum value of similarity is grouped (604) into a cluster. If there are any more content caches 108 to be considered (603), then the steps are performed for the remaining content caches 108. After all the content caches 108 have been considered and grouped into clusters, the ILP is solved (605) for each cluster. The rounding procedure is then applied (606) on the combined solution of the clusters. The various actions in method 600 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 6 may be omitted.
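The similarity-driven grouping can be sketched as below, a minimal illustration assuming each cache is described by its set of requested objects. The function names and the greedy pairwise-merge strategy are assumptions for the sketch; the patent only specifies Jaccard similarity and the size cap ‘k’.

```python
# Sketch of clustering caches by request-set similarity, capped at k
# caches per cluster. Names and merge strategy are illustrative.

def jaccard(a, b):
    """Jaccard's coefficient of two request sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def cluster_caches(requests, k):
    """requests -- dict mapping cache id to its set of requested items
    k        -- maximum number of caches allowed in one cluster
    Returns a list of clusters (sets of cache ids)."""
    # Each cluster carries its member caches and their pooled requests.
    clusters = [({c}, set(r)) for c, r in requests.items()]
    while True:
        best, best_sim = None, 0.0
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                if len(clusters[x][0]) + len(clusters[y][0]) > k:
                    continue  # merge would exceed the size cap
                s = jaccard(clusters[x][1], clusters[y][1])
                if s > best_sim:
                    best, best_sim = (x, y), s
        if best is None:
            break  # no mergeable pair shares any request
        x, y = best
        merged = (clusters[x][0] | clusters[y][0],
                  clusters[x][1] | clusters[y][1])
        clusters = [c for i, c in enumerate(clusters) if i not in (x, y)]
        clusters.append(merged)
    return [c for c, _ in clusters]
```

Caches requesting the same items end up in one cluster, so the per-cluster ILP captures most of the cooperative savings.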
30. FIG. 7 is a flowchart depicting a method for initial placement of content. In a network having content caches 108, content may be obtained from a content server 105 and cached at various content caches 108 in the network. The content that is cached in a content cache 108 can be downloaded by any user 101 in the network at a later point in time. Any request for content is served either locally from a content cache 108, from the content server 105 or from any other content cache 108. If the content is present in the local content cache 108, then the request would be served from the content cache 108 and there would not be any increase in the amount of content downloaded onto the content cache 108. If the content is not present in the local content cache 108, then the request would be served from any other content cache 108 or from the content server 105 and there would be a corresponding increase in the amount of content downloaded onto the requesting content cache 108. As long as the amount of content uploaded or downloaded is within allowed limits, there would not be any extra charge incurred by the user 101 of the content cache 108. In order to keep the cost of content delivery to content caches 108 at a minimum value, the data placement problem is solved and content is placed in each content cache 108 in such a manner as to service requests from content caches 108 and minimize the cost incurred by users 101 while the requests are serviced. However, the initial placement of content in content caches 108 also incurs cost. The cost incurred during the initial placement of content can be minimized by running algorithm Initialplacement. To run algorithm Initialplacement, the requirements are obtained (701). For an object, consider the set of caches where it needs to be placed. A single copy of the object is downloaded (702) from the content server 105.
A check is made to determine (703) the content cache 108 having the maximum upload limit, and the object is stored (705) in the content cache 108 having the maximum upload limit. The check is performed until all content caches 108 have been considered (704). The object placed in the content cache 108 can be used to serve requests for the object from other content caches 108. The remaining caches where the object needs to be placed are considered one at a time, and the content cache 108 that contains the object and has the maximum available upload limit is selected (706). On determining the content cache 108 that contains the object and has the maximum available upload limit, the required object is downloaded (708) from that content cache 108. If there is no content cache 108 containing the object and having available upload limit, then the object is downloaded (707) from the content server 105. The various actions in method 700 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 7 may be omitted.
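The seeding-and-fan-out procedure can be sketched as follows, a minimal illustration assuming unit-cost server downloads, free cache-to-cache transfers and per-cache upload allowances counted in copies. All names and the cost model are assumptions introduced for the sketch.

```python
# Sketch of the Initialplacement idea: download one copy from the
# server into the cache with the maximum upload limit, then serve
# the remaining target caches from caches that already hold the
# object, falling back to the server. Names/costs are illustrative.

def initial_placement(targets, upload_limit, server_cost=1, peer_cost=0):
    """targets      -- cache ids where the object must be placed
    upload_limit -- dict: copies each cache may upload
    Returns the total delivery cost of placing all copies."""
    if not targets:
        return 0
    spare = dict(upload_limit)
    # One copy is always fetched from the content server and seeded
    # at the cache with the maximum upload limit (steps 702/703/705).
    seed = max(targets, key=lambda c: spare.get(c, 0))
    have = {seed}
    cost = server_cost
    for c in targets:
        if c in have:
            continue
        # Prefer a holder with the most spare upload capacity (706/708);
        # fall back to the server when none remains (707).
        donors = [d for d in have if spare.get(d, 0) > 0]
        if donors:
            d = max(donors, key=lambda x: spare[x])
            spare[d] -= 1
            cost += peer_cost
        else:
            cost += server_cost
        have.add(c)
    return cost
```

With one well-provisioned seed cache, only a single server download is paid; with no upload capacity anywhere, every cache must fetch from the server.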
31. Embodiments described herein can also be conceived as a service provided by ISPs to web-sites. The service providers have the trust of users 101 and knowledge of the data usage of the users 101. The service provider can act as an intermediary between the web-sites and the users 101. The web-site may give the service provider information about files that can potentially be downloaded by users 101 and the usage statistics of the files. The service provider can incentivize users to set aside some local storage and spare network resources for co-operative caching. Through the co-operative approach, the ISP can offer a value added service to the web-site that allows the web-site to reduce content delivery costs. The ISP also gains by a reduction in its own operational costs. Another relevant scenario is content delivery over a network of “info-stations”, where users 101 can co-operatively share content and reduce the cost associated with content delivery over the “info-stations”. An “info-station” may be deployed across a city at road-side lamp-posts, cafes, bus-stations, shops, etc.
32. The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in Fig. 1 include blocks which can be at least one of a hardware device, or a combination of hardware device and software module.
33. The embodiment disclosed herein specifies a system and method for reducing the cost of content delivery through a managed network of co-operative caches in the network. Therefore, it is understood that the scope of the protection is extended to such a program and in addition to a computer readable means having a message therein, such computer readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment through or together with a software program written in e.g. Very high speed integrated circuit Hardware Description Language (VHDL), any other coding language, or implemented by one or more VHDL or several software modules being executed on at least one hardware device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof, e.g. one processor and two FPGAs. The device may also include means which could be e.g. hardware means like e.g. an ASIC, or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g. using a plurality of CPUs.
34. The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.

WE CLAIM :-
1. A method for enabling transfer of content in a network, said method comprising steps of:
a user requesting for content from a remote source;
a coordinating module checking if said content is present in any storage device in said network;
said coordinating module checking if constraints related to said storage device affect transfer of said content to said user;
transferring said content from said storage device to said user if constraints related to said storage device do not affect transfer of said content to said user.

2. The method, as claimed in claim 1, wherein said method further comprises the steps of:
obtaining said constraints related to said storage devices;
predicting requests for content made by users at periodic intervals of time; and
placing content in said storage device based on said prediction if constraints related to said storage device do not affect transfer of said content to said user.

3. The method, as claimed in claim 2, wherein said content to be placed is obtained by:
performing a rounding technique on parameters; and
obtaining set of content to be placed at each of said content caches.

4. The method, as claimed in claim 3, wherein said rounding is done by algorithm Round.

5. The method, as claimed in claim 3, wherein said parameters include at least one of:
storage limit of said storage devices;
upload limit of said storage devices;
download limit of said storage devices;
price per unit of upload exceeding said upload limit of said storage devices;
price per unit of download exceeding said download limit of said storage devices;
number of said storage devices;
said predicted requests at said storage devices; and
optimum fractional solution of Integer Linear Program (ILP).

6. The method, as claimed in claim 2, wherein said content to be placed is obtained by:
assigning a utility value to placement of said content to each of said storage devices; and
placing a copy of said content in said storage devices based on said utility value.

7. The method, as claimed in claim 1, wherein said method further comprises the steps of:
determining similarity between pairs of said storage devices;
combining said storage devices into a cluster based on said similarity; and
transferring said content between said storage devices in said cluster.

8. The method, as claimed in claim 1, wherein said method further comprises the steps of:
obtaining said constraints related to storage devices;
downloading a single copy of said content from a content server (105);
placing said downloaded copy of said content in said storage devices based on said constraints; and
serving said requests for said downloaded copy from said storage devices based on said constraints.

9. The method, as claimed in claim 1, wherein said constraints include at least one of:
storage capacity of said storage devices;
upload limit of said storage devices;
download limit of said storage devices;
price per unit of upload exceeding said upload limit of said storage devices;
price per unit of download exceeding said download limit of said storage devices; and
number of said local requests that can be served from a local copy of said content.

10. The method, as claimed in claim 1, wherein said storage device is at least one of:
a computer;
a mobile phone;
Wireless Fidelity (Wi-Fi) access points;
Worldwide Interoperability for Microwave Access (WiMAX) access points; and
public info-stations.

11. The method, as claimed in claim 1, wherein storage capacity at each of said storage devices is made at least double the memory size required by said content.

12. A Coordinator (103) module enabling transfer of content in a network, said Coordinator (103) module comprising at least one means adapted for:
allowing a user to request for content from a remote source;
checking if said content is present in any storage device in said network;
checking if constraints related to said storage device affect transfer of said content to said user;
transferring said content from said storage device to said user if constraints related to said storage device do not affect transfer of said content to said user.

13. The Coordinator (103) module, as claimed in claim 12, wherein said Coordinator (103) module is further adapted to:
obtain said constraints related to said storage devices;
predict requests for content made by users at periodic intervals of time; and
coordinate placing of content in said storage device based on said prediction if constraints related to said storage device do not affect transfer of said content to said user.

14. The Coordinator (103) module, as claimed in claim 12, wherein said Coordinator (103) module is further adapted to:
determine similarity between pairs of said storage devices;
combining said storage devices into a cluster based on said similarity; and
transferring said content between said storage devices in said cluster.

15. The Coordinator (103) module, as claimed in claim 12, wherein said Coordinator (103) module is further adapted to:
obtain said constraints of end-nodes;
download a single copy of said content from a content server (105);
coordinate placing of said downloaded copy of said content in said storage devices based on said constraints; and
serve said requests for said downloaded copy from said storage devices based on said constraints.

Dated 24th September, 2010


Mr. Nishant Kewalramani
Patent Agent
