Abstract: A system and a method to effectively enhance quality of service of adaptive stream data, the method comprising: Consolidating individual segments or files in a single encapsulated form; Maintaining a buffer and collocating consecutive media files or segments; Pre-fetching subsequent media files or segments from a remote location; and Maintaining a caching metadata on an intermediate node to access a media file or a segment.
FIELD OF DISCLOSURE
The present disclosure refers to a system and a method for enhancing Quality of Experience of multimedia streaming for HTTP adaptive streams based on network conditions.
BACKGROUND
Media streaming has become indispensable in the context of Internet content delivery and in many application areas such as distance learning, video-on-demand and entertainment. Currently, several streaming protocols are commonly used to deliver media content on the Internet, such as HTTP, RTSP/RTP, RTMP and MMS.
Media over HTTP is rapidly becoming one of the most commonly used approaches for media content distribution on the Internet. This is achieved using a mechanism for sending media data/files using the HTTP protocol through ports 80/8080. This approach has gained greater significance than other streaming methods because of its ability to reuse the existing Internet infrastructure. It also has the ability to use standard HTTP servers and standard HTTP caches to deliver the content. Also, media delivery to end devices is more predictable through HTTP, as firewalls do not require any special configuration to open up a delivery channel. Additionally, most Content Delivery Networks (CDNs) make use of HTTP to redirect, request and retrieve cached multimedia objects and to communicate with policy servers. The infrastructure thus detailed is then reused seamlessly.
Media delivery over HTTP is suitable to support on-demand streaming, to provide "anytime" access to media content, allowing the client to select and play back content on demand. Typically, the delivery follows a Progressive Download and Play method, a mode that allows the client to play a media file while some parts of the file are still downloading. The playing starts only after collecting the first part of the media file, which takes a few seconds. A majority of current HTTP streaming on the Internet is done using this method.
However, the problem encountered in HTTP is that there is no feedback channel to supply quality and runtime information back to the server. Internet connection speeds vary widely and depend on a plethora of conditions. For example, if a user connects to an ISP at 3 Mbps, that does not mean that 3 Mbps of bandwidth is available at all times. Bandwidth can vary, meaning that a 3-Mbps connection may decrease or increase based on current network conditions, causing video quality to fluctuate as well. HTTP Adaptive Streaming has gained thrust and significance as a way to arrest this problem by bringing run-time condition awareness and control to media delivery over HTTP.
HTTP Adaptive Streaming is thus a technology that allows adjusting the quality of a video delivered to an HTTP-based media client and lets the adjustment of stream quality delivery be driven by the media player, the media player taking stock of the impact of changing network and system conditions.
The espoused technology requires media to be segmented into multiple segments/chunks. Each segment/chunk is a file that contains a part of the video of a specific time period. Segments are fixed in length, with the exception of the last segment. Assuming each segment is 10 seconds long, the video player starts by loading and playing the first segment. As the first segment passes the 5-second mark, the player begins to buffer the second segment, and when the first segment completes, the player starts playing the second segment. Thus the process continues. If a user wants to jump to, for example, the 36th second in the video, the player starts playing the segment that begins at the 30th second. The video will start playing at the 30th second, which is not the exact location requested, but pretty close. Each segment of the video is only loaded when viewed or when the previous segment approaches its end. Thus it is evident that a user can skip to a certain part of a media, which lends it the semblance of streaming rather than of a progressive download, though both structurally and functionally it relies on progressive download.
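By way of a non-limiting illustration, the seek arithmetic described above can be sketched as follows (Python, with the 10-second segment length assumed purely for the example):

```python
# Minimal sketch of the seek-to-segment arithmetic described above.
# Assumes fixed-length segments (here 10 seconds); names are illustrative only.

SEGMENT_DURATION = 10  # seconds per segment

def segment_for_seek(seek_time_s: float) -> tuple:
    """Return (0-based segment index, segment start time) for a requested seek position."""
    index = int(seek_time_s // SEGMENT_DURATION)   # segment holding the requested instant
    start = index * SEGMENT_DURATION               # playback resumes from this boundary
    return index, start

# Example: seeking to the 36th second lands in the segment starting at the 30th second.
assert segment_for_seek(36) == (3, 30)
```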
Several companies have been developing private HTTP-based media delivery platforms to provide a high-quality and adaptive viewing experience to their customers. Microsoft has implemented its Smooth Streaming technology, which is a web-based adaptive media content delivery approach that uses standard HTTP [MS-IIS]. Adobe HTTP Dynamic Streaming is a new Adobe-defined delivery method for enabling on-demand and live adaptive bit-rate video streaming over regular HTTP connections [Adobe]. Adobe HTTP Dynamic Streaming packages media files into fragments that Flash Player clients can access instantly without downloading the entire file. Apple HTTP Live Streaming [Apple] allows sending live or prerecorded audio and video to an iPhone or other devices, such as desktop computers, using an ordinary web server, with support for adaptive bit-rate.
While the specified roles of Adaptive Streaming Client and Server provide an improved quality of experience compared to traditional streaming, a round of pre-emptive enhancement of user experience is missed out due to the insignificant involvement of intermediate serving nodes in the stream delivery. For example, though HTTP Adaptive Streams utilize the existing cache infrastructure, the current cache solutions do not understand HTTP Adaptive Streams and the interrelationship between segments of the same stream; instead, the solutions treat these as ordinary HTTP objects. A high-level example: if a cache is capable of storing 100,000 objects of 5-minute duration, and these objects are further encoded as adaptive streams of two-second segments, the same cache will now need to manage 150 × 100,000 segments for caching the same media. Considering 4 different quality levels for each segment, the cache will perhaps need to manage up to 60,000,000 files. Applying caching policies to provide a better quality of experience to the users becomes enormously difficult, as no single representation exists for the particular content. Instead, the content gets stored as a large number of individual files without any inter-relationship established. QoE-friendly caching policies turn significantly less effective due to this incoherent identification and improper maintenance of content identity. Moreover, as the relationship amongst the streamed files remains un-established, the existing caching methods cannot consider designing an effective storage mechanism for each of the files present in a particular HTTP Adaptive Stream. Also, the absence of pre-fetching methods suitable for interworking with Adaptive Streams restricts enhancing the QoE from the intermediate serving node.
Prior arts in associated areas suggest storing the embedded objects of a webpage in different files collocated on the disk. These approaches are effective for normal pages, but are not equipped to recognize adaptive streaming media and not designed to bring additional QoE enhancement measures from the position of intermediate serving nodes. Prior arts for caching processes on streaming media do not perform optimally for Adaptive Streams, as the nature of streaming has been made different in adaptive streaming technologies. The prior arts defining the Adaptive Streaming technologies focus on defining roles and responsibilities of clients and servers and exclude specifying the improvisations that can be brought in for the intermediating nodes of the delivery network. Moreover, some existing caching-linked procedures like content pre-fetching, usage measurement, file management and policy-based controlling of caches pose challenges when the adaptive stream media are cached without any special handling. The administrators of delivery network infrastructure have no reliable information to automatically relate the individual segment files of Adaptive Streams to the particular media, and hence cannot make decisions on specific content and measure its usage. This has posed a glaring challenge today. Controlling and management policies cannot be predictably enforced over the media content, as the existing mechanism treats each segment as an individual web object/file and does not take care of the inter-relationships between them. Existing prior arts describing pre-fetching methods also do not perform optimally for Adaptive Streams, as the methods are not appropriately extended to cater to such streams.
This particular invention addresses these issues and describes the incremental methods that can be layered over existing mechanisms to serve Adaptive Streams optimally in the delivery network and bring enhanced quality of experience for the users.
SUMMARY
The Quality of Experience of multimedia streams has been observed to deteriorate as network conditions have grown more complex. Though HTTP adaptive streaming tries to redress this by breaking down the streams into multiple files, in a heavily congested network the problem still persists, as has been expounded upon in the background.
The present disclosure teaches various embodiments of the invention wherein a set of methods to achieve enhanced quality of experience for HTTP adaptive streaming media is detailed, while the said methods are implemented in intermediate serving system(s). Typically, an HTTP adaptive stream will comprise multiple files, each representing a segment of the particular media. An encapsulation over these files, and establishing the caching method over the singular object that represents the adaptive stream media, introduces an additional level of manageability while caching the adaptive stream contents. The encapsulation and persistence technique invented here leverages the particulars of the Adaptive Stream's design and establishes better accessibility and retrieval capability, along with the ability to cater to the segments/files while these are distributed in local and remote systems for the same media.
The encapsulation approach assists in maintaining the identity of the content, and facilitates the implementation of caching policies over the identified content, targeted at improving user experiences.
Further, methods to monitor and measure the media usage and network bandwidth consumption enable setting up a pre-fetching mechanism suitable for adaptive streams. The pre-fetching method relies on sequential segmentation of Adaptive Stream media, observes the quality of the currently serving segments and the current bandwidth consumption on the serving channel, and establishes the pre-fetching decisions based on the same.
This invention describes the incremental methods that can be adopted over such existing modes of working of intermediate serving systems and can be architected to bring QoE enhancement measures for HTTP Adaptive Streams.
The persistence methods for better accessibility, along with methods for content pre-fetching, improve delivery capability in the network for HTTP Adaptive Streams, resulting in improved quality of experience for the particular streams.
BRIEF DESCRIPTION OF DRAWINGS:
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.
Figure 1: illustrates a diagram representation for network deployment.
Figure 2: illustrates a block diagram representation for an internal cache meta-data structure.
Figure 3: illustrates a block diagram for segment header structure of individual segments.
Figure 4: illustrates a file encapsulation structure in persistent store.
Figure 5: illustrates an internal buffer structure before delayed writing into persistent store.
Figure 6: illustrates working of file encapsulation over distributed environment.
Figure 7: illustrates a flow diagram for sorting the adaptive stream segments in cache of intermediate serving system.
Figure 8: illustrates a flow diagrammatic representation for pre-fetching adaptive stream segments at intermediate serving stations.
DETAILED DESCRIPTION:
The following discussion provides a brief, general description of a suitable environment in which various embodiments of the present disclosure can be implemented. The aspects and embodiments are described in the general context of computer-executable mechanisms such as routines executed by a general-purpose computer, e.g. a server or personal computer. The embodiments described herein can be practiced with other system configurations, including internet appliances, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers and the like. The embodiments can be embodied in a special-purpose computer or data processor that is specifically programmed, configured or constructed to perform one or more of the computer-executable mechanisms explained in detail below.
Figure 1 illustrates a simplified view of a representative network deployment. The consumer devices (101) are shown connected with the interconnected layers of proxies/gateways (102). Typically (102) represents the area of interconnected serving nodes that also act as intermediate content ingestion points and are equipped with cache servers. Actual content preparation and ingestion occurs from the area of content servers (103) in the figure. A transit network is configured to transparently forward the content towards (102). The incremental methods discussed in this particular invention will be positioned in (102), which represents the access-side, edge, aggregation and core network divisions of the delivering network.
In this defined positioning, the caching mechanisms are implemented in either localized or distributed mode. Distributed caching deployment opens up possibilities of different collaborative establishments, leading to concurrent and effective utilization of system and network resources among the cooperating cache servers. The collaboration may further be established in a peer-to-peer, hierarchical, or hybrid mode.
To offer the desired benefits for Adaptive Streams, the system must identify whether the requested data represents an adaptive stream. There are different ways to determine if the video distribution is utilizing HTTP Adaptive Streaming technology. These include recognizing it from the URL or from the HTTP headers (User-Agent/Content-Type) in the HTTP request/response, parsing the adaptive-stream-specific Manifest file, or using Deep Packet Inspection to scrutinize the media being delivered. The system can use any of these ways to identify the media as being distributed through the adaptive stream flow.
Once the adaptive stream media is identified, the segments need to be associated with the help of the information present in manifest files and the requesting URL patterns. Depending on the implementation of the specific Adaptive Streaming technology by the particular vendor, the system gathers information on the particular media, details on the number of segments, the duration of segments, and the supported quality levels for the particular media. These pieces of information are exchanged between the media client and the content server during handshaking, and keeping note of these in the intermediate serving node helps the caching design become more aware of the adaptive stream. Most of the information is generally present in the Manifest file, and a parsing method extracts the data required. However, since the Manifest file is proprietary to the vendor's implementation of Adaptive Streaming, the parsing method will also need to maintain proprietary, implementation-specific supporting versions. Once Manifest file parsing is accomplished, the subsequent requests for different segments of the adaptive stream media arrive at the intermediate node. The system needs to identify these subsequent URLs and associate them with the original media. While parsing the URLs, the system breaks the complete request into a base media request and subsequent parts for incremented fragments and their specific quality levels. The association gets established by mapping the subsequent URLs to the base URL, and the progression of the requested segments is tracked by detecting the increments. Upon receiving the response, the segment data is mapped to the corresponding segment identifier.
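By way of a non-limiting illustration only, the following Python sketch shows one way such an association could be performed; the URL pattern is hypothetical, since actual segment naming is proprietary to each vendor's Adaptive Streaming implementation:

```python
import re
from urllib.parse import urlparse

# Hypothetical naming convention of the form
#   http://host/path/<media>_<quality>k_<segment>.<ext>
# Real vendors use their own schemes, so this regular expression is illustrative only.
SEGMENT_RE = re.compile(r"^(?P<base>.+?)_(?P<quality>\d+)k_(?P<segment>\d+)\.\w+$")

def associate_segment(url: str):
    """Split a segment request into (base media URL, quality level, segment index)."""
    parsed = urlparse(url)
    match = SEGMENT_RE.match(parsed.path)
    if match is None:
        return None  # not recognized as an adaptive-stream segment request
    base_url = f"{parsed.scheme}://{parsed.netloc}{match.group('base')}"
    return base_url, int(match.group("quality")), int(match.group("segment"))

# Example: the request maps back to the base media plus the incremented fragment number.
print(associate_segment("http://cdn.example.com/movies/clip_800k_0042.ts"))
```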
Figure 2 shows a representative design of a meta-data structure that can enclose the Adaptive Stream media information along with its association with segments of different quality levels.
201 shows a hash map structure that stores a unique media identifier and associates it with the memory location of the stored media. This is typically the implementation that every cache server maintains, and the precise design of this structure will vary from one implementation to another. In this case, a representative design would be to have the base URL as the content identifier in a hash map. However, the values of the map for the base URL of an Adaptive Stream will need to point to a further extension of a meta-data structure, unlike general implementations for all other contents. 202 shows a representative design of the extended meta-data structure, required to maintain the inter-segment relationship and containment of Adaptive Stream media segments. The extended meta-data structure holds the segment identifier and points to the list of different quality levels of the same segment. Primary considerations for designing the extended meta-data structure are that the number of segments can be large for many Adaptive Stream media, and each of the segments requires maintaining an association with its previous and next segment. Thus, the structure must be able to scale up, should provide easy previous-to-next navigational facility and should also provide faster searching time. Keeping these considerations in mind, the extended meta-data structure is modeled as an AVL tree in the representative design. Each of the elements of the AVL tree contains a pointer to a list of different quality levels of the same segment, as can be seen in the '203' marked area of the figure. Considering the number of quality levels supported in Adaptive Stream media is generally small (around 6 to 10 quality levels in most existing vendor-specific implementations), a list is maintained to contain the segments of different quality levels in the extended meta-data structure. Each of the elements of this list contains the Segment Header Structure, and also contains a pointer to an information log that maintains the usage measurement details for that particular segment of the specific quality level.
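A simplified, non-limiting sketch of the structures of Figure 2 follows (Python). The description models the extended meta-data as an AVL tree; for brevity the sketch substitutes an ordinary dictionary with sorted iteration, and all names are illustrative:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class QualityEntry:
    """One quality level of one segment (an element of the list marked 203)."""
    segment_header: "SegmentHeader"                         # sketched after Figure 3 below
    usage_log: List[float] = field(default_factory=list)    # per-serve timestamps, illustrative

@dataclass
class MediaMetadata:
    """Extended meta-data structure (202) for one Adaptive Stream media."""
    base_url: str
    # Keyed by segment index; the description uses an AVL tree here, replaced by a
    # plain dict plus sorted iteration purely to keep the sketch short.
    segments: Dict[int, List[QualityEntry]] = field(default_factory=dict)

    def neighbours(self, index: int):
        """Previous/next segment indices, mirroring the prev/next navigation requirement."""
        ordered = sorted(self.segments)
        pos = ordered.index(index)
        prev_i = ordered[pos - 1] if pos > 0 else None
        next_i = ordered[pos + 1] if pos + 1 < len(ordered) else None
        return prev_i, next_i

# Top-level cache map (201): unique media identifier -> extended meta-data structure.
cache_index: Dict[str, MediaMetadata] = {}
```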
Figure 3 explains the attributes required in the Segment Header Structure to establish complete interoperability with local and distributed modes of caching architecture. The segment header structure is uniquely identified using the segment URL. A flag is kept to indicate whether the segment is cached locally or in a remote system. If the segment is cached locally, an attribute in the segment header structure points to the position of the encapsulated file that contains all the segment data for the particular Adaptive Stream media. Further, an offset attribute identifies the position of the particular segment in the encapsulated file.
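The Segment Header Structure of Figure 3 could be sketched, for illustration only, as the following record; the field names, and the optional length field, are assumptions rather than a fixed layout:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class SegmentHeader:
    """Sketch of the Segment Header Structure of Figure 3; field names are illustrative."""
    segment_url: str                            # unique identifier of the segment
    cached_locally: bool                        # flag: local encapsulated file vs. remote system
    encapsulated_file: Optional[str] = None     # position/path of the encapsulated file (local case)
    offset: Optional[int] = None                # byte offset of this segment inside that file
    length: Optional[int] = None                # assumed field: segment size in bytes
    remote_ip: Optional[str] = None             # remote cache address (remote case)
    remote_port: Optional[int] = None
    remote_details: Optional[Any] = None        # in-memory pointer for additional remote-system details
    interop_protocol: Optional[str] = None      # reserved protocol-identifying attribute, future use
```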
Figure 4 offers a visual depiction of the file encapsulation and its containment of the media segments. The file encapsulation and sequential storage of the segments prove beneficial, as most of the disk I/O modules available in the industry anticipate subsequent reading of collocated memory and thus read the contents from the specified address as well as from the consecutive location. Thus, keeping the related segments collocated in the file encapsulation structure helps leverage the disk I/O capabilities and expedites the segment serving time. Another important consideration behind the design of the file encapsulation is that it needs to be codec-agnostic. Considering that these methods are positioned in an intermediate serving node, the processing requirements related to encoding, decoding and re-encoding are to be avoided, yet the encapsulation is to be kept in place for a singular representation. Thus, the encapsulated file over the adaptive media segments is not playable as-is in any media player, but the intermediate serving system understands the method of serving the segments from this encapsulated file, and the segments remain playable by the client media players receiving them.
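A minimal, non-limiting sketch of such an encapsulated, codec-agnostic store follows; it simply appends raw segment bytes to one per-media file and records the (offset, length) pair that the segment header would hold:

```python
import os

def append_segment(encapsulation_path: str, segment_data: bytes):
    """Append one segment to the media's encapsulated file and return (offset, length).

    The encapsulated file is a plain concatenation of raw segment bytes: it is not
    directly playable, but each stored (offset, length) pair lets the serving node
    hand the original, playable segment back to a requesting client.
    """
    offset = os.path.getsize(encapsulation_path) if os.path.exists(encapsulation_path) else 0
    with open(encapsulation_path, "ab") as f:
        f.write(segment_data)
    return offset, len(segment_data)

def read_segment(encapsulation_path: str, offset: int, length: int) -> bytes:
    """Read one collocated segment back out of the encapsulated file."""
    with open(encapsulation_path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```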
When the segment is stored in a remote system, the segment header structure specifies the IP and port of the remote system where this particular segment is located. Depending on the underlying implementation of the distributed cache environment, the cache server residing in this system communicates with the remote cache server at the given IP and port, and requests the media having the specific identifier. The underlying protocol used can vary based on the particulars of the distribution setup, and this representative design does not assume that all these methods are implemented in the remote system. The design will interwork seamlessly in either case, whether the remote system implements the discussed methods or not. Thus, the local cache requests the cached content to be passed on from the remote system, remaining agnostic of the underlying intricacies of the cache interoperability protocol and data exchange mechanism. In case the IP, port and media identifier do not identify the cached location of the media in the remote system, an in-memory pointer attribute is provisioned in the segment header structure to hold an additional level of details corresponding to the particular remote system.
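For illustration only, a local cache could ask a cooperating remote cache for a segment roughly as follows; the /cache/<identifier> convention is hypothetical, since the invention deliberately leaves the inter-cache protocol open:

```python
from urllib.request import urlopen
from urllib.error import URLError

def fetch_from_remote_cache(ip: str, port: int, segment_url: str, timeout: float = 2.0):
    """Ask a cooperating remote cache server for a segment it holds.

    Assumes, purely for the sketch, that the peer exposes cached segments over plain
    HTTP at /cache/<identifier>; this is not a defined interface of the design.
    """
    peer_url = f"http://{ip}:{port}/cache/{segment_url}"
    try:
        with urlopen(peer_url, timeout=timeout) as resp:
            return resp.read()
    except URLError:
        return None  # caller falls back to the details held in the in-memory pointer
```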
A protocol-identifying attribute for interoperability is currently reserved for future use, so as to support setting up a distributed environment specifically within the scope of the managed Adaptive Stream cache extension, if it appears necessary for certain kinds of future implementations.
Figure 5 shows a visual depiction of the buffer wherein the segment data is not promptly written to disk, as a delayed-writing mechanism is followed for optimized disk I/O performance. A memory buffer instance is maintained for each Adaptive Stream media, and the length of the buffer is decided on the basis of the length of the adaptive stream media and the support available from the underlying platform and file system. The segment data is stored in the buffer until a defined delayed-writing threshold is reached, and then the segment data available in the buffer is written into the file encapsulation structure on the disk.
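A non-limiting sketch of the delayed-writing buffer of Figure 5 follows; the byte threshold is illustrative, as the real limit would be derived from the media length and the underlying platform and file system:

```python
class DelayedWriteBuffer:
    """Per-media in-memory buffer that batches segments before one disk write.

    Sketch of the delayed-writing mechanism of Figure 5; the default threshold
    value here is an assumption made only for the example.
    """
    def __init__(self, encapsulation_path: str, threshold_bytes: int = 8 * 1024 * 1024):
        self.encapsulation_path = encapsulation_path
        self.threshold_bytes = threshold_bytes
        self.pending = []          # buffered segment payloads, in arrival order
        self.pending_size = 0

    def add_segment(self, data: bytes) -> None:
        """Hold the segment in memory; flush once the threshold is reached."""
        self.pending.append(data)
        self.pending_size += len(data)
        if self.pending_size >= self.threshold_bytes:
            self.flush()

    def flush(self) -> None:
        """Write all buffered segments sequentially into the encapsulated file."""
        if not self.pending:
            return
        with open(self.encapsulation_path, "ab") as f:
            for data in self.pending:
                f.write(data)
        self.pending.clear()
        self.pending_size = 0
```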
Figure 6 shows the interoperability design as being supported for distributed caching of Adaptive Stream media segments, as elaborated previously.
Due to the incremental nature of this invention over existing methods, the usage measurement particulars stored corresponding to each quality level of the segment, as elaborated in (203), are not hard-wired here. The measurement attributes for the segments pointed to in the extended meta-data will need to be similar to the attributes measured for all other media in the existing caching implementation. However, the approach for storing the information at this level is designed keeping in mind the considerations of avoiding repetitive data storage and maintaining a sufficient level of granularity during information collection for adaptive stream segments. The information maintained here can be aggregated to any higher level, making the usage measurement visible at the aggregation level. For example, usage measurements for the particular adaptive stream media can be computed by aggregating the granular level of information maintained for each segment's specific quality level. Also, a report can be prepared from these pieces of information, to highlight the comparative popularity of particular segments of the media. In addition, reports on the percentage breakup of segments served from the local cache and remote caches can assist the distributed environment in deciding on content/segment movement between cache servers for more effectiveness.
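The aggregation of granular usage information can be illustrated with the following simplified sketch, where the per-quality-level information log is reduced to plain counters purely for the sake of the example:

```python
from collections import defaultdict
from typing import Dict, Tuple

# usage[(segment_index, quality_level)] = number of times served; an assumed,
# simplified stand-in for the per-quality information log described above.
usage: Dict[Tuple[int, int], int] = defaultdict(int)
served_from: Dict[str, int] = defaultdict(int)   # "local" / "remote" hit counters

def record_serve(segment: int, quality: int, source: str) -> None:
    usage[(segment, quality)] += 1
    served_from[source] += 1

def media_level_usage() -> int:
    """Aggregate the granular per-segment, per-quality counters up to the whole media."""
    return sum(usage.values())

def local_share() -> float:
    """Percentage of segments served from the local cache versus remote caches."""
    total = sum(served_from.values())
    return 100.0 * served_from["local"] / total if total else 0.0
```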
Some of the parameters measured here also contribute to effective controlling of cached content. Decisions on caching a particular Adaptive Stream media or removing it from the cache are driven by the policies used by existing caching methods, and the extension for the Adaptive Stream cache gets aligned with these underlying cache-controlling policies as implemented. The extended part of the Adaptive Stream Cache also exposes design interfaces through which the cache-controlling policies can be applied at the segment level as well.
Figure 7 shows the algorithm followed for storing the adaptive stream segments in the cache. For a non-cached adaptive stream media, the first request passes on information on important attributes, as retrieved from the Manifest files. Upon retrieving these pieces of information, the meta-data structure is initialized. Subsequently, for every request for the next segment, the segment is retrieved from the remote cache or content server, and an evaluation is made whether to store this segment in the local cache or not. The evaluation is done as per the caching policy followed in the existing cache implementation. The segment's particular header structure is formed, and its position is identified in the adaptive stream's meta-data structure.
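For illustration, the Figure 7 flow for one incoming segment request may be sketched as follows; the admission-policy and retrieval functions are placeholders standing in for the existing cache implementation:

```python
def should_cache(segment_url: str) -> bool:
    """Placeholder for the existing cache implementation's admission policy."""
    return True

def fetch_segment(segment_url: str) -> bytes:
    """Placeholder for retrieval from a remote cache or the content server."""
    return b""

def handle_segment_request(meta: dict, segment_index: int, quality: int, segment_url: str) -> bytes:
    """Sketch of the Figure 7 flow for one segment request.

    `meta` stands for the media's extended meta-data structure, initialized from the
    Manifest file on the first request for a non-cached adaptive stream media.
    """
    data = fetch_segment(segment_url)                      # remote cache or content server
    if should_cache(segment_url):                          # evaluation per the existing policy
        header = {"url": segment_url, "local": True}       # segment header is formed ...
        meta.setdefault(segment_index, {})[quality] = header   # ... and positioned in the meta-data
    return data
```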
Figure 8 explains the algorithm followed to conclude on pre-fetching the segments, as also elaborated herein. Generally, the pre-fetching is invoked as a background task, or it can also be invoked within the same processing cycles used for serving the Adaptive Stream media, using HTTP pipelining or similar mechanisms. The invoking mechanism should follow the existing approach implemented in the cache and establish coherence in the extended environment.
To pre-fetch the segments of a serving Adaptive Stream media, the system needs to know the index of the currently served segment and the quality level at which it is being served. In HTTP Adaptive Streaming, the segments are served in sequence; however, the decision on the particular quality level of the segments is driven by the client. If the client senses delays/buffering/drops/jitter in the current viewing condition, it switches to a lower-quality segment to attain a better viewing experience. The extended caching method for Adaptive Streams, being positioned in the intermediate serving system, is also capable of monitoring the bandwidth consumption on the particular channel through which the Adaptive Streaming media is being served. If the monitoring method shows steady consumption on the particular channel, the information can be fed back to the pre-fetching unit, and subsequent segments of the same quality level should be pre-fetched for further consumption. If the monitoring method senses deteriorating consumption, the likelihood of switching to lower-quality segments will be higher. In such cases, the pre-fetching unit should pre-fetch a smaller number of segments, but with quality levels both at the currently served level and at a level inferior to the currently served level. To decide on the number of segments to be pre-fetched at one go, the primary design considerations should be the maximum processing capacity of the intermediate system, the currently operating capacity, the available memory, the bandwidth fluctuations observed on the particular channel and the recommendation of the existing pre-fetching mechanism of the underlying caching system. In scenarios where bandwidth fluctuations are higher, pre-fetching the segments for a shorter duration but with different quality levels will serve better, in terms of achieving a better hit ratio for pre-fetched content and also reducing processing wastage.
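A non-limiting sketch of such a pre-fetch planning step follows; the "steady"/"deteriorating" trend values and the batch sizes are assumptions made only for the example:

```python
def plan_prefetch(current_segment: int, current_quality: int,
                  bandwidth_trend: str, max_batch: int = 4):
    """Decide which (segment, quality) pairs to pre-fetch next; a sketch of the Figure 8 logic.

    `bandwidth_trend` is assumed to be "steady" or "deteriorating", as fed back by the
    channel monitoring described below; a real limit would also weigh processing capacity,
    available memory and the underlying caching system's own pre-fetching recommendation.
    """
    plan = []
    if bandwidth_trend == "steady":
        # Steady consumption: fetch the next few segments at the same quality level.
        for i in range(1, max_batch + 1):
            plan.append((current_segment + i, current_quality))
    else:
        # Deteriorating consumption: fewer segments, at the current and one lower quality,
        # since the client is likely to switch down.
        for i in range(1, max_batch // 2 + 1):
            plan.append((current_segment + i, current_quality))
            plan.append((current_segment + i, max(current_quality - 1, 0)))
    return plan
```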
There are several methods to monitor bandwidth consumption on the particular channel, and the suitability of the method depends on the role being played by the intermediate serving system in the overall network architecture. Thus the invention does not attempt to fix the method of sensing the bandwidth consumption pattern in this context. However, to accomplish a representative design and maintain the generic nature of the implementation, a monitoring method can be designed that oversees the congestion window, RTT and MTU of the underlying TCP module of the particular channel and continues sensing the consumption state throughout the channel's existence. A feedback mechanism established from this module to the pre-fetching unit completes the implementation of the representative design.
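As one representative, non-limiting possibility, a simple monitor could estimate throughput from the bytes served per interval and feed the observed trend back to the pre-fetching unit; inspecting the TCP congestion window, RTT and MTU as mentioned above would be an alternative sensing method:

```python
import time

class ChannelMonitor:
    """Very simple consumption monitor for one serving channel.

    The invention leaves the sensing method open; this sketch merely estimates
    throughput from the bytes served per interval and reports whether consumption
    looks steady or deteriorating. The 0.8 ratio and 2-second window are assumptions.
    """
    def __init__(self, feedback, window_s: float = 2.0):
        self.feedback = feedback            # callable taking "steady" or "deteriorating"
        self.window_s = window_s
        self.window_start = time.monotonic()
        self.bytes_in_window = 0
        self.last_rate = None

    def on_bytes_served(self, n: int) -> None:
        """Account for bytes sent on the channel and emit feedback once per window."""
        self.bytes_in_window += n
        now = time.monotonic()
        if now - self.window_start >= self.window_s:
            rate = self.bytes_in_window / (now - self.window_start)
            if self.last_rate is not None:
                trend = "steady" if rate >= 0.8 * self.last_rate else "deteriorating"
                self.feedback(trend)        # feeds the pre-fetching unit
            self.last_rate = rate
            self.bytes_in_window = 0
            self.window_start = now
```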
The present invention is not to be limited in scope by the specific embodiments and examples which are intended as illustrations of a number of aspects of the invention and all embodiments which are functionally equivalent are within the scope of this invention. Those skilled in the art will know, or will be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments of the invention described herein. These and all other equivalents are intended to be encompassed by the following claims.
WE CLAIM:
1. A method to effectively enhance quality of service of adaptive stream data, the method comprising:
Consolidating individual segments or files in a single encapsulated form;
Maintaining a buffer and collocating consecutive media files or segments;
Pre-fetching subsequent media files or segments from a remote location; and
Maintaining a caching metadata on an intermediate node to access a media file or a segment.
2. A method as claimed in claim 1 wherein the adaptive stream data is composed of many individual segments or files.
3. A method as claimed in claim 1, wherein the step of consolidating the individual segments/files in a single encapsulated form involves establishing a consolidating procedure for segments/files available within the same system or distributed across multiple systems, into a single encapsulation.
4. A method as claimed in claim 1, wherein the step of consolidating the individual segments/files in a single encapsulated form comprises collocating the segments/files available within the same system to facilitate accessibility and sequential access.
5. A method as claimed in claim 1, wherein the step of maintaining a buffer and collocating consecutive media files/segments involves optimizing disk read/write time and reducing hard disk seek time by fixing a threshold limit to a certain value, the buffered data being written out once the buffer size reaches the threshold limit.
6. A method as claimed in claim 4, wherein the threshold limit is fixed in accordance with the limit of tolerance for heavy load conditions that the caching application can withstand.
7. A method as claimed in claim 1, wherein pre-fetching subsequent media files or segments from a remote location is done in real time, the method comprising:
identifying media segments to pre-fetch; determining the number of segments to pre-fetch.
8. A method as claimed in claim 6, wherein the step of identification of media segments to pre-fetch involves identification of the relationship between different segments in the adaptive stream media and pre-fetching the next segment as per the relationship defined.
9. A method as claimed in claim 6, wherein the step of identification of media segments to pre-fetch comprises a network bandwidth measurement, which entails a mechanism that tracks the amount of bandwidth consumed by an end user at run time.
10. A method as claimed in claim 6, further comprising determining whether the segments received are of the same or of a different quality.
11. A method as claimed in claim 6, wherein a request for adaptive stream data is pre-fetched from the remote location if it has not been previously cached anywhere.
12. A method as claimed in claim 1, wherein the step of maintaining a caching metadata on an intermediate node to access a media file or a segment comprises designing a caching metadata to provide interfaces and methods to access segments/files from a remote location.
13. A method to effectively enhance quality of service of adaptive stream data, the method comprising collecting, consolidating and maintaining usage and access information of the adaptive stream data.
14. An intermediate serving system to effectively enhance quality of service of adaptive stream data, the system in a network deployment configured for:
Consolidating individual segments or files in a single encapsulated form;
Maintaining a buffer and collocating consecutive media files or segments;
Pre-fetching subsequent media files or segments from a remote location; and
Maintaining a caching metadata on an intermediate node to access a media file or a segment.
15. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to consolidate the individual segments/files available within the same system or distributed across multiple systems in a single encapsulated form, which involves establishing a consolidating procedure for the segments/files into a single encapsulation.
16. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to consolidate the individual segments/files in a single encapsulated form by collocating the segments/files available within the same system to facilitate accessibility and sequential access.
17. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to maintain a buffer and collocate consecutive media files/segments, which involves optimizing disk read/write time and reducing hard disk seek time by fixing a threshold limit to a certain value, the buffered data being written out once the buffer size reaches the threshold limit.
18. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to fix a threshold limit in accordance with the limit of tolerance for heavy load conditions that the caching application can withstand.
19. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to pre-fetch subsequent media files or segments from a remote location in real time, the system deployed for:
identifying media segments to pre-fetch;
determining the number of segments to pre-fetch.
20. An intermediate serving system as claimed in claim 18, wherein the system in the network is configured to identify media segments to pre-fetch, which involves identification of the relationship between different segments in the adaptive stream media and pre-fetching the next segment as per the relationship defined.
21. An intermediate serving system as claimed in claim 18, wherein the system in the network is configured to identify media segments to pre-fetch by means of a network bandwidth measurement, which entails a mechanism that tracks the amount of bandwidth consumed by an end user at run time.
22. An intermediate serving system as claimed in claim 18, wherein the system in the network is configured to determine whether the segments received are of the same or of a different quality.
23. An intermediate serving system as claimed in claim 18, wherein the system in the network is configured to pre-fetch a request for adaptive stream data from the remote location if it has not been previously cached anywhere.
24. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to maintain a caching metadata on an intermediate node to access a media file or a segment, which comprises designing the caching metadata to provide interfaces and methods to access segments/files from a remote location.
25. An intermediate serving system as claimed in claim 13, wherein the system in the network is configured to effectively enhance quality of service of adaptive stream data, the system collecting, consolidating and maintaining usage and access information of the adaptive stream data.
Dated this day of June 2011
Of Anand and Anand Advocates Agents of the Applicant