Abstract: A method of splicing a first data stream and a second data stream is disclosed. The method comprises detecting a splice in-point and a splice out-point within the first data stream, wherein the splice in-point is a frame in the first data stream and the splice out-point is another frame in the first data stream. The method further identifies a splice-in opportunity point and a splice-out opportunity point from a plurality of splice opportunity points (SOPs) in the first data stream, wherein the splice-in opportunity point corresponds to the splice in-point and the splice-out opportunity point corresponds to the splice out-point, and wherein the splice-in opportunity point and the splice-out opportunity point are frames of the first data stream. The method replaces at least one frame of the first data stream with at least one frame of the second data stream, wherein the replacement of the frames of the first data stream starts at the splice-in opportunity point and ends at the splice-out opportunity point; and wherein the first data stream includes a plurality of frames and the second data stream includes a plurality of frames. The method performs the splicing operation without de-packetization of the first data stream. In addition, the replacement of the frames does not require re-ordering of the frames of the first data stream.
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
PROVISIONAL SPECIFICATION (Refer section 10 and Rule 13)
1. TITLE OF THE INVENTION:
"METHOD AND APPARATUS FOR SPLICING A COMPRESSED DATA
STREAM"
2. APPLICANT:
a) Name
b) Nationality
c) Address
VUBITES INDIA PRIVATE LIMITED
Indian
1st Floor, Mahalaxmi Engg Estate (Annexe),
1st L.J. Cross Road, Mahim (w)
Mumbai-400 016
3. PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention.
METHOD AND APPARATUS FOR SPLICING A COMPRESSED
DATA STREAM
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] Embodiments of the present invention generally relate to digital processing techniques, and more particularly to a method and apparatus for splicing a compressed data stream in a digital transmission system.
Description of the Related Art
[0002] Within a communication system, network bandwidth plays an important role during the transmission of multimedia signals. Various multimedia services, such as broadcasting services, video on demand services, video conferencing and/or the like generate the multimedia signals. Generally, the multimedia signal is of extremely large bandwidth and occupies a substantial part of the network bandwidth. Accordingly, various compression standards, such as Moving Picture Expert Group (e.g., MPEG-1, MPEG-2), H.26X and/or the like are developed to reduce the network bandwidth requirement of the multimedia signal. Such standards define compression and/or coding techniques for generating a compressed multimedia signal. The compressed multimedia signal generally includes coded frames, such as video frames, audio frames, data frames and/or the like.
[0003] Various digital systems have been developed for processing the compressed multimedia signal. The compression of the multimedia signal has inadvertently increased the complexity of such digital systems. For example, in a processing operation (e.g., a splicing operation) the system may need to partly and/or wholly decode the coded frames of the compressed multimedia signal and thereby perform the splicing operation. Additionally, on completion of the processing operation, the digital system re-encodes the one or more decoded frames of the compressed multimedia signal using the compression standards. Such decoding and re-encoding affects the quality of the compressed multimedia signal and
requires high end processing units having large computational power and thus decreases the system efficiency.
[0004] Furthermore, during processing, various streams such as video streams, audio streams and/or the like of the multimedia signal may lose timing synchronization among each other. In a conventional method, the digital system uses a hardware clock and makes its hardware clock a slave clock to one or more stream clocks to achieve timing synchronization among the one or more streams. However, the hardware clock requirement may add complexity to the digital system.
[0005] Therefore, there is a need in the art for a method and apparatus for efficiently splicing the compressed data stream and achieving the timing synchronization among one or more streams.
SUMMARY OF THE INVENTION
[0006] A method and apparatus for splicing a first compressed data stream with a second compressed data stream is disclosed. The method comprises detecting a splice in-point and a splice out-point within the first compressed data stream. The method replaces one or more compressed frames of the first compressed data stream with one or more compressed frames of the second compressed data stream, and the replacement starts at a first splice opportunity point (SOP). The method further achieves lip-synchronization among the one or more streams of the first compressed data stream by altering Presentation Time Stamp (PTS) values, Decoding Time Stamp (DTS) values, and Program Clock Reference (PCR) values of the second compressed data stream.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] So that the manner in which the above recited features of the present
invention can be understood in detail, a more particular description of the
invention, briefly summarized above, may be had by reference to embodiments,
some of which are illustrated in the appended drawings. It is to be noted,
however, that the appended drawings illustrate only typical embodiments of this
invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0008] Figure 1 illustrates a block diagram of an example data stream processing system in accordance with one or more embodiments of the present invention;
[0009] Figure 2 illustrates a block diagram of an example embodiment of a splicer in accordance with one or more embodiments of the present invention; and
[0010] Figures 3A-3E illustrate an example splicing operation on a data stream in accordance with one or more embodiments of the present invention.
DETAILED DESCRIPTION
[0011] Figure 1 illustrates a block diagram of an example data stream
processing system 100 in accordance with one or more embodiments of the present invention. The system 100 provides broadcasting of multimedia services, such as digital television broadcasting, video on demand broadcasting and/or the like, to a user. In one embodiment, the system 100 may provide an integrated digital service, such as internet and/or intranet access, to the user. The data stream processing system 100 includes a plurality of source stations communicably coupled with a plurality of processing stations through a network 124.
[0012] The plurality of source stations, such as a source station 102₁, a source station 102₂, a source station 102₃, …, and a source station 102ₙ, are hereinafter referred to as the source stations 102. Generally, the source station 102 is a broadcasting station, such as a television broadcasting station. Additionally, the source station 102 is a digital broadcasting station and thereby transmits a digital data stream to the plurality of processing stations 112. The plurality of processing stations, such as a processing station 112₁, a processing station 112₂, a processing station 112₃, …, and a processing station 112ₙ, are hereinafter referred to as the processing stations 112.
[0013] Generally, the source station 102 generates a compressed data stream using a program stream and thereby transmits the compressed data stream to the processing stations 112. The program stream generally is a multimedia stream and includes a video stream having video frames, one or more audio streams having audio frames and an associated data stream having data frames. In one embodiment, the source station 102 may receive the program stream from a production studio. Additionally, the production studio may be a mobile production studio adapted for covering entertainment events, such as news, live matches, conferences and/or the like.
[0014] As an example and not as limitation, the source station 102 processes the program stream using compression technologies, such as JPEG, MPEG (e.g., MPEG-1, MPEG-2, and MPEG-4), H.26X and/or the like, and thereby generates the compressed data stream. The compressed data stream thus generated from one program stream may also be referred to as a single program transport stream (SPTS). Additionally, the source station 102 multiplexes one or more program streams and generates the compressed data stream. The compressed data stream thus generated from one or more program streams may also be referred to as a multi program transport stream (MPTS).
[0015] In one embodiment, the source station 102 generates an MPEG-2 compliant compressed data stream. Accordingly, the source station 102 compresses the program stream frames, such as the video frames, the audio frames and/or the like, in accordance with the MPEG-2 compression standard. For example, the video frame of the program stream may be compressed as an intra coded frame (I-frame), a predictive frame (P-frame) or as a bidirectional frame (B-frame).
[0016] Generally, the I-frame, being an independently compressed video frame, eliminates spatial redundancies within the video frame and thereby does not need other video frames while decoding. However, the P-frame and the B-frame are dependent compressed video frames and may need an I-frame and/or a P-frame while decoding. Additionally, the P-frame eliminates temporal redundancies with respect to a preceding compressed video frame. The preceding compressed frame may be the I-frame or the P-frame. The B-frame eliminates temporal redundancies with respect to the preceding compressed video frame and a future compressed video frame. The preceding compressed frame and/or the future compressed frame may be the I-frame and/or the P-frame.
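The decode dependencies described above can be sketched as follows; this is an illustrative example only (the GOP pattern and the display-order indexing are hypothetical, not taken from this specification):

```python
def decode_dependencies(display_order):
    """For each frame type in display order, list the reference frames
    (by display index) a decoder would need: I-frames need none,
    P-frames need the preceding I/P frame, and B-frames need both the
    preceding and the future I/P frame."""
    anchors = [i for i, t in enumerate(display_order) if t in "IP"]
    deps = []
    for i, t in enumerate(display_order):
        if t == "I":
            deps.append([])
        elif t == "P":
            deps.append([a for a in anchors if a < i][-1:])
        else:  # B-frame
            preceding = [a for a in anchors if a < i][-1:]
            future = [a for a in anchors if a > i][:1]
            deps.append(preceding + future)
    return deps

# Display order of a short hypothetical GOP: I B B P (frames 0..3)
print(decode_dependencies("IBBP"))  # → [[], [0, 3], [0, 3], [0]]
```

The sketch shows why the I-frame is the natural splice boundary: it is the only frame whose dependency list is empty.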
[0017] Subsequently, the source station 102 generates a video sequence using the compressed video frames. The video sequence begins with a sequence header followed by one or more sets of compressed video frames. Additionally, the MPEG-2 standards define a group of pictures (GOP) having one or more compressed video frames of the video stream. The GOP begins with the I-frame, followed by a certain number of the B-frames and the P-frames. Accordingly, the source station 102 generates an elementary video stream (ES) using the video sequence and the GOP. Similarly, the source station 102 processes the one or more audio streams and the associated data stream (e.g., program clock reference data stream) of the program stream and thus generates corresponding elementary streams. Thus, the source station 102 generates the one or more elementary streams from the program stream. Furthermore, the source station 102 generates a packetized elementary stream (PES) for each elementary stream.
[0018] The PES includes PES packets having a PES packet header and a data payload. The PES packet header includes a stream identification (SID) for identifying the one or more elementary streams of the program stream. Additionally, each PES packet header includes timestamps known as the presentation timestamp (PTS) and the decoding timestamp (DTS). Further, the source station 102 multiplexes several PESs having a common time-base called the PCR and further packetizes them into transport stream (TS) packets, and thus
generates the SPTS. Additionally and/or alternatively, the source station 102 multiplexes one or more SPTSs and thus generates the MPTS and subsequently transmits the TS to the processing station 112 through the network 124. The compressed data stream, such as the SPTS, generally includes a plurality of data packets. Additionally, one or more compressed frames of the compressed data stream may spread across one or more data packets. Alternatively, a data packet may include one or more compressed frames.
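As a sketch of the timestamps mentioned above, the 33-bit PTS (and DTS) value is carried in a 5-byte field of the PES header, with the timestamp bits interleaved with marker bits per the MPEG-2 systems layer; the following minimal encoder/decoder illustrates that layout (the 4-bit prefix passed in is an assumption here, since its value depends on which timestamp flags are set):

```python
def encode_pts(pts, prefix=0b0010):
    """Pack a 33-bit timestamp into the 5-byte PES header field:
    a 4-bit prefix, then the timestamp bits interleaved with
    marker '1' bits."""
    assert 0 <= pts < (1 << 33)
    return bytes([
        (prefix << 4) | (((pts >> 30) & 0x07) << 1) | 1,  # bits 32..30
        (pts >> 22) & 0xFF,                               # bits 29..22
        (((pts >> 15) & 0x7F) << 1) | 1,                  # bits 21..15
        (pts >> 7) & 0xFF,                                # bits 14..7
        ((pts & 0x7F) << 1) | 1,                          # bits 6..0
    ])

def decode_pts(b):
    """Recover the 33-bit timestamp from the 5-byte field."""
    return (((b[0] >> 1) & 0x07) << 30) | (b[1] << 22) | \
           (((b[2] >> 1) & 0x7F) << 15) | (b[3] << 7) | (b[4] >> 1)

print(decode_pts(encode_pts(900000)))  # → 900000 (round trip)
```

Altering timestamps during splicing, as the timing module described later does, amounts to decoding this field, adjusting the value, and re-encoding it in place.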
[0019] The network 124 comprises a communication system that connects one or more communicable devices, such as the source station 102, the processing station 112 and/or the like, by a wire, a cable, a fiber optic and/or a wireless link (e.g., a satellite link) facilitated by various types of well-known network elements, such as hubs, switches, routers, and the like. The network 124 may employ various well-known protocols to communicate information amongst the network resources. For example, the network 124 may be a part of the Internet or an intranet using various transmission systems such as broadcast transmission systems, which employ various modulation techniques, various interfaces (e.g., Asynchronous Serial Interface (ASI)), transmission means (e.g., RF cables, optical fibers, satellite links) and/or the like. Alternatively, the network 124 may be a part of an Internet protocol network on Ethernet, Wi-Fi or fiber or dedicated lines, ATM networks etc.
[0020] According to one or more embodiments, the plurality of processing stations 112 may be located in a plurality of different geographic locations. Each of the processing stations 112 is configured to receive the compressed data streams generated by the source stations 102. The processing station 112 processes the received compressed data stream from the source station 102. In one embodiment, the processing station 112 provides synchronization among the video stream and the audio stream of the received compressed data stream.
[0021] The system 100 further includes a plurality of receivers and a plurality of splicers. The plurality of receivers 114, such as a receiver 114₁, a receiver 114₂, a receiver 114₃, …, and a receiver 114ₙ, are hereinafter referred to as the receiver 114, and the receiver 114 is associated with the processing station 112. The plurality of splicers, such as a splicer 116₁, a splicer 116₂, a splicer 116₃, …, and a splicer 116ₙ, are hereinafter referred to as the splicer 116, and the splicer 116 is associated with the processing station 112.
[0022] The receiver 114 receives the compressed data stream and communicates the compressed data stream to the splicer 116. According to one or more embodiments of the invention, the processing station 112 may use a digital Integrated Receiver Decoder (IRD) device for communicating the received compressed data stream to the splicer 116. Alternatively, the processing station 112 may use an analog IRD device as the receiver 114 and thereby encodes the received stream as the compressed data stream and thus communicates the compressed data stream to the splicer 116. In some embodiments, the receiver 114 achieves demodulation and/or descrambling of the compressed data stream.
[0023] The splicer 116 performs the splicing operation on the received compressed data stream using a second compressed data stream. For example, the splicer 116 may replace one or more compressed frames, such as the video frame, the audio frame and/or the like, of the received SPTS with the one or more compressed frames of the other SPTS. Generally, the source station 102 provides the second compressed data stream. Optionally, the system 100 includes a stream generator 120 for generating the second compressed data stream. The stream generator 120 may communicate with the source station 102 for determining splicing points and thereby communicating the determined splicing points to the processing station 112 via the network 124. Additionally, the stream generator 120 may communicate with the source station 102 for an authorization of the contents of the second compressed data stream.
[0024] The system 100 further includes a plurality of transmitters, such as a transmitter 118₁, a transmitter 118₂, a transmitter 118₃, …, and a transmitter 118ₙ, hereinafter referred to as the transmitter 118. The transmitter 118 is associated with the processing station 112. The splicer 116 forwards the spliced compressed data stream to the transmitter 118. Subsequently, the transmitter 118 transmits the spliced compressed data stream to a network 126.
[0025] The network 126 comprises a communication system that connects computers by wire, cable, fiber optic and/or wireless link facilitated by various types of well-known network elements, such as hubs, switches, routers, and the like. The network 126 may employ various well-known protocols to communicate information amongst the network resources. For example, the network 126 may be a part of the Internet or an intranet using various transmission systems such as broadcast transmission systems, which employ various modulation techniques, various interfaces (e.g., Asynchronous Serial Interface (ASI)), transmission means (e.g., RF cables, optical fibers, satellite links) and/or the like. Alternatively, the network 126 may be a part of an Internet protocol network on Ethernet, Wi-Fi or fiber or dedicated lines, ATM networks etc.
[0026] Generally, the source station 102 inserts one or more advertisements within the program stream and thereby, generates the compressed data stream. Further, the compressed data stream is communicated to the processing station 112. As an example and not as a limitation, the processing station 112 is a cable head end in a digital transmission system. Accordingly, the processing station 112 performs various operations on the compressed data stream. For example, the processing station 112 performs splicing of the compressed data stream with the second compressed data stream.
[0027] According to one or more embodiments of the invention, the second compressed data stream includes one or more advertisement data streams. The advertisement data streams include one or more advertisements that may replace the one or more advertisements of the compressed data stream. In one embodiment, the user can access an advertisement planner tool for scheduling the one or more advertisements in the advertisement data streams. For example, the advertisement planner provides a list of one or more inventory spots within
the compressed data stream. Inventory spots refer to slots in a broadcast stream available for inserting an advertisement data stream.
[0028] In one embodiment, the advertisement planner tool is the advertisement planner disclosed in the provisional patent application titled "Method and apparatus for planning a schedule of multimedia advertisements in a broadcasting channel" being co-filed with this application. Accordingly, the user selects the one or more inventory spots and assigns the advertisement data stream to the selected inventory spots. Subsequently, the splicer 116 performs splicing of the compressed data stream with the second compressed data stream at the one or more inventory spots.
[0029] According to one or more embodiments of the invention, the user accesses an advertisement generator tool for generating the advertisement data stream. The advertisement generator includes various computing resources, such as hardware resources, software resources, multimedia data (e.g., audio data, video data and/or the like) and/or the like, and thereby enables the user to generate the advertisement data stream. In one embodiment, the advertisement generator tool is the advertisement generator disclosed in the provisional patent application titled "Method and apparatus for generating a multimedia advertisement" being co-filed with this application.
[0030] Additionally, the processing station 112 is configured to communicate with a plurality of devices, such as a device 132₁, a device 132₂, a device 132₃, …, and a device 132ₙ, hereinafter referred to as the device 132, via the network 126.
[0031] The device 132 may be a desktop computer, a laptop, a mobile phone, a Personal Digital Assistant (PDA), a digital television, a set-top box and/or the like. The device 132 is adapted to process the modified compressed data stream of the processing station 112. In one embodiment, the device 132 may communicate an attribute of the user, such as an identification of the user and/or the like, to the processing station 112. Subsequently, the processing station 112 may process the received compressed data stream in accordance with the attribute of the user and thereby provide conditional access to the user.
[0032] Figure 2 illustrates a block diagram of an example embodiment of the splicer 116 in accordance with one or more embodiments of the present invention. The splicer 116 performs a splicing operation on the received compressed data stream and replaces the compressed frames of the received compressed data stream with the one or more frames of the second compressed data stream. The splicer 116 includes a de-multiplexer (Demux) 202, a plurality of splicer controllers, such as a splicer controller 204₁, a splicer controller 204₂, a splicer controller 204₃, …, and a splicer controller 204ₙ, hereinafter referred to as the splicer controller 204, and a multiplexer 206.
[0033] Generally, the de-multiplexer 202 receives the MPTS from the receiver 114. The MPTS includes one or more SPTSs and each SPTS has a unique program identification number (PID). Additionally, each splicer controller 204 processes the one or more elementary streams associated with the unique PID. Consequently, the de-multiplexer 202 de-multiplexes the received MPTS into one or more SPTSs. The splicer controller 204 performs the splicing operation on the SPTS and forwards the spliced SPTS to the multiplexer 206. The multiplexer 206 multiplexes the outputs of each splicer controller 204 and thereby generates a processed MPTS. Accordingly, the multiplexer 206 forwards the processed MPTS to the transmitter 118.
[0034] According to one or more embodiments of the invention, the splicer controller 204 includes a monitoring module 208, a splicing module 210, a flow control module 212, a synchronization module 214 and a timing module 216. The monitoring module 208 monitors the one or more received data packets of the SPTS and thereby detects the splice in-point and the splice out-point within the SPTS. In one embodiment, the monitoring module 208 may detect a signal, such as a CUE signal, within the data packets and thereby detect the splice in-point and the splice out-point within the compressed data stream.
[0035] On detection of the splice in-point and the splice out-point, the monitoring module 208 detects one or more splice opportunity points (SOPs) within the received SPTS. The SOP refers to the compressed frame of the compressed data stream (e.g., the SPTS) from where the splicer 116 may start the splicing operation on the compressed data stream. For example, the SOP for the compressed video stream of the SPTS refers to the start of the intra coded frame (I-frame). Additionally, the SOP for the one or more compressed audio streams of the SPTS refers to the start of a compressed audio frame.
[0036] In one embodiment, the monitoring module 208 detects the one or more SOPs by analyzing the status of a payload unit start indicator (PUSI) bit present within a header of the data packet of the compressed data stream. The monitoring module 208 communicates the splice in-point, the splice out-point and the one or more SOPs to the splicing module 210, and the splicing module 210 performs the splicing operation on the compressed data stream.
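A minimal sketch of this PUSI-based scan over 188-byte transport stream packets might look as follows; the packet layout (sync byte 0x47, PUSI bit, 13-bit PID) follows the MPEG-2 systems layer, while the function and variable names are illustrative assumptions:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def find_candidate_sops(ts_bytes, target_pid):
    """Return byte offsets of TS packets on `target_pid` whose
    payload_unit_start_indicator (PUSI) bit is set, i.e. packets that
    begin a new payload unit and are therefore candidate SOPs."""
    offsets = []
    for off in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue  # out of sync; a real splicer would resynchronize
        pusi = bool(pkt[1] & 0x40)                   # PUSI flag
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]        # 13-bit PID
        if pusi and pid == target_pid:
            offsets.append(off)
    return offsets

# Two hypothetical packets on PID 0x100; only the first starts a payload unit
pkt_with_pusi = bytes([0x47, 0x41, 0x00, 0x10]) + bytes(184)
pkt_without = bytes([0x47, 0x01, 0x00, 0x10]) + bytes(184)
print(find_candidate_sops(pkt_with_pusi + pkt_without, 0x0100))  # → [0]
```

Note that this scan never de-packetizes the stream; it reads only the 4-byte transport headers, consistent with the splicing-without-de-packetization goal stated in the abstract.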
[0037] The splicing module 210 further includes a plurality of stream splicing modules (SSMs), such as an SSM 211₁, an SSM 211₂, an SSM 211₃, …, and an SSM 211ₙ, hereinafter referred to as the SSM 211. The splicing module 210 may use an SSM 211 for each of the one or more streams of the compressed data stream. For example, for the video stream, the splicing module 210 defines a video SSM and, for the audio stream, the splicing module 210 defines an audio SSM. Accordingly, the video SSM performs a splicing operation on the video stream of the compressed data stream. Additionally, the audio SSM performs a splicing operation on the audio stream of the compressed data stream. Further, the splicing module 210 defines a program clock reference (PCR) SSM and the PCR SSM performs a splicing operation on a PCR stream of the compressed data stream.
[0038] The flow control module 212 controls the flow of the data packets of the one or more streams of the compressed data stream and will be discussed later in the description. The synchronization module 214 provides the synchronization among the audio and the video streams of the compressed data stream. The timing module 216 may alter the presentation time stamp (PTS) and/or the decoding time stamp (DTS) of the one or more compressed frames of the second compressed data stream and/or the received compressed data stream. The synchronization module 214 and the timing module 216 are described in the subsequent paragraphs.
[0039] The one or more embodiments of the splicer controller 204 and the modules, such as the monitoring module 208, the splicing module 210 and/or the like, may be implemented in hardware, software, or a combination of hardware and software.
[0040] Figures 3A-3E illustrate an example splicing operation on an example coded video stream of the compressed data stream in accordance with one or more embodiments of the present invention. Figure 3A illustrates a display sequence of the coded video frames of the compressed data stream A, and each video frame of the compressed data stream A is coded as per the compression technologies defined in the MPEG-2 standard. Additionally, the corresponding numbers of the frames represent the display order of the frames of the compressed data stream A. As illustrated, the frame 0 is coded as the I-frame and the frame 3 is coded as the P-frame. The frame 3 is dependent on the preceding frame 0 and requires frame 0 while decoding at the device 132. The frames 1 and 2 are dependent on the preceding frame 0 and the future frame 3.
[0041] Figure 3A further illustrates the Group of Pictures (GOP), such as GOP 1, GOP 2 and/or the like, for the video stream of the compressed data stream. The GOP generally starts with the I-frame and is followed by the one or more P-frames and the B-frames. For example, the GOP 1 includes frame 0 to frame 11 and the GOP 2 includes frame 12 to frame 23 in the compressed single program stream. As an example and not as limitation, the size of the GOP for the video stream of the compressed data stream is defined as twelve, and those skilled in the art may use a variable size for the GOP for the video stream of the compressed data stream.
[0042] Figure 3B illustrates the transmitted sequential view of the coded video stream of the compressed data stream A. As per the MPEG-2 standards, generally, during transmission of the coded video stream, the I-frame of the GOP is transmitted first and is followed by the sequence as depicted in Figure 3B. The processing station 112 receives the compressed data stream A and the splicer 116 may replace one or more frames of the compressed data stream A with the one or more frames of the second compressed data stream B. As an example and not as limitation, when the GOP size of the compressed data stream A matches the GOP size of the second compressed data stream B, the splicer replaces one or more GOPs of the compressed data stream A with one or more GOPs of the second compressed data stream B. Additionally and/or alternatively, when the GOP sizes of the compressed data stream A and the second compressed data stream B mismatch, the splicer 116 replaces one or more frames of the GOP of the compressed data stream A with one or more GOPs of the second compressed data stream B and with one or more filler frames.
[0043] Figure 3C illustrates the splice in-point, the splice out-point and one or more SOPs within the compressed data stream A. The video SSM of the splicing module 210 identifies the frame 4 as the splice in-point and the frame 19 as the splice out-point. The monitoring module 208 monitors the received compressed data stream A and further identifies the frame 12 as the SOP for the splice in-point and the frame 24 as the SOP for the splice out-point. Accordingly, the video SSM replaces the frames of the compressed data stream A with the one or more frames of the compressed data stream B.
[0044] Figure 3D illustrates the second compressed data stream B. Figure 3E illustrates a modified stream generated by the processing station 112. Since the GOP size (GOP size = 10) of the second compressed data stream does not match the GOP size (GOP size = 12) of the compressed data stream A, two filler frames F1 and F2 are additionally inserted within the compressed data stream A along with the GOP of the second compressed data stream B. The filler frames (e.g., F1 and F2) remove the mismatch between the GOP sizes of the compressed data stream A and the second compressed data stream B. The filler frame may be an I-type filler frame and/or a P-type filler frame.
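The arithmetic behind the two filler frames in this example is simply the GOP-size difference; a small sketch (the helper name is hypothetical, and the sketch assumes stream B's GOP is not larger than stream A's):

```python
def filler_frames_needed(gop_size_a, gop_size_b):
    """Number of filler frames needed so the inserted GOP of stream B,
    padded with fillers, occupies exactly one GOP's worth of frames in
    stream A (the Figure 3D/3E example)."""
    assert gop_size_b <= gop_size_a, "sketch assumes B's GOP is not larger"
    return gop_size_a - gop_size_b

print(filler_frames_needed(12, 10))  # → 2, the fillers F1 and F2 of Figure 3E
```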
[0045] In one embodiment, the P-type filler frame may be generated by setting all the motion vectors and DCT coefficients to zero within a P-type compressed frame. Such a compressed filler frame is predicted from the preceding frame without having motion compensation. Additionally and/or alternatively, a header of the P-type filler frame may not include the GOP header information and thereby reduces the number of bits for the P-type filler frame. Accordingly, the audio frame for the corresponding filler frame may be generated. Additionally and/or alternatively, a coded audio frame having an output below 20 Hz on decoding may be used as an audio filler frame.
[0046] Referring again to Figure 3C, after the SOP for the splice out-point (frame 24), the frame 22 and the frame 23 are frames of the GOP 2. Since the splicer 116 does not decode the coded video frames of the compressed data stream A, and the frame 22 and the frame 23 require decoding of the frames of the GOP 2, the splicer 116 further replaces such frames with the I-frame (frame 24). Thus, by replacing such B-frames with the copy of the I-frame, the splicer gets rid of complex processing operations, such as a decoding operation of frames, a frame conversion operation, re-ordering the frame numbers and/or the like. Such replacement of B-frames by the splicer 116 reduces memory and processing time and thereby increases the efficiency of the processing station 112.
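This replacement of the orphan B-frames with copies of the I-frame can be sketched in transmission order as follows (the frame tuples and function name are illustrative, not from the specification):

```python
def patch_orphan_b_frames(tx_order, sop_index):
    """tx_order: frames in transmission order as (type, display_number)
    tuples; sop_index: position of the splice-out SOP I-frame.
    B-frames transmitted after that I-frame but displayed before it
    belong to the replaced GOP and cannot be decoded; each is
    overwritten with a copy of the I-frame, with no re-ordering."""
    i_frame = tx_order[sop_index]
    out = list(tx_order)
    for k in range(sop_index + 1, len(out)):
        typ, num = out[k]
        if typ == "B" and num < i_frame[1]:
            out[k] = i_frame  # copy of the I-frame replaces the orphan B
    return out

# Around the splice out-point of Figure 3C: I-frame 24 followed by B22, B23
stream = [("I", 24), ("B", 22), ("B", 23), ("B", 25)]
print(patch_orphan_b_frames(stream, 0))
```

Because the copies keep the I-frame's position in transmission order, no decode, frame conversion, or renumbering pass is needed, as the paragraph above notes.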
[0047] According to one or more embodiments of the invention, the monitoring module 208 monitors a header and the status of the payload unit start indicator (PUSI) bit within the transport header of the one or more data packets.
Accordingly, the monitoring module 208 detects the splicing points within the one or more data packets of the compressed data stream.
[0048] As an example and not as a limitation, the splicer 116 may replace one or more data packets of the compressed data stream with one or more data packets of the second compressed data stream. Additionally, the splicer may rebuild one or more data packets of the compressed data stream.
[0049] In one embodiment, the splicer 116 makes a determination as to whether the data packet includes the splicing points. If it is determined that the data packet includes the splicing points (e.g., splice in-point, splice out-point) and one or more coded frames of the compressed data stream, the splicer 116 processes the data packet. For example, if the data packet includes a splicing point, such as the splice in-point or the splice out-point, and one or more coded frames of the compressed data stream, the splicer 116 breaks the data packet into a first data packet and a second data packet. The first data packet includes data preceding the splicing point and the second data packet includes data succeeding the splicing point.
[0050] Additionally, the splicer 116 replaces the first data packet or the second data packet with a data packet of the second compressed data stream in accordance with the determined splicing point. For example, if the determined splicing point is a splice in-point, the splicer 116 replaces the second data packet with the data packet of the second compressed data stream. The rebuilding and/or processing of one or more data packets prevents re-packetization of the compressed data stream and thereby increases the efficiency of the system.
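The break-and-replace step described in the two paragraphs above can be sketched on raw payload bytes (the byte strings, offsets, and function names are hypothetical):

```python
def split_at_splice_point(packet_payload, splice_offset):
    """Break one packet payload into two at the splice point: the first
    part holds data preceding the point, the second the data after it."""
    return packet_payload[:splice_offset], packet_payload[splice_offset:]

def splice_packet(packet_payload, splice_offset, replacement, is_in_point):
    """At a splice in-point, keep the preceding data and continue with
    the replacement stream's data; at an out-point, do the reverse."""
    first, second = split_at_splice_point(packet_payload, splice_offset)
    return first + replacement if is_in_point else replacement + second

# Hypothetical payload where the splice point falls mid-packet at offset 4
print(splice_packet(b"AAAABBBB", 4, b"XXXX", True))   # → b'AAAAXXXX'
print(splice_packet(b"AAAABBBB", 4, b"XXXX", False))  # → b'XXXXBBBB'
```

Only the one packet straddling the splice point is rebuilt; all other packets pass through untouched, which is what avoids re-packetizing the stream.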
[0051] In one embodiment, the time stamp values of the modified stream generated by the splicer 116 may be corrected to avoid a discontinuity within the modified compressed data stream.
[0052] Data Flow Control: As already discussed, the program stream includes the video stream, one or more audio streams and/or the corresponding data stream. Once compressed, the program stream is transmitted across the network 124 as a compressed data stream in data packets. Generally, it is desired to achieve flow control among the streams of the compressed data stream and thereby, the various compression standards define a permissible lag time among the streams of the compressed data stream. For example, in a compression standard such as MPEG-2, the audio stream may lag up to 100 msec with respect to the video stream of the compressed data stream. As an example and not as a limitation, the flow control among the video stream and the corresponding one or more audio streams is achieved using the presentation time stamp (PTS) values of the compressed frames of the compressed data stream.
[0053] Accordingly, the flow control module 212 controls the processing operation on the compressed data stream. For example, the flow control module 212 controls the flow of data packets of the compressed data stream and thereby maintains the permissible lag time among the streams of the compressed data stream. Generally, during the splicing operation on the compressed data stream, the one or more stream splicing modules 211 (e.g., the video SSM, the audio SSM, the PCR SSM and/or the like) of the splicer 116 may not communicate among themselves.
[0054] The flow control module 212 defines a leap period for the SSM 211 and the leap period may refer to a PTS duration covered by the SSM 211. In one embodiment, the flow control module 212 defines a variable 'Avg_leap_size' and the variable 'Avg_leap_size' determines the leap period for the SSM 211. The flow control module 212 thus communicates the variable 'Avg_leap_size' to the splicing module 210. Subsequently, the splicing module 210 communicates the variable 'Avg_leap_size' to the one or more SSMs 211, such as the video SSM, the audio SSM and/or the like. The 'Avg_leap_size' defines the time duration for each SSM 211 for processing the data packets.
[0055] Generally, during an initiation of a flow control process, once the leap period is communicated by the variable 'Avg_leap_size' to the SSM 211, for a given leap period, the SSM 211 of the splicer 116 may cover a range of the stream duration calculated from the PTS at two points in the stream. The most appropriate value of 'Avg_leap_size' should lie between the minimum and maximum of all AUB intervals of the different streams. A large value of 'Avg_leap_size' may introduce a large lead time, and a small value may cause the splicer to run out of data most of the time. Within the range of 'Avg_leap_size', one or more frames of the video stream are covered by the video SSM of the splicer 116. Additionally and/or alternatively, one or more frames of the audio stream are covered by the audio SSM of the splicer 116. At the end of the initial leap period, a leap credit is given to the corresponding SSM of the splicer, where leap_credit (at end) = leap_credit (at start) % AUB size. The end of a leap is indicated by the splicer returning with a leap flag.
[0056] When the stream has a smaller AUB than the leap size, in the leap period, the SSM of the splicer 116 concedes a leap credit which is the accumulated difference between 'Avg_leap_size' and the AUB interval. If the accumulated credit exceeds the AUB interval itself, the SSM of the splicer 116 will continue one more AUB in the same leap. For example, if the stream has an AUB length of 24 msec and the average leap size is 40 msec, then in the leap period the SSM of the splicer 116 concedes 16 msec of credit, and after 2 leap periods the SSM of the splicer 116 has enough credit to leap one more AUB. Thus the SSM of the splicer 116 processes a second AUB in the same leap.
Pseudo Code
[0057] Set leap size of all splicers to Avg_leap_size;
[0058] Run_splicer() till leap_flag is not set for all splicers;
[0059] If (leap_flag)
[0060]     leap_crd += avg_leap;
[0061]     while (leap_crd >= AAUB) {
[0062]         Process data till next AUB;
[0063]         leap_crd -= AAUB;
[0064]     } // End while
[0065]     Leap_flag = 1;
[0066] Thus flow control among one or more streams of the compressed data stream is achieved.
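The leap-credit accounting in the pseudo code above can be sketched in runnable form. This is a minimal sketch under stated assumptions: durations are in msec, one call models one leap period, and the name `run_leap` is illustrative rather than from the specification.

```python
def run_leap(avg_leap_size: int, aub_interval: int, leap_crd: int = 0):
    """One flow-control leap: accumulate Avg_leap_size of credit, then
    process whole AUBs while enough credit remains (cf. the pseudo code).
    Returns (AUBs processed this leap, remaining leap credit)."""
    leap_crd += avg_leap_size
    aubs_processed = 0
    while leap_crd >= aub_interval:
        # "Process data till next AUB"
        aubs_processed += 1
        leap_crd -= aub_interval
    return aubs_processed, leap_crd

# Example from paragraph [0056]: AUB length 24 msec, average leap size 40 msec.
crd = 0
history = []
for _ in range(3):
    n, crd = run_leap(40, 24, crd)
    history.append(n)
print(history)  # → [1, 2, 2]: accumulated credit allows an extra AUB
```

This reproduces the worked example in the text: the 16 msec conceded per leap accumulates so that a second AUB is processed within the same leap once the credit exceeds one AUB interval.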
AUDIO VIDEO SYNCHRONIZATION:
[0067] In view of the foregoing discussion, during the splicing operations, the time stamp values such as the PTS values of the frame and/or the AUB are used to ensure lip-synchronization among the video stream and one or more audio streams of the compressed data stream. As an example and not as limitation, experiments show that the difference in PTS value for simultaneously encoded data which are to be presented at the same time instance should be within +15 msec lead time for audio and -45 msec lag time for the audio.
[0068] The frequency of SOP arrival depends on the type of the stream. For example, during a suitable splice transaction, an SOP occurs at every AUB and/or audio frame of the audio stream. However, for the video stream, the SOP can be identified at the start of the GOP, which is at the start of the I-frame. Since the frequency of arrival of the audio frame is greater than the arrival of the I-frame within the video stream, the audio SSM may find the SOP earlier than the video SSM 211.
[0069] According to one or more embodiments of the invention, AUB instances of the audio stream are referred to as AUBA(i) and the video AUB instances are denoted as AUBv(i). Accordingly, the audio SOPs are denoted as SOPA(j) and the video SOPs are denoted as SOPv(i). Generally, the SOP occurs at the arrival of the AUB instance. For example, the SOP for the video stream is the arrival of the I-frame and the SOP for the audio stream is the arrival of the audio frame.
[0070] An incorrect selection of the SOP in the video stream and the audio stream may disturb the synchronization of both streams. To synchronize the audio stream and the video stream of the compressed data stream, a common SOP among the video stream and one or more audio streams of the compressed data stream is searched. An anchor stream among the one or more streams of the compressed data stream is selected. The anchor stream (e.g., the video stream of the compressed data stream) has the least frequency of arrival of SOPs (or maximum SOP distance) among the one or more streams of the compressed data stream and thus has a large SOP time period. The selection of the SOP for the other streams of the compressed data stream depends on the selection of the SOP of the anchor stream. Additionally, a second variable is defined and the second variable determines the difference of the PTS duration of the SOP of the anchor stream and the SOP of the other stream. For example, if the video stream is selected as the anchor stream and the audio stream is selected as the other stream, the second variable is defined as eij, and eij = |SOPv(i) - SOPA(j)|. SOPv(i) is the PTS of the ith SOP of the video and SOPA(j) is the PTS of the jth SOP of the audio.
[0071] To provide lip-synchronization among the streams of the compressed data stream, the synchronization module 214 selects that SOP of the other streams where the second variable achieves a minimum value with reference to the selected SOP of the anchor stream.
[0072] Generally, the synchronization module 214 may execute an iterative method and/or the predictive method for evaluating the minimum value of the eij. In one embodiment, the synchronization module 214 may use the combination of the iterative method and the predictive method for evaluating the minimum value of the eij.
[0073] Iterative Method: The iterative method, used by the synchronization module 214, includes the following steps:
[0074] Step 1: Process stream1 till the arrival of the AUB. Determine the existence of the SOP. If the SOP for the stream exists, define it as SOP1(1). The SOP1(1) is the nearest SOP for stream1.
[0075] Step 2: Similarly, perform step 1 for stream2 and determine SOP2(1); thus the splicer controller 204 achieves one AUB at a time in both streams.
[0076] Step 3: Each time the splicer controller 204 gets an SOP in any of the streams, the splicer controller 204 compares all possible SOP combinations of both streams using eij = |SOP1(i) - SOP2(j)|.
[0077] Step 4: If eij <= |(SOP_smallest)/2|, select that combination of i and j of stream1 and stream2. SOP_smallest is the smallest SOP duration among all streams.
[0078] Step 5: Process the next AUB of both streams till the satisfaction of the condition in step 4. This whole process is called scanning of SOP.
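The scanning of steps 3 and 4 can be sketched as follows. This is a simplified illustration, not the specification's implementation: it scans two precomputed lists of SOP PTS values rather than processing AUBs as they arrive, and the name `scan_sops` and the example PTS values (in msec) are assumptions for illustration.

```python
def scan_sops(sops1, sops2, sop_smallest):
    """Compare SOP PTS values of two streams and pick the first pair
    (i, j) whose difference eij = |SOP1(i) - SOP2(j)| satisfies
    eij <= SOP_smallest / 2 (cf. steps 3 and 4)."""
    for i, p1 in enumerate(sops1):
        for j, p2 in enumerate(sops2):
            eij = abs(p1 - p2)
            if eij <= sop_smallest / 2:
                return i, j, eij
    return None  # no match yet: process further AUBs (step 5)

# Illustrative streams: video SOPs every 480 msec, audio SOPs every 24 msec.
video_sops = [0, 480, 960]
audio_sops = [x * 24 for x in range(1, 45)]
print(scan_sops(video_sops, audio_sops, 24))  # → (1, 19, 0)
```

Here the audio SOP at 480 msec coincides exactly with the second video SOP, so the pair is selected with eij = 0.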
[0079] Predictive Method: In one embodiment, the synchronization module 214 uses the predictive method for achieving synchronization among one or more streams of the compressed data stream. The predictive method includes the following steps:
[0080] The predictive method predicts the SOP of the stream having the largest SOP time period. If the stream having the largest SOP time period is an audio stream of the compressed data stream, then PTS_SOP = PTS_current + AUB_duration (always available at the next AUB). If the stream having the largest SOP time period is a video stream, then PTS_SOP = PTS_current + (GOP_Size - Temporal_ref) * AUB_duration. The calculated SOP may be referred to as an anchor SOP. The SOP of the other stream is calculated as PTS_other = round[(SOP_anchor - PTS_current)/SOP_duration] * SOP_duration.
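The two prediction formulas above can be expressed directly in code. This is a minimal sketch with illustrative function names and example values (PTS in msec); the formulas themselves follow paragraph [0080].

```python
def predict_anchor_sop(pts_current, aub_duration, gop_size=None, temporal_ref=None):
    """Predict the next SOP of the anchor (largest-SOP-period) stream.
    Audio (no GOP parameters): next AUB. Video: start of the next GOP."""
    if gop_size is None:  # audio stream: PTS_SOP = PTS_current + AUB_duration
        return pts_current + aub_duration
    # video stream: PTS_SOP = PTS_current + (GOP_Size - Temporal_ref) * AUB_duration
    return pts_current + (gop_size - temporal_ref) * aub_duration

def predict_other_sop(sop_anchor, pts_current, sop_duration):
    """PTS_other = round[(SOP_anchor - PTS_current)/SOP_duration] * SOP_duration:
    the nearest whole multiple of the other stream's SOP period."""
    return round((sop_anchor - pts_current) / sop_duration) * sop_duration

# Video anchor: GOP of 12 frames, temporal reference 2, 40 msec per frame.
anchor = predict_anchor_sop(pts_current=1000, aub_duration=40,
                            gop_size=12, temporal_ref=2)
print(anchor)                                       # → 1400
print(predict_other_sop(anchor, pts_current=0, sop_duration=24))  # → 1392
```

The other stream's predicted SOP (1392 msec) is the multiple of its 24 msec SOP period nearest to the anchor SOP, which is then confirmed iteratively as described below.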
[0081] Iteration + Prediction: In one embodiment, the synchronization module 214 executes the combination of the iterative and the predictive method to achieve lip synchronization among the one or more streams of the compressed data stream. Such a combination may be realized using the following steps:
a. Sort the one or more streams in order of large SOP time period to small SOP time period. In one embodiment, the streams may be given a sorting number (i), wherein the sorting number depends on the SOP time period of the stream.
b. Predict the possible SOP location in all streams including the anchor stream.
c. A determination is made as to whether the PTS_SOP of the Stream (i)
exceeds the PTS_SOP of the anchor stream. If the PTS_SOP of the Stream (i) is
greater than the PTS_SOP of the anchor stream, the corresponding SSM for the
stream (i) stops splicing operation on the stream (i).
d. If the PTS_SOP of the stream (i) is not greater than the PTS_SOP of the
anchor stream, the corresponding stream splicer starts splicing one AUB and/or
the frame of the stream.
e. On identifying an SOP in the anchor stream, the following steps may be executed:
i. Predict multiple SOPs of the other streams nearer to the SOP of the largest-SOP stream.
ii. Select one SOP out of the multiple SOPs for the stream (i), where the selected SOP for the stream (i) is closest to the SOP of the anchor stream.
iii. Convert this duration into a number of AUBs.
iv. Start the AD-feed in the largest-SOP stream.
v. Start normal flow control, with intimation of AUB and SOP at every arrival.
vi. As soon as the SOP is obtained at the probable location in the smaller-SOP streams, start the AD-feed.
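The gating logic of steps a through d can be sketched as follows. This is an illustrative simplification, not the specification's implementation: each stream is modeled as a dictionary with assumed keys ('name', 'sop_period', 'pts_sop'), and the returned action labels are placeholders for the SSM start/stop decisions.

```python
def combined_sync(streams):
    """Sketch of the iteration + prediction flow: sort streams by SOP
    period (step a), take the largest-period stream as the anchor, and
    gate each other stream's SSM against the anchor's predicted SOP
    (steps c and d)."""
    # Step a: sort from largest SOP time period (the anchor) to smallest.
    ordered = sorted(streams, key=lambda s: s['sop_period'], reverse=True)
    anchor = ordered[0]
    actions = []
    for s in ordered[1:]:
        if s['pts_sop'] > anchor['pts_sop']:
            # Step c: the stream has overtaken the anchor; stop splicing.
            actions.append((s['name'], 'stop'))
        else:
            # Step d: still behind the anchor; splice one more AUB/frame.
            actions.append((s['name'], 'splice one AUB'))
    return anchor['name'], actions

streams = [
    {'name': 'audio', 'sop_period': 24, 'pts_sop': 1392},
    {'name': 'video', 'sop_period': 480, 'pts_sop': 1400},
]
print(combined_sync(streams))  # → ('video', [('audio', 'splice one AUB')])
```

With the example values from the predictive method, the video stream (largest SOP period) is the anchor, and the audio SSM is allowed to splice because its predicted SOP has not yet passed the anchor's.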
PCR splicing and PTS/DTS correction:
[0082] According to one embodiment, the splicer controller 204 includes the timing module 216 for processing one or more timing values such as, presentation time stamp (PTS), decoding time stamp (DTS), program clock reference (PCR) and/or the like. For example, the splicer 116 processes one or more PCR packets of the received compressed data stream. The PCR packets include the PCR values and the PCR values are obtained using a 27 MHz clock at the source station 102. Generally, during the splicing operations, the splicer controller 204 modifies the one or more PCR packets of the second compressed data stream in accordance with the received compressed data stream. Additionally, the splicer controller 204 may generate one or more PCR packets for the one or more filler frames and thereby provides the corresponding timing values in accordance with the received compressed data stream.
[0083] According to one or more embodiments of the present invention, the timing module 216 modifies the PTS and/or the DTS values of the second compressed data stream and thereby, the second compressed data stream achieves the same time base as that of the received compressed data stream. For example, the timing module 216 modifies the PTS and/or the DTS values of the video stream of the second compressed data stream in accordance with the PTS and/or the DTS values of the video stream of the received compressed data stream. Accordingly, the PTS and/or the DTS modifications for the compressed frames are achieved using the SSM 211, such as the video SSM 211 and the audio SSM 211.
[0084] Generally, the splicer controller 204 modifies the timing values by applying a linear transformation on the timing values of the compressed data stream. As an example and not as limitation, if A is a vector representation of the PTS values in the received compressed data stream, B is a vector representation of the DTS values of the compressed data stream, and S is a vector of constant shift value whose elements are all equal, the linear transformation A' = A + S represents a vector having modified PTS values and the linear transformation B' = B + S provides the modified DTS values of the compressed data stream. The linear transformations thus applied on the compressed data stream do not change the clock reference characteristics. Additionally, to achieve smooth playing of the compressed data stream, the splicer 116 modifies the PCR values of the compressed data stream. For example, if P represents a vector of the PCR values of the compressed data stream, the linear transformation P' = P + S represents a vector having modified PCR values.
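The constant-shift transformation above can be shown in a few lines. This is a minimal sketch: the function name `shift_timestamps` and the example 90 kHz PTS values are illustrative, not from the specification.

```python
def shift_timestamps(values, shift):
    """Apply the linear transformation A' = A + S to a vector of PTS, DTS,
    or PCR values. A constant shift preserves the clock-reference
    characteristics: differences between timestamps are unchanged."""
    return [v + shift for v in values]

pts = [3600, 7200, 10800]          # illustrative 90 kHz PTS values
shifted = shift_timestamps(pts, 900000)
print(shifted)                      # → [903600, 907200, 910800]
# Inter-timestamp differences (the clock characteristics) are preserved:
print([b - a for a, b in zip(pts, pts[1:])] ==
      [b - a for a, b in zip(shifted, shifted[1:])])  # → True
```

The same shift S would be applied to the PTS, DTS, and PCR vectors of the second stream so that all three remain mutually consistent on the received stream's time base.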
[0085] During the splicing operation, the second compressed data stream replaces partly or wholly the received compressed data stream, and thus the splicer 116 modifies the timing values of the second compressed data stream in accordance with the received compressed data stream. For example, for a splicing interval, the splicer 116 modifies the timing values, such as the PTS and/or the DTS values and the PCR values, of the second compressed data stream in accordance with the received compressed data stream.
[0086] Accordingly, the splicer 116 calculates the timing values of the received compressed data stream. For example, the splicer 116 calculates timing references, such as the time stamp for a base AUB and/or the like, of the received compressed data stream. Additionally, the splicer 116 modifies the timing references of the second compressed data stream using the calculated timing references of the received compressed data stream. Furthermore, the splicer 116 may modify the time stamps in accordance with a modified display order of the compressed frames of the spliced compressed data stream.
[0087] According to various embodiments of the invention, the splicer 116 may support a time-base discontinuity at the splice-in point. Accordingly, the second compressed data stream may start with new time-base values. For example, the PTS values, the DTS values and the PCR values of the second compressed data stream may have the new time-base. The new time-base values may be indicated by setting the discontinuity indicator of the first packet of the second compressed data stream having the new time base. Accordingly, at the splice out-point the discontinuity indicator is set in the first packet of the received compressed data stream, thereby signaling a new time-base.
[0088] Thus the PTS values, DTS values and/or the PCR values are modified.
[0089] Further, various embodiments discussed herein enable a method of doing business, wherein advertisements may be inserted into a broadcast stream such that the inserted advertisements are shown in a relatively smaller geographical area, e.g., an area pertaining to a cable head end. The inserted advertisements may be different from the broadcast advertisements. This has various advantages for advertisers, broadcasters and viewers, and such a method of doing business is disclosed in the provisional patent application "Method and system for broadcasting multimedia data" being co-filed with this application. According to the various embodiments as discussed, the splicer provides several advantages to such a business method, especially in allowing advertisers and/or broadcasters to insert advertisement streams acquired, for example, at a local cable head end instead of the broadcast stream.
[0090] While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof.