
"Innovation In Coding And Decoding Macroblock And Motion Information For Interlaced And Progressive Video"

Abstract: In one aspect, an encoder/decoder receives information (1905) for four field motion vectors for a macroblock in an interlaced frame-coded, forward-predicted picture and processes the macroblock using the four field motion vectors. In another aspect, a decoder decodes skipped macroblocks of an interlaced frame. Skipped macroblocks use exactly one motion vector, have no motion vector differential information, and lack residual information. The skipped macroblock signal indicates one-motion-vector coding. In another aspect, a decoder receives luma motion vector information for plural luma motion vectors for a macroblock and derives a chroma motion vector for each luma motion vector by performing at least one calculation on the luma motion vector information, maintaining a 1:1 ratio of chroma motion vectors to luma motion vectors for the macroblock. For example, the decoder receives four luma field motion vectors for a macroblock and derives four chroma motion vectors for the macroblock.


Patent Information

Application #
Filing Date
06 November 2009
Publication Number
26/2010
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2021-01-29
Renewal Date

Applicants

MICROSOFT CORPORATION
ONE MICROSOFT WAY, REDMOND, WASHINGTON 98052-6399, US

Inventors

1. HOLCOMB, THOMAS
C/O MICROSOFT CORPORATION, INTERNATIONAL PATENTS, ONE MICROSOFT WAY, REDMOND, WASHINGTON 98052-6399, US
2. HSU, POHSIANG
C/O MICROSOFT CORPORATION, INTERNATIONAL PATENTS, ONE MICROSOFT WAY, REDMOND, WASHINGTON 98052-6399, US
3. LIN, CHIH-LUNG
C/O MICROSOFT CORPORATION, INTERNATIONAL PATENTS, ONE MICROSOFT WAY, REDMOND, WASHINGTON 98052-6399, US
4. SRINIVASAN, SRIDHAR
C/O MICROSOFT CORPORATION, INTERNATIONAL PATENTS, ONE MICROSOFT WAY, REDMOND, WASHINGTON 98052-6399, US

Specification

NEW DIVISIONAL OF INDIAN PATENT APPLICATION NO. 594/DELNP/2006

COPYRIGHT AUTHORIZATION A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.

TECHNICAL FIELD Techniques and tools for progressive and interlaced video coding and decoding are described. For example, an encoder signals macroblock information for macroblocks in an interlaced frame coded picture. As another example, an encoder/decoder calculates and decodes luma and chroma motion vectors for an interlaced frame coded picture.

BACKGROUND Digital video consumes large amounts of storage and transmission capacity. A typical raw digital video sequence includes 15 or 30 pictures per second. Each picture can include tens or hundreds of thousands of pixels.

A decoder also can accommodate space and time restrictions by opting not to decode or display B-frames, since B-frames are not generally used as reference frames. While macroblocks in forward-predicted frames (e.g., P-frames) have only one directional mode of prediction (forward, from previous I- or P-frames), macroblocks in B-frames can be predicted using five different prediction modes: forward, backward, direct, interpolated and intra. The encoder selects and signals different prediction modes in the bit stream. Forward mode is similar to conventional P-frame prediction. In forward mode, a macroblock is derived from a temporally previous anchor. In backward mode, a macroblock is derived from a temporally subsequent anchor. Macroblocks predicted in direct or interpolated modes use both forward and backward anchors for prediction.

V. Signaling Macroblock Information in a Previous WMV Encoder and Decoder In the encoder and decoder, macroblocks in interlaced P-frames can be one of three possible types: frame-coded, field-coded and skipped.
The macroblock type is indicated by a multi-element combination of frame-level and macroblock-level syntax elements. For interlaced P-frames, the frame-level element INTRLCF indicates the mode used to code the macroblocks in that frame. If INTRLCF = 0, all macroblocks in the frame are frame-coded. If INTRLCF = 1, the macroblocks may be either frame-coded or field-coded. The INTRLCMB element is present in the frame layer when INTRLCF = 1. INTRLCMB is a bitplane-coded array that indicates the field/frame coding status for each macroblock in the picture. The decoded bitplane represents the interlaced status for each macroblock as an array of 1-bit values. A value of 0 for a particular bit indicates that the corresponding macroblock is coded in frame mode. A value of 1 indicates that the corresponding macroblock is coded in field mode. For frame-coded macroblocks, the macroblock-level MVDATA element is associated with all blocks in the macroblock. MVDATA signals whether the blocks in the macroblock are intra-coded or inter-coded. If they are inter-coded, MVDATA also indicates the motion vector differential. For field-coded macroblocks, a TOPMVDATA element is associated with the top field blocks in the macroblock and a BOTMVDATA element is associated with the bottom field blocks in the macroblock. TOPMVDATA and BOTMVDATA are sent at the first block of each field. TOPMVDATA indicates whether the top field blocks are intra-coded or inter-coded. Likewise, BOTMVDATA indicates whether the bottom field blocks are intra-coded or inter-coded. For inter-coded blocks, TOPMVDATA and BOTMVDATA also indicate motion vector differential information. The CBPCY element indicates coded block pattern (CBP) information for luminance and chrominance components in a macroblock. The CBPCY element also indicates which fields have motion vector data elements present in the bitstream. CBPCY and the motion vector data elements are used to specify whether blocks have AC coefficients.
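As an illustration, the INTRLCF/INTRLCMB signaling described above maps to per-macroblock coding modes as in the following Python sketch. The function name and the flattened list of bitplane bits are illustrative assumptions; actual INTRLCMB decoding uses the bitplane coding modes signaled in the bitstream.

```python
def macroblock_coding_modes(intrlcf, intrlcmb_bits, num_macroblocks):
    """Map INTRLCF and a decoded INTRLCMB bitplane (one bit per
    macroblock, raster order assumed) to 'frame'/'field' coding modes."""
    if intrlcf == 0:
        # INTRLCF = 0: every macroblock in the frame is frame-coded,
        # and no INTRLCMB bitplane is present.
        return ['frame'] * num_macroblocks
    # INTRLCF = 1: bit value 0 -> frame-coded, bit value 1 -> field-coded.
    assert len(intrlcmb_bits) == num_macroblocks
    return ['field' if bit else 'frame' for bit in intrlcmb_bits]
```

For instance, `macroblock_coding_modes(1, [0, 1, 1, 0], 4)` marks the second and third macroblocks as field-coded and the others as frame-coded.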
CBPCY is present for a frame-coded macroblock of an interlaced P-frame if the "last" value decoded from MVDATA indicates that there are data following the motion vector to decode. If CBPCY is present, it decodes to a 6-bit field, one bit for each of the four Y blocks, one bit for both U blocks (top field and bottom field), and one bit for both V blocks (top field and bottom field). CBPCY is always present for a field-coded macroblock. CBPCY and the two field motion vector data elements are used to determine the presence of AC coefficients in the blocks of the macroblock. The meaning of CBPCY is the same as for frame-coded macroblocks for bits 1, 3, 4 and 5. That is, they indicate the presence or absence of AC coefficients in the right top field Y block, right bottom field Y block, top/bottom U blocks, and top/bottom V blocks, respectively. For bit positions 0 and 2, the meaning is slightly different. A 0 in bit position 0 indicates that TOPMVDATA is not present and the motion vector predictor is used as the motion vector for the top field blocks. It also indicates that the left top field block does not contain any nonzero coefficients. A 1 in bit position 0 indicates that TOPMVDATA is present. TOPMVDATA indicates whether the top field blocks are inter or intra and, if they are inter, also indicates the motion vector differential. If the "last" value decoded from TOPMVDATA decodes to 1, then no AC coefficients are present for the left top field block; otherwise, there are nonzero AC coefficients for the left top field block. Similarly, the above rules apply to bit position 2 for BOTMVDATA and the left bottom field block.

VI. Skipped Macroblocks in a Previous WMV Encoder and Decoder The encoder and decoder use skipped macroblocks to reduce bitrate. For example, the encoder signals skipped macroblocks in the bitstream.
When the decoder receives information (e.g., a skipped macroblock flag) in the bitstream indicating that a macroblock is skipped, the decoder skips decoding residual block information for the macroblock. Instead, the decoder uses corresponding pixel data from a co-located or motion compensated (with a motion vector predictor) macroblock in a reference frame to reconstruct the macroblock. The encoder and decoder select between multiple coding/decoding modes for encoding and decoding the skipped macroblock information. For example, skipped macroblock information is signaled at the frame level of the bitstream (e.g., in a compressed bitplane) or at the macroblock level (e.g., with one "skip" bit per macroblock). For bitplane coding, the encoder and decoder select between different bitplane coding modes. One previous encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error. Another previous encoder and decoder define a skipped macroblock as a predicted macroblock with zero motion and zero residual error. For more information on skipped macroblocks and bitplane coding, see U.S. Patent Application No. 10/321,415, entitled "Skip Macroblock Coding," filed December 16, 2002.

VII. Chroma Motion Vectors in a Previous WMV Encoder and Decoder Chroma motion vector derivation is an important aspect of video coding and decoding. Accordingly, software for a previous WMV encoder and decoder uses rounding and sub-sampling to derive chrominance (or "chroma") motion vectors from luminance (or "luma") motion vectors.

A. Luma Motion Vector Reconstruction A previous WMV encoder and decoder reconstruct motion vectors for 1MV and 4MV macroblocks in progressive frames, and frame-coded or field-coded macroblocks in interlaced frames. A luma motion vector is reconstructed by adding a motion vector differential to a motion vector predictor.
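The reconstruction rule just stated (motion vector = predictor + differential, per component) can be sketched as follows. The function names are illustrative, not taken from the WMV software.

```python
def reconstruct_mv(predictor, differential):
    """Decoder side: reconstruct a luma motion vector by adding the
    decoded motion vector differential to the motion vector predictor."""
    return (predictor[0] + differential[0], predictor[1] + differential[1])

def mv_differential(mv, predictor):
    """Encoder side: the differential actually coded in the bitstream."""
    return (mv[0] - predictor[0], mv[1] - predictor[1])
```

A round trip recovers the original vector: for a motion vector (5, -3) and predictor (4, -1), the coded differential is (1, -2), and adding it back to the predictor yields (5, -3).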
In 1MV macroblocks in progressive frames and in frame-coded macroblocks in interlaced frames, a single luma motion vector applies to the four blocks that make up the luma component of the macroblock. In 4MV macroblocks in progressive frames, each of the inter-coded luma blocks in a macroblock has its own luma motion vector. Therefore, there are up to four luma motion vectors for each 4MV macroblock, depending on the number of inter-coded blocks in the macroblock. In field-coded macroblocks in interlaced frames, there are two luma motion vectors - one for each field.

B. Derivation and Reconstruction of Chroma Motion Vectors The encoder and decoder use a 4:2:0 macroblock format for progressive frames. The frame data includes a luma ("Y") component and chroma components ("U" and "V"). Each macroblock includes four 8x8 luminance blocks (at times treated as one 16x16 macroblock) and two 8x8 chrominance blocks. Figure 16 shows a 4:2:0 YUV sampling grid. The 4:2:0 YUV sampling grid of Figure 16 shows a spatial relationship between luma and chroma samples of a macroblock. The resolution of the chroma samples is half the resolution of the luma samples in both the horizontal (x) and vertical (y) directions. Thus, in order to derive distances on the chroma grid from corresponding distances on the luma grid, a previous WMV encoder and decoder divides luma motion vector components by 2. This is the basis for a down-sampling step in deriving chroma motion vectors from luma motion vectors in progressive frames. In the encoder and decoder, chroma motion vectors in progressive frames are reconstructed in two steps. First, the nominal chroma motion vector is obtained by combining and scaling the luma motion vectors appropriately. Second, rounding is optionally performed after scaling to reduce decoding time.
For example, in a 1MV macroblock, chroma motion vector components (cmv_x and cmv_y) are derived from luma motion vector components (lmv_x and lmv_y) by scaling the luma components according to the following pseudo code:

    // s_RndTbl[0] = 0, s_RndTbl[1] = 0, s_RndTbl[2] = 0, s_RndTbl[3] = 1
    cmv_x = (lmv_x + s_RndTbl[lmv_x & 3]) >> 1
    cmv_y = (lmv_y + s_RndTbl[lmv_y & 3]) >> 1

Scaling is performed with a rounding table array (s_RndTbl[ ]) such that half-pixel offsets are preferred over quarter-pixel offsets. Additional rounding can be performed after the scaling operation to reduce decoding time. In a 4MV macroblock, the encoder and decoder derive a chroma motion vector from the motion information in the four luma blocks according to the pseudocode 1700 of Figure 17. As shown in Figure 17, the encoder and decoder derive chroma motion vector components using median prediction or averaging of luma motion vectors of inter-coded blocks in the macroblock. In the special case where three or more of the blocks are intra-coded, the chroma blocks are also intra-coded. The encoder and decoder perform additional rounding if the sequence-level bit FASTUVMC = 1. The rounding signaled by FASTUVMC favors half-pixel and integer pixel positions for chroma motion vectors, which can speed up encoding. A previous WMV encoder and decoder use a 4:1:1 macroblock format for interlaced frames. For interlaced frames, the frame data also includes a luma ("Y") component and chroma components ("U" and "V"). However, in a 4:1:1 macroblock format, the resolution of the chroma samples is one-quarter the resolution of the luma samples in the horizontal direction, and full resolution in the vertical direction. Thus, in order to derive distances on the chroma grid from corresponding distances on the luma grid, a previous WMV encoder and decoder divides horizontal luma motion vector components by 4.
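A minimal runnable rendering of the 1MV scaling pseudo code earlier in this section, assuming quarter-pel luma motion vector components and nonnegative values (negative-component handling in the actual software may differ):

```python
# Rounding table from the pseudo code: only the value for remainder 3 is
# nonzero, so half-pixel offsets are preferred over quarter-pixel offsets.
S_RND_TBL = [0, 0, 0, 1]

def derive_chroma_mv_1mv(lmv_x, lmv_y):
    """Derive 4:2:0 chroma MV components from the single luma MV of a
    1MV macroblock: add a table-based rounding term, then halve."""
    cmv_x = (lmv_x + S_RND_TBL[lmv_x & 3]) >> 1
    cmv_y = (lmv_y + S_RND_TBL[lmv_y & 3]) >> 1
    return cmv_x, cmv_y
```

For example, a luma component of 7 yields a chroma component of 4 (the rounding term pushes the halved value up), while a luma component of 5 yields 2.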
For a frame-coded macroblock, one chroma motion vector corresponding to the single luminance motion vector is derived. For a field-coded macroblock, two chrominance motion vectors are derived corresponding to the two luminance motion vectors - one for the top field and one for the bottom field. In a previous WMV encoder and decoder, the rules for deriving chroma motion vectors in interlaced frames are the same for both field-coded macroblocks and frame-coded macroblocks. The x-component of the chrominance motion vector is scaled (down-sampled) by a factor of four, while the y-component of the chrominance motion vector remains the same as the luminance motion vector. The scaled x-component of the chrominance motion vector is also rounded to a neighboring quarter pixel location. Chroma motion vectors are reconstructed according to the pseudo-code below, in which cmv_x and cmv_y denote the chroma motion vector components, and lmv_x and lmv_y denote the corresponding luminance motion vector components.

    frac_x4 = (lmv_x << 2) % 16;
    int_x4 = (lmv_x << 2) - frac_x4;
    ChromaMvRound[16] = {0, 0, 0, .25, .25, .25, .5, .5, .5, .5, .5, .75, .75, .75, 1, 1};
    cmv_y = lmv_y;
    cmv_x = Sign(lmv_x) * ((int_x4 >> 2) + ChromaMvRound[frac_x4]);

VIII. Standards for Video Compression and Decompression Several international standards relate to video compression and decompression. These standards include the Motion Picture Experts Group ["MPEG"] 1, 2, and 4 standards and the H.261, H.262 (another title for MPEG-2), H.263 and H.264 (also called JVT/AVC) standards from the International Telecommunication Union ["ITU"]. These standards specify aspects of video decoders and formats for compressed video information. Directly or by implication, they also specify certain encoder details, but other encoder details are not specified. These standards use (or support the use of) different combinations of intraframe and interframe decompression and compression. A.
Motion Estimation/Compensation in the Standards One of the primary methods used to achieve data compression of digital video sequences in the international standards is to reduce the temporal redundancy between pictures. These popular compression schemes (MPEG-1, MPEG-2, MPEG-4, H.261, H.263, etc.) use motion estimation and compensation. For example, a current frame is divided into uniform square regions (e.g., blocks and/or macroblocks). A matching region for each current region is specified by sending motion vector information for the region. The motion vector indicates the location of the region in a previously coded (and reconstructed) reference frame that is to be used as a predictor for the current region. A pixel-by-pixel difference, called the error signal, between the current region and the region in the reference frame is derived. This error signal usually has lower entropy than the original signal. Therefore, the information can be encoded at a lower rate. As in WMV 8 and 9 encoders and decoders, since a motion vector value is often correlated with spatially surrounding motion vectors, compression of the data used to represent the motion vector information can be achieved by coding the differential between the current motion vector and a motion vector predictor that is based upon previously coded, neighboring motion vectors. Some international standards describe motion estimation and compensation in interlaced video frames. For example, Section 7.6.1 of the MPEG-2 standard describes a dual-prime encoding mode. In dual-prime encoding, only one motion vector is encoded (in its full format) in the bitstream together with a differential motion vector. In the case of field pictures, two motion vectors are derived from this information. The two derived motion vectors are used to form predictions from two reference fields (one top field and one bottom field) which are averaged to form the final prediction.
In the case of frame pictures, this process is repeated for the two fields so that a total of four field predictions are made. The May 28, 1998 committee draft of the MPEG-4 standard describes motion compensation for interlaced and progressive video. Section 7.5.5 describes motion compensation in progressive video object planes (VOPs). Candidate motion vectors are collected from neighboring macroblocks or blocks. Candidate motion vectors from outside the VOP are treated as not valid. If only one candidate motion vector is not valid, it is set to 0. If only two are not valid, they are set to equal the third motion vector. If all three are not valid, they are set to 0. Median filtering is performed on the three candidates to calculate a predictor. Section 7.6.2 of the committee draft of the MPEG-4 standard describes motion compensation for interlaced video. For example, Section 7.6.2.1 describes field-predicted macroblocks with one motion vector for each field, and frame-predicted macroblocks with either one motion vector per block or one motion vector per macroblock. Candidate motion vectors are collected from neighboring macroblocks or blocks, with the predictor selected by median filtering. Section 8.4 of the draft JVT/AVC standard also describes motion compensation. The components of a motion vector of a current block are predicted using median prediction. (Prediction does not take place across boundaries of macroblocks that do not belong to the same slice.) First, the motion vector values and reference pictures of three candidate neighboring blocks are determined. If the top right block is outside the current picture or slice or not available due to decoding order, the motion vector and reference picture of the top right block are considered equal to those of the top left block. If the top, top right, and top left blocks are all outside the current picture or slice, their motion vectors and reference pictures are considered equal to those of the left block.
In other cases, the motion vector value for a candidate predictor block that is intra-coded or outside the current picture or slice is considered to be 0, and the reference picture is considered to be different from that of the current block. Once the motion vector values and reference pictures of the candidate predictors have been determined, if only one of the left, top, and top right blocks has the same reference picture as the current block, the predicted motion vector for the current block is equal to the motion vector value of the block with the same reference picture. Otherwise, each component of the predicted motion vector value for the current block is calculated as the median of the corresponding candidate motion vector component values of the left, top and top right blocks. Section 8.4 also describes macroblock-adaptive frame/field coding. In interlaced frames, macroblocks are grouped into macroblock pairs (top and bottom). Macroblock pairs can be field-coded or frame-coded. In a frame-coded macroblock pair, the macroblock pair is decoded as two frame-coded macroblocks. In a field-coded macroblock pair, the top macroblock consists of the top-field lines in the macroblock pair, and the bottom macroblock consists of the bottom-field lines in the macroblock pair. If the current block is in frame coding mode, the candidate motion vectors of the neighboring blocks are also frame-based. If the current block is in field coding mode, the candidate motion vectors of the neighboring blocks are also field-based, in the same field parity.

B. Signaling Field- or Frame-coded Macroblocks in the Standards Some international standards describe signaling of field/frame coding type (e.g., field-coding or frame-coding) for macroblocks in interlaced pictures. Draft JVT-d157 of the JVT/AVC standard describes the mb_field_decoding_flag syntax element, which is used to signal whether a macroblock pair is decoded in frame mode or field mode in interlaced P-frames.
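The Section 8.4 median-based prediction rules described above can be sketched in Python as follows. Availability and intra handling are folded into per-candidate same-reference flags, a simplification of the standard's full rules; the function names are illustrative.

```python
def median3(a, b, c):
    """Median of three values: sum minus min minus max."""
    return a + b + c - min(a, b, c) - max(a, b, c)

def predict_mv(left, top, topright, same_ref_flags):
    """If exactly one of the three candidates has the same reference
    picture as the current block, use its motion vector directly;
    otherwise take the per-component median of the three candidates."""
    candidates = [left, top, topright]
    same = [i for i, flag in enumerate(same_ref_flags) if flag]
    if len(same) == 1:
        return candidates[same[0]]
    return (median3(left[0], top[0], topright[0]),
            median3(left[1], top[1], topright[1]))
```

For instance, with candidates (1, 1), (3, 5) and (2, 9) all sharing the current block's reference picture, the predictor is the component-wise median (2, 5); if only the top candidate shares the reference picture, the predictor is (3, 5).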
Section 7.3.4 describes a bitstream syntax where mb_field_decoding_flag is sent as an element of slice data in cases where a sequence parameter (mb_frame_field_adaptive_flag) indicates switching between frame and field decoding in macroblocks and a slice header element (pic_structure) identifies the picture structure as a progressive picture or an interlaced frame picture. The May 28, 1998 committee draft of MPEG-4 describes the dct_type syntax element, which is used to signal whether a macroblock is frame DCT coded or field DCT coded. According to Sections 6.2.7.3 and 6.3.7.3, dct_type is a macroblock-layer element that is only present in the MPEG-4 bitstream in interlaced content where the macroblock has a non-zero coded block pattern or is intra-coded. In MPEG-2, the dct_type element indicates whether a macroblock is frame DCT coded or field DCT coded. MPEG-2 also describes a picture coding extension element frame_pred_frame_dct. When frame_pred_frame_dct is set to "1", only frame DCT coding is used in interlaced frames. The condition dct_type = 0 is "derived" when frame_pred_frame_dct = 1 and the dct_type element is not present in the bitstream.

C. Skipped Macroblocks in the Standards Some international standards use skipped macroblocks. For example, draft JVT-d157 of the JVT/AVC standard defines a skipped macroblock as "a macroblock for which no data is coded other than an indication that the macroblock is to be decoded as skipped." Similarly, the committee draft of MPEG-4 states, "A skipped macroblock is one for which no information is transmitted."

D. Chroma Motion Vectors in the Standards One of the primary methods used to achieve data compression of digital video sequences in the international standards is to reduce the temporal redundancy between pictures. These popular compression schemes (MPEG-1, MPEG-2, MPEG-4, H.261, H.263, etc.) use motion estimation and compensation.
For example, a current frame is divided into uniform square regions (e.g., blocks and/or macroblocks) of luma and chroma information. A matching region for each current region is specified by sending motion vector information for the region. For example, a luma motion vector indicates the location of the region of luma samples in a previously coded (and reconstructed) frame that is to be used as a predictor for the current region. A pixel-by-pixel difference, called the error signal, between the current region and the region in the reference frame is derived. This error signal usually has lower entropy than the original signal. Therefore, the information can be encoded at a lower rate. As in previous WMV encoders and decoders, since a motion vector value is often correlated with spatially surrounding motion vectors, compression of the data used to represent the motion vector information can be achieved by coding the differential between the current motion vector and a motion vector predictor based upon previously coded, neighboring motion vectors. Typically, chroma motion vectors are derived from luma motion vectors to avoid overhead associated with separately calculating and encoding chroma motion vectors. Some international standards describe deriving chroma motion vectors from luma motion vectors. Section 7.6.3.7 of the MPEG-2 standard describes deriving chroma motion vectors from luma motion vectors in a 4:2:0 macroblock format by dividing each of the horizontal and vertical luma motion vector components by two to scale the chroma motion vector components appropriately. In a 4:2:2 format, the chroma information is sub-sampled only in the horizontal direction, so the vertical component is not divided by two. In a 4:4:4 format, chroma information is sampled at the same resolution as luma information, so neither component is divided by two.
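The per-format component scaling just described can be sketched as follows. The truncating `// 2` is a simplification; MPEG-2 Section 7.6.3.7 defines the exact division and rounding.

```python
def chroma_mv_for_format(lmv_x, lmv_y, chroma_format):
    """Scale luma MV components to chroma per chroma format: 4:2:0
    halves both components, 4:2:2 only the horizontal one (chroma is
    sub-sampled horizontally only), 4:4:4 neither."""
    if chroma_format == '4:2:0':
        return lmv_x // 2, lmv_y // 2
    if chroma_format == '4:2:2':
        return lmv_x // 2, lmv_y
    if chroma_format == '4:4:4':
        return lmv_x, lmv_y
    raise ValueError('unknown chroma format: ' + chroma_format)
```

For a luma motion vector (6, 4), this yields (3, 2) in 4:2:0, (3, 4) in 4:2:2, and (6, 4) unchanged in 4:4:4.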
Annex F of the H.263 standard describes an advanced prediction mode that uses four motion vectors per macroblock for prediction. In the advanced prediction mode, each of the four motion vectors is used for all pixels in one of the four luminance blocks in the macroblock. The motion vector for both chrominance blocks (in a 4:2:0 format) is derived by calculating the sum of the four luminance motion vector components in each direction and dividing by eight. Similarly, section 7.5.5 of the May 28, 1998 committee draft of the MPEG-4 standard describes deriving a motion vector MVDCHR for both chrominance blocks in a 4:2:0 format by calculating the sum of K motion vectors corresponding to K 8x8 blocks and dividing the sum by 2*K. Prediction for chrominance is obtained by applying the motion vector MVDCHR to all pixels in the two chrominance blocks. Section 8.4.1.4 of draft JVT-d157 of the JVT/AVC standard also describes deriving chroma motion vectors from luma motion vectors in a 4:2:0 macroblock format by dividing each of the horizontal and vertical luma motion vector components by two. Section 7.4.5 of draft JVT-d157 describes macroblocks with different luma block sizes (e.g., 16x16, 16x8, 8x16 and 8x8) and associated chroma blocks. For example, for P-slices and SP-slices, "a motion vector is provided for each NxM luma block and the associated chroma blocks."

E. Limitations of the Standards These international standards are limited in several important ways. For example, draft JVT-d157 of the JVT/AVC standard and the committee draft of the MPEG-4 standard describe using median prediction to calculate motion vector predictors even when one or more candidate motion vectors are set to 0. Using median prediction when candidates are set to 0 often produces skewed motion vector predictors. The standards also do not describe predicting macroblocks with four coded field motion vectors, which places restrictive limitations on the spatial adaptivity of motion estimation and compensation.
Furthermore, draft JVT-d157 performs interlaced coding and decoding through the use of macroblock pairs rather than through individual interlaced macroblocks, which limits the adaptivity of field-coding/frame-coding within a picture. As another example, although the standards provide for signaling of macroblock types, field/frame coding type information is signaled separately from motion compensation types (e.g., field prediction or frame prediction, one motion vector or multiple motion vectors, etc.). As another example, although some international standards allow for bitrate savings by skipping certain macroblocks, the skipped macroblock condition in these standards only indicates that no further information for the macroblock is encoded, and fails to provide other potentially valuable information about the macroblock. As another example, several standards use chroma motion vectors that do not sufficiently represent local changes in chroma motion. Another problem is inefficient rounding mechanisms in chroma motion vector derivation, especially for field-coded content. Given the critical importance of video compression and decompression to digital video, it is not surprising that video compression and decompression are richly developed fields. Whatever the benefits of previous video compression and decompression techniques, however, they do not have the advantages of the following techniques and tools.

SUMMARY In summary, the detailed description is directed to various techniques and tools for encoding and decoding video frames. Described embodiments include, for example, a decoder that receives luma motion vector information for four luma field motion vectors for a macroblock and derives four chroma motion vectors for the macroblock. The deriving can comprise sub-sampling and/or rounding at least a portion of the luma motion vector information using a field-based rounding table.
In another aspect, a decoder derives a chroma motion vector associated with at least part of a macroblock in an interlaced frame coded picture (e.g., interlaced P-frame, interlaced B-frame) for each of one or more luma motion vectors, based at least in part on motion vector information for the one or more luma motion vectors. The decoder is operable to decode macroblocks predicted using four luma field motion vectors. For example, the decoder derives four chroma field motion vectors, such as by applying a field-based rounding lookup table to at least a portion of the luma motion vector information. In another aspect, a decoder derives a chroma motion vector associated with at least part of a macroblock for each of one or more luma field motion vectors by rounding a luma field motion vector component using a field-based rounding table (e.g., an integer array) and sub-sampling the luma field motion vector component. The various techniques and tools can be used in combination or independently. Additional features and advantages will be made apparent from the following detailed description of different embodiments that proceeds with reference to the accompanying drawings.
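As a structural sketch of the summarized derivation, the following Python illustrates one chroma field motion vector derived per luma field motion vector by table-based rounding followed by sub-sampling. The table contents and the divide-by-two sub-sampling are placeholders, not the values of the described embodiments (the actual pseudocode and table appear in Figures 34-35).

```python
# Placeholder field-based rounding table; the actual integer array used by
# the described embodiments is given in the detailed description, not here.
FIELD_RND_TBL = [0, 0, 0, 1]

def derive_chroma_field_mv(lmv_x, lmv_y):
    """One chroma field MV from one luma field MV: round each component
    via the table, then sub-sample (halving here is an assumption)."""
    cmv_x = (lmv_x + FIELD_RND_TBL[lmv_x & 3]) >> 1
    cmv_y = (lmv_y + FIELD_RND_TBL[lmv_y & 3]) >> 1
    return cmv_x, cmv_y

def derive_four_chroma_field_mvs(luma_field_mvs):
    """Maintain the 1:1 ratio: four luma field MVs in, four chroma MVs out."""
    return [derive_chroma_field_mv(x, y) for (x, y) in luma_field_mvs]
```

The key structural point is the 1:1 mapping: a macroblock with four luma field motion vectors yields exactly four chroma field motion vectors.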
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a diagram showing motion estimation in a video encoder according to the prior art. Figure 2 is a diagram showing block-based compression for an 8x8 block of prediction residuals in a video encoder according to the prior art. Figure 3 is a diagram showing block-based decompression for an 8x8 block of prediction residuals in a video encoder according to the prior art. Figure 4 is a diagram showing an interlaced frame according to the prior art. Figures 5A and 5B are diagrams showing locations of macroblocks for candidate motion vector predictors for a 1MV macroblock in a progressive P-frame according to the prior art. Figures 6A and 6B are diagrams showing locations of blocks for candidate motion vector predictors for a 1MV macroblock in a mixed 1MV/4MV progressive P-frame according to the prior art. Figures 7A, 7B, 8A, 8B, 9, and 10 are diagrams showing the locations of blocks for candidate motion vector predictors for a block at various positions in a 4MV macroblock in a mixed 1MV/4MV progressive P-frame according to the prior art.
Figure 11 is a diagram showing candidate motion vector predictors for a current frame-coded macroblock in an interlaced P-frame according to the prior art. Figures 12 and 13 are diagrams showing candidate motion vector predictors for a current field-coded macroblock in an interlaced P-frame according to the prior art. Figure 14 is a code diagram showing pseudo-code for performing a median-of-3 calculation according to the prior art. Figure 15 is a diagram showing a B-frame with past and future reference frames according to the prior art. Figure 16 is a diagram showing a 4:2:0 YUV sampling grid according to the prior art. Figure 17 is a code diagram showing pseudocode for deriving a chroma motion vector from motion information in four luma blocks of a 4MV macroblock according to the prior art. Figure 18 is a block diagram of a suitable computing environment in conjunction with which several described embodiments may be implemented. Figure 19 is a block diagram of a generalized video encoder system in conjunction with which several described embodiments may be implemented. Figure 20 is a block diagram of a generalized video decoder system in conjunction with which several described embodiments may be implemented. Figure 21 is a diagram of a macroblock format used in several described embodiments. Figure 22A is a diagram of part of an interlaced video frame, showing alternating lines of a top field and a bottom field. Figure 22B is a diagram of the interlaced video frame organized for encoding/decoding as a frame, and Figure 22C is a diagram of the interlaced video frame organized for encoding/decoding as fields. Figure 23 is a diagram showing motion vectors for luminance blocks and derived motion vectors for chrominance blocks in a 2 Field MV macroblock of an interlaced P-frame.
Figure 24 is a diagram showing different motion vectors for each of four luminance blocks, and derived motion vectors for each of four chrominance sub-blocks, in a 4 Frame MV macroblock of an interlaced P-frame. Figure 25 is a diagram showing motion vectors for luminance blocks and derived motion vectors for chrominance blocks in a 4 Field MV macroblock of an interlaced P-frame. Figures 26A-26B are diagrams showing candidate predictors for a current macroblock of an interlaced P-frame. Figure 27 is a flow chart showing a technique for processing a macroblock having four field motion vectors in an interlaced P-frame. Figure 28 is a flow chart showing a technique for calculating motion vector predictors for a field-coded macroblock based on the polarity of candidate motion vectors. Figure 29 is a flow chart showing a technique for determining whether to perform a median operation when calculating a motion vector predictor for a field motion vector. Figure 30 is a flow chart showing a technique for determining whether to skip coding of particular macroblocks in an interlaced predicted frame. Figure 31 is a flow chart showing a technique for decoding jointly coded motion compensation type information and field/frame coding type information for a macroblock in an interlaced P-frame. Figure 32 is a flow chart showing a technique for deriving one chroma motion vector for each of plural luma motion vectors. Figure 33 is a flow chart showing a technique for using a field-based rounding lookup table to derive chroma field motion vectors. Figure 34 is a code diagram showing pseudo-code for deriving chroma motion vector components from luma motion vector components using a field-based rounding lookup table. Figure 35 is a diagram showing field designations for values in a field-based rounding lookup table. Figure 36 is a diagram showing an entry-point-layer bitstream syntax in a combined implementation.
Figure 37 is a diagram showing a frame-layer bitstream syntax for interlaced P-frames in a combined implementation. Figure 38 is a diagram showing a frame-layer bitstream syntax for interlaced B-frames in a combined implementation. Figure 39 is a diagram showing a frame-layer bitstream syntax for interlaced P-fields or B-fields in a combined implementation. Figure 40 is a diagram showing a macroblock-layer bitstream syntax for macroblocks of interlaced P-frames in a combined implementation. Figure 41 is a code listing showing pseudo-code for collecting candidate motion vectors for 1MV macroblocks in an interlaced P-frame in a combined implementation. Figures 42, 43, 44, and 45 are code listings showing pseudo-code for collecting candidate motion vectors for 4 Frame MV macroblocks in an interlaced P-frame in a combined implementation. Figures 46 and 47 are code listings showing pseudo-code for collecting candidate motion vectors for 2 Field MV macroblocks in an interlaced P-frame in a combined implementation. Figures 48, 49, 50, and 51 are code listings showing pseudo-code for collecting candidate motion vectors for 4 Field MV macroblocks in an interlaced P-frame in a combined implementation. Figure 52 is a code listing showing pseudo-code for computing motion vector predictors for frame motion vectors in an interlaced P-frame in a combined implementation. Figure 53 is a code listing showing pseudo-code for computing motion vector predictors for field motion vectors in an interlaced P-frame in a combined implementation. Figures 54A and 54B are code listings showing pseudo-code for decoding a motion vector differential for interlaced P-frames in a combined implementation. Figures 55A-55C are diagrams showing tiles for Norm-6 and Diff-6 bitplane coding modes in a combined implementation.

DETAILED DESCRIPTION

The present application relates to techniques and tools for efficient compression and decompression of interlaced and progressive video.
In various described embodiments, a video encoder and decoder incorporate techniques for encoding and decoding interlaced and progressive video, and corresponding signaling techniques for use with a bitstream format or syntax comprising different layers or levels (e.g., sequence level, frame level, field level, macroblock level, and/or block level). Various alternatives to the implementations described herein are possible. For example, techniques described with reference to flowchart diagrams can be altered by changing the ordering of stages shown in the flowcharts, by repeating or omitting certain stages, etc. As another example, although some implementations are described with reference to specific macroblock formats, other formats also can be used. Further, techniques and tools described with reference to forward prediction may also be applicable to other types of prediction. The various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools. Some techniques and tools described herein can be used in a video encoder or decoder, or in some other system not specifically limited to video encoding or decoding.

I. Computing Environment

Figure 18 illustrates a generalized example of a suitable computing environment 1800 in which several of the described embodiments may be implemented. The computing environment 1800 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments. With reference to Figure 18, the computing environment 1800 includes at least one processing unit 1810 and memory 1820. In Figure 18, this most basic configuration 1830 is included within a dashed line. The processing unit 1810 executes computer-executable instructions and may be a real or a virtual processor.
In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory 1820 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 1820 stores software 1880 implementing a video encoder or decoder with one or more of the described techniques and tools.

A computing environment may have additional features. For example, the computing environment 1800 includes storage 1840, one or more input devices 1850, one or more output devices 1860, and one or more communication connections 1870. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 1800. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1800, and coordinates activities of the components of the computing environment 1800. The storage 1840 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1800. The storage 1840 stores instructions for the software 1880 implementing the video encoder or decoder. The input device(s) 1850 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1800. For audio or video encoding, the input device(s) 1850 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 1800.
The output device(s) 1860 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1800. The communication connection(s) 1870 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.

The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 1800, computer-readable media include memory 1820, storage 1840, communication media, and combinations of any of the above. The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments.
Computer-executable instructions for program modules may be executed within a local or distributed computing environment. For the sake of presentation, the detailed description uses terms like "estimate," "compensate," "predict," and "apply" to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.

II. Generalized Video Encoder and Decoder

Figure 19 is a block diagram of a generalized video encoder 1900 in conjunction with which some described embodiments may be implemented. Figure 20 is a block diagram of a generalized video decoder 2000 in conjunction with which some described embodiments may be implemented. The relationships shown between modules within the encoder 1900 and decoder 2000 indicate general flows of information in the encoder and decoder; other relationships are not shown for the sake of simplicity. In particular, Figures 19 and 20 usually do not show side information indicating the encoder settings, modes, tables, etc. used for a video sequence, picture, macroblock, block, etc. Such side information is sent in the output bitstream, typically after entropy encoding of the side information. The format of the output bitstream can be a Windows Media Video version 9 format or other format. The encoder 1900 and decoder 2000 process video pictures, which may be video frames, video fields, or combinations of frames and fields. The bitstream syntax and semantics at the picture and macroblock levels may depend on whether frames or fields are used. There may be changes to macroblock organization and overall timing as well.
The encoder 1900 and decoder 2000 are block-based and use a 4:2:0 macroblock format for frames, with each macroblock including four 8x8 luminance blocks (at times treated as one 16x16 macroblock) and two 8x8 chrominance blocks. For fields, the same or a different macroblock organization and format may be used. The 8x8 blocks may be further sub-divided at different stages, e.g., at the frequency transform and entropy encoding stages. Example video frame organizations are described in more detail below. Alternatively, the encoder 1900 and decoder 2000 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8x8 blocks and 16x16 macroblocks. Depending on implementation and the type of compression desired, modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.

A. Video Frame Organizations

In some implementations, the encoder 1900 and decoder 2000 process video frames organized as follows. A frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame. A progressive video frame is divided into macroblocks such as the macroblock 2100 shown in Figure 21. The macroblock 2100 includes four 8x8 luminance blocks (Y1 through Y4) and two 8x8 chrominance blocks that are co-located with the four luminance blocks but half resolution horizontally and vertically, following the conventional 4:2:0 macroblock format. The 8x8 blocks may be further sub-divided at different stages, e.g., at the frequency transform (e.g., 8x4, 4x8 or 4x4 DCTs) and entropy encoding stages.
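The 4:2:0 block counts described above can be sketched numerically. The following is an illustrative sketch only; the helper name and its return convention are not from the application but simply restate the stated relationship (16x16 luma area, four 8x8 luma blocks, two half-resolution 8x8 chroma blocks).

```python
# Illustrative sketch (names are not from the application): block counts
# and sizes for one macroblock in the conventional 4:2:0 format above.

def macroblock_420_layout(width=16, height=16):
    """Return (luma_blocks, chroma_plane_size, chroma_blocks) for a
    4:2:0 macroblock.

    Luma: four 8x8 blocks (Y1..Y4) covering the 16x16 area.
    Chroma: two 8x8 blocks (one Cb, one Cr), each at half resolution
    horizontally and vertically relative to luma.
    """
    luma_blocks = (width // 8) * (height // 8)      # 4 blocks of 8x8
    chroma_w, chroma_h = width // 2, height // 2    # 8x8 per chroma plane
    chroma_blocks = 2                               # one Cb + one Cr block
    return luma_blocks, (chroma_w, chroma_h), chroma_blocks

print(macroblock_420_layout())  # (4, (8, 8), 2)
```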
A progressive I-frame is an intra-coded progressive video frame. A progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction. Progressive P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks.

An interlaced video frame consists of two scans of a frame - one comprising the even lines of the frame (the top field) and the other comprising the odd lines of the frame (the bottom field). The two fields may represent two different time periods or they may be from the same time period. Figure 22A shows part of an interlaced video frame 2200, including the alternating lines of the top field and bottom field at the top left part of the interlaced video frame 2200. Figure 22B shows the interlaced video frame 2200 of Figure 22A organized for encoding/decoding as a frame 2230. The interlaced video frame 2200 has been partitioned into macroblocks such as the macroblocks 2231 and 2232, which use a 4:2:0 format as shown in Figure 21. In the luminance plane, each macroblock 2231, 2232 includes 8 lines from the top field alternating with 8 lines from the bottom field for 16 lines total, and each line is 16 pixels long. (The actual organization and placement of luminance blocks and chrominance blocks within the macroblocks 2231, 2232 are not shown, and in fact may vary for different encoding decisions.) Within a given macroblock, the top-field information and bottom-field information may be coded jointly or separately at any of various phases. An interlaced I-frame is two intra-coded fields of an interlaced video frame, where a macroblock includes information for the two fields.
An interlaced P-frame is two fields of an interlaced video frame coded using forward prediction, and an interlaced B-frame is two fields of an interlaced video frame coded using bi-directional prediction, where a macroblock includes information for the two fields. Interlaced P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks. Interlaced BI-frames are a hybrid of interlaced I-frames and interlaced B-frames; they are intra-coded, but are not used as anchors for other frames.

Figure 22C shows the interlaced video frame 2200 of Figure 22A organized for encoding/decoding as fields 2260. Each of the two fields of the interlaced video frame 2200 is partitioned into macroblocks. The top field is partitioned into macroblocks such as the macroblock 2261, and the bottom field is partitioned into macroblocks such as the macroblock 2262. (Again, the macroblocks use a 4:2:0 format as shown in Figure 21, and the organization and placement of luminance blocks and chrominance blocks within the macroblocks are not shown.) In the luminance plane, the macroblock 2261 includes 16 lines from the top field and the macroblock 2262 includes 16 lines from the bottom field, and each line is 16 pixels long.

An interlaced I-field is a single, separately represented field of an interlaced video frame. An interlaced P-field is a single, separately represented field of an interlaced video frame coded using forward prediction, and an interlaced B-field is a single, separately represented field of an interlaced video frame coded using bi-directional prediction. Interlaced P- and B-fields may include intra-coded macroblocks as well as different types of predicted macroblocks. Interlaced BI-fields are a hybrid of interlaced I-fields and interlaced B-fields; they are intra-coded, but are not used as anchors for other fields. Interlaced video frames organized for encoding/decoding as fields can include various combinations of different field types.
For example, such a frame can have the same field type in both the top and bottom fields or different field types in each field. In one implementation, the possible combinations of field types include I/I, I/P, P/I, P/P, B/B, B/BI, BI/B, and BI/BI. The term picture generally refers to source, coded or reconstructed image data. For progressive video, a picture is a progressive video frame. For interlaced video, a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on the context. Alternatively, the encoder 1900 and decoder 2000 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8x8 blocks and 16x16 macroblocks.

B. Video Encoder

Figure 19 is a block diagram of a generalized video encoder system 1900. The encoder system 1900 receives a sequence of video pictures including a current picture 1905 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame), and produces compressed video information 1995 as output. Particular embodiments of video encoders typically use a variation or supplemented version of the generalized encoder 1900. The encoder system 1900 compresses predicted pictures and key pictures. For the sake of presentation, Figure 19 shows a path for key pictures through the encoder system 1900 and a path for predicted pictures. Many of the components of the encoder system 1900 are used for compressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being compressed.
A predicted picture (e.g., progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame) is represented in terms of prediction (or difference) from one or more other pictures (which are typically referred to as reference pictures or anchors). A prediction residual is the difference between what was predicted and the original picture. In contrast, a key picture (e.g., progressive I-frame, interlaced I-field, or interlaced I-frame) is compressed without reference to other pictures.

If the current picture 1905 is a forward-predicted picture, a motion estimator 1910 estimates motion of macroblocks or other sets of pixels of the current picture 1905 with respect to one or more reference pictures, for example, the reconstructed previous picture 1925 buffered in the picture store 1920. If the current picture 1905 is a bi-directionally-predicted picture, a motion estimator 1910 estimates motion in the current picture 1905 with respect to up to four reconstructed reference pictures (for an interlaced B-field, for example). Typically, a motion estimator estimates motion in a B-picture with respect to one or more temporally previous reference pictures and one or more temporally future reference pictures. Accordingly, the encoder system 1900 can use the separate stores 1920 and 1922 for multiple reference pictures. For more information on progressive B-frames and interlaced B-frames and B-fields, see U.S. Patent Application Serial No. 10/622,378, entitled "Advanced Bi-Directional Predictive Coding of Video Frames," filed July 18, 2003, and U.S. Patent Application Serial No. 10/882,135, entitled "Advanced Bi-Directional Predictive Coding of Interlaced Video," filed June 29, 2004. The motion estimator 1910 can estimate motion by pixel, 1/2-pixel, 1/4-pixel, or other increments, and can switch the resolution of the motion estimation on a picture-by-picture basis or other basis.
The motion estimator 1910 (and compensator 1930) also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis. The resolution of the motion estimation can be the same or different horizontally and vertically. The motion estimator 1910 outputs as side information motion information 1915 such as differential motion vector information. The encoder 1900 encodes the motion information 1915 by, for example, computing one or more predictors for motion vectors, computing differentials between the motion vectors and predictors, and entropy coding the differentials. To reconstruct a motion vector, a motion compensator 1930 combines a predictor with differential motion vector information. Various techniques for computing motion vector predictors, computing differential motion vectors, and reconstructing motion vectors are described below. The motion compensator 1930 applies the reconstructed motion vector to the reconstructed picture(s) 1925 to form a motion-compensated prediction.

The pseudo-code 5300 of Figure 53 shows how a motion vector predictor (PMVx, PMVy) is computed for a field motion vector in one implementation. In the pseudo-code 5300, SameFieldMV[] and OppFieldMV[] denote the two sets of candidate motion vectors, and NumSameFieldMV and NumOppFieldMV denote the number of candidate motion vectors that belong to each set. In the example shown in pseudo-code 5300, the number of available candidate motion vectors in these sets determines whether a median operation will be used to calculate a motion vector predictor. Figure 29 shows a technique 2900 for determining whether to perform a median operation when calculating a motion vector predictor for a field motion vector. At 2910, an encoder/decoder determines the number of valid candidate motion vectors for predicting a field motion vector in a current macroblock. At 2920, if there are three valid candidate motion vectors, then at 2930, a median operation can be used during calculation of the predictor.
At 2940, if there are not three valid candidates (i.e., there are two or fewer valid candidates), a motion vector predictor is selected from among the available valid candidates using another method - a median operation is not used.

In the example in the pseudo-code 5300, the encoder/decoder derives the motion vector predictor by performing a median operation (e.g., median3) on the candidate x-components or y-components when all three candidate motion vectors are valid and all three valid candidate motion vectors are of the same polarity. If all three candidate motion vectors are valid, but not all of them are of the same polarity, the encoder/decoder chooses the set that has the most candidates to derive the motion vector predictor. In the case where both sets have the same number of candidates, the encoder/decoder uses the set SameFieldMV[]. For the case where there are fewer than three candidates, the encoder/decoder selects the first candidate from the chosen set in a pre-specified order. In this example, the order of candidate motion vectors in each set starts with candidate A if it exists, followed by candidate B if it exists, and then candidate C if it exists. For example, if the encoder/decoder uses set SameFieldMV[] and the set SameFieldMV[] contains only candidate B and candidate C, then the encoder/decoder uses candidate B as the motion vector predictor. If the number of valid candidates is zero, the encoder/decoder sets the predictor to (0,0).

Alternatively, motion vector predictors can be computed in ways other than those described above. For example, although pseudo-code 5300 includes a bias toward selecting same-field candidates, the computation of predictors can be adjusted to remove the bias or to use an opposite field bias.
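The field motion vector predictor selection described above can be sketched as follows. This is an illustrative reading of the selection rule, not the application's pseudo-code 5300 itself; the function names are assumptions, and the candidate sets are assumed to be given in the A, B, C order, already partitioned by polarity.

```python
# Illustrative sketch of the field MV predictor selection described above
# (not the application's pseudo-code 5300). same_field and opp_field are
# lists of valid (x, y) candidates in A, B, C order, split by polarity.

def median3(a, b, c):
    # Median of three values, as in the median-of-3 calculation of Figure 14.
    return a + b + c - min(a, b, c) - max(a, b, c)

def field_mv_predictor(same_field, opp_field):
    n = len(same_field) + len(opp_field)
    if n == 0:
        return (0, 0)                       # no valid candidates
    if len(same_field) == 3 or len(opp_field) == 3:
        # Three valid candidates, all of the same polarity: use median3
        # component-wise.
        cands = same_field if len(same_field) == 3 else opp_field
        xs, ys = zip(*cands)
        return (median3(*xs), median3(*ys))
    # Otherwise choose the set with more candidates (ties favor the
    # same-field set) and take its first candidate in A, B, C order.
    chosen = same_field if len(same_field) >= len(opp_field) else opp_field
    return chosen[0]
```

For instance, three same-polarity candidates (2, 4), (6, 8), (4, 0) yield the component-wise median (4, 4), while one same-field and two opposite-field candidates yield the first opposite-field candidate.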
As another example, median operations can be used in more situations (e.g., when two valid field motion vector candidates are present), or not at all, or a non-median operation for two valid candidates may be an averaging operation. As yet another example, the ordering of the candidates in the sets can be adjusted, or the candidates can all be stored in one set.

V. Innovations in Macroblock Information Signaling for Interlaced Frame Coded Pictures

Described embodiments include techniques and tools for signaling macroblock information for interlaced frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, etc.). For example, described techniques and tools include techniques and tools for signaling macroblock information for interlaced P-frames, and techniques and tools for using and signaling skipped macroblocks in interlaced P-frames and other interlaced pictures (e.g., interlaced B-frames, interlaced P-fields, interlaced B-fields, etc.). Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:

1. Jointly coding motion compensation type (e.g., 1 Frame MV, 4 Frame MV, 2 Field MV, 4 Field MV, etc.), and potentially other information, with field/frame coding type information (e.g., using the macroblock-level syntax element MBMODE) for interlaced P-frames.

2. Signaling a macroblock skip condition. The signaling can be performed separately from other syntax elements such as MBMODE. The skip condition indicates that the macroblock is a 1MV macroblock, has a zero differential motion vector, and has no coded blocks. The skip information can be coded in a compressed bitplane.

The described techniques and tools can be used in combination with one another or with other techniques and tools, or can be used independently.

A. Skipped Macroblock Signaling

In some implementations, an encoder signals skipped macroblocks.
For example, an encoder signals a skipped macroblock in an interlaced frame when a macroblock is coded with one motion vector, has a zero motion vector differential, and has no coded blocks (i.e., no residuals for any block). The skip information can be coded as a compressed bitplane (e.g., at frame level) or can be signaled on a one-bit-per-macroblock basis (e.g., at macroblock level). The signaling of the skip condition for the macroblock is separate from the signaling of a macroblock mode for the macroblock. A decoder performs corresponding decoding.

This definition of a skipped macroblock takes advantage of the observation that when more than one motion vector is used to encode a macroblock, the macroblock is rarely skipped, because it is unlikely that all of the motion vector differentials will be zero and that all of the blocks will not be coded. Thus, when a macroblock is signaled as being skipped, the macroblock mode (1MV) is implied from the skip condition and need not be sent for the macroblock. In interlaced P-frames, a 1MV macroblock is motion compensated with one frame motion vector.

Figure 30 shows a technique 3000 for determining whether to skip coding of particular macroblocks in an interlaced predicted frame (e.g., an interlaced P-frame, an interlaced B-frame, or a frame comprising interlaced P-fields and/or interlaced B-fields). For a given macroblock, the encoder checks whether the macroblock is a 1MV macroblock at 3010. At 3020, if the macroblock is not a 1MV macroblock, the encoder does not skip the macroblock. Otherwise, at 3030, the encoder checks whether the one motion vector for the macroblock is equal to its causally predicted motion vector (e.g., whether the differential motion vector for the macroblock is equal to zero). At 3040, if the motion for the macroblock does not equal the causally predicted motion, the encoder does not skip the macroblock.
Otherwise, at 3050, the encoder checks whether there is any residual to be encoded for the blocks of the macroblock. At 3060, if there is a residual to be coded, the encoder does not skip the macroblock. At 3070, if there is no residual for the blocks of the macroblock, the encoder skips the macroblock. At 3080, the encoder can continue to encode or skip macroblocks until encoding is done.

In one implementation, the macroblock-level SKIPMBBIT field (which can also be labeled SKIPMB, etc.) indicates the skip condition for a macroblock. If the SKIPMBBIT field is 1, then the current macroblock is skipped and no other information is sent after the SKIPMBBIT field. On the other hand, if the SKIPMBBIT field is not 1, the MBMODE field is decoded. Let MVP denote the signaling of whether a nonzero 1MV differential motion vector is present or absent. Let Field/Frame transform denote the signaling of whether the residual of the macroblock is (1) frame-coded; (2) field-coded; or (3) zero coded blocks (i.e., CBP = 0). MBMODE signals the following information jointly:

MBMODE = { <1MV, MVP, Field/Frame transform>, <2 Field MV, Field/Frame transform>, <4 Frame MV, Field/Frame transform>, <4 Field MV, Field/Frame transform> };

The case <1MV, MVP=0, CBP=0> is not signaled by MBMODE, but is signaled by the skip condition. (Examples of signaling this skip condition are provided above in Section V.A.) In this example, for inter-coded macroblocks, the CBPCY syntax element is not decoded when zero coded blocks are indicated.

VI. Innovations in Chroma Motion Vector Derivation for Interlaced Frame Coded Pictures

In described embodiments, chroma motion vectors are not sent explicitly in the bitstream. Rather, they are derived from the luma motion vectors that are encoded and sent for macroblocks or blocks of a frame. Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:

1. An encoder/decoder obtains a one-to-one correspondence between luma and chroma motion vectors in interlaced frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, etc.) by deriving a chroma motion vector for each luma motion vector in a macroblock.
The chroma motion vectors are then used to motion compensate the respective chroma block or field.

2. An encoder/decoder maintains coherence between the luma and chroma motion vectors when the corresponding macroblock is field coded by adding a variable offset (e.g., using a lookup table) to the chroma motion vector after sub-sampling.

Although described techniques apply to a 4:2:0 macroblock format in interlaced video, described techniques can be applied to other macroblock formats (e.g., 4:2:2, 4:4:4, etc.) and other kinds of video. The described techniques and tools can be used in combination with one another or with other techniques and tools, or can be used independently.

A. One-to-one Chroma Motion Vector Correspondence

In some implementations, an encoder/decoder derives and uses the same number of chroma motion vectors to predict a macroblock as the number of luma motion vectors used to predict the macroblock. For example, when an encoder/decoder uses one, two or four luma field- or frame-type motion vectors for a given macroblock, the encoder/decoder derives one, two or four chroma motion vectors for the given macroblock, respectively. Such a technique differs from previous encoders and decoders (e.g., in a progressive frame or interlaced P-field context), in which the previous encoder or decoder always derives a single chroma motion vector for any number of luma motion vectors (e.g., one or four) in each macroblock.

Figure 32 is a flow chart showing a technique 3200 for deriving a chroma motion vector for each of plural luma motion vectors in a macroblock. At 3210, an encoder/decoder receives plural luma motion vectors for the macroblock. At 3220, the encoder/decoder derives a chroma motion vector for each of the plural luma motion vectors. The number of derived chroma motion vectors varies depending on the number of luma motion vectors used to predict the current macroblock.
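The one-to-one luma-to-chroma correspondence described above can be sketched as follows. This is an illustrative sketch; the mode labels and function names are assumptions for illustration, not the application's syntax elements, and the actual per-vector derivation (sub-sampling and rounding) is abstracted into a caller-supplied function.

```python
# Illustrative sketch (labels are assumptions, not the application's syntax
# elements): one chroma motion vector is derived per luma motion vector,
# so the chroma MV count tracks the macroblock's motion compensation type.

CHROMA_MV_COUNT = {
    "1MV": 1,        # one frame MV      -> one chroma MV
    "2FieldMV": 2,   # top/bottom field MVs -> two field chroma MVs
    "4FrameMV": 4,   # one MV per 8x8 luma block -> four chroma MVs
    "4FieldMV": 4,   # four field MVs    -> four field chroma MVs
}

def derive_chroma_mvs(mb_mode, luma_mvs, derive_one):
    """Derive one chroma MV per luma MV (1:1 correspondence)."""
    assert len(luma_mvs) == CHROMA_MV_COUNT[mb_mode]
    return [derive_one(mv) for mv in luma_mvs]
```

For example, with a placeholder derivation that halves each component for 4:2:0 sub-sampling, a 2 Field MV macroblock with luma MVs (0, 0) and (4, 8) yields the two chroma MVs (0, 0) and (2, 4).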
In some implementations, an encoder/decoder derives one chroma motion vector for a 1MV macroblock, two field chroma motion vectors for a 2 Field MV macroblock, four frame chroma motion vectors for a 4 Frame MV macroblock, and four field chroma motion vectors for a 4 Field MV macroblock. For example, referring again to Figures 23-25, Figure 23 shows corresponding top and bottom field chroma motion vectors derived from luma motion vectors in a 2 Field MV macroblock. The derived top and bottom field chroma motion vectors describe the displacement of the even and odd lines, respectively, of the chroma blocks. Figure 24 shows corresponding frame chroma motion vectors (MV1', MV2', MV3' and MV4') derived from frame luma motion vectors for each of four blocks in a 4 Frame MV macroblock. The four derived chroma motion vectors describe the respective displacements of the four 4x4 chroma sub-blocks. Figure 25 shows corresponding field chroma motion vectors derived from field luma motion vectors in a 4 Field MV macroblock. Two field chroma motion vectors describe the displacement of each field in the chroma blocks. The chroma block is subdivided vertically to form two 4x8 regions, each having a 4x4 region of top field lines interleaved with a 4x4 region of bottom field lines. For the top field lines, the displacement of the left 4x4 region is described by the top left field chroma block motion vector, and the displacement of the right 4x4 region is described by the top right field chroma block motion vector. For the bottom field lines, the displacement of the left 4x4 region is described by the bottom left field chroma block motion vector, and the displacement of the right 4x4 region is described by the bottom right field chroma block motion vector. Each of the chroma block regions can be motion compensated using a derived motion vector.
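The 4 Field MV chroma partitioning described above can be sketched by assigning each sample of an 8x8 chroma block to one of the four 4x4 field regions. This is an illustrative sketch; the region labels are assumptions for illustration, not the application's terms.

```python
# Illustrative sketch: each sample of an 8x8 chroma block falls into one of
# the four 4x4 field regions described above. Even rows belong to the top
# field and odd rows to the bottom field; columns 0-3 form the "left"
# regions and columns 4-7 the "right" regions. Region labels are
# assumptions for illustration only.
from collections import Counter

def chroma_region(row, col):
    field = "top" if row % 2 == 0 else "bottom"
    side = "left" if col < 4 else "right"
    return field + "_" + side

# Each region collects 4 lines of one field x 4 columns = 16 samples,
# i.e., a 4x4 region per derived field chroma motion vector.
counts = Counter(chroma_region(r, c) for r in range(8) for c in range(8))
```

Counting samples this way confirms that each of the four derived field chroma motion vectors governs exactly a 4x4 set of chroma samples.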
Deriving one chroma motion vector per luma motion vector allows greater resolution in chroma motion compensation than in previous encoders and decoders, which derive a single chroma motion vector from any number of luma motion vectors (e.g., where one chroma motion vector is derived from four luma motion vectors in a macroblock), typically by sub-sampling (also known as "down-sampling") and/or averaging. Alternatively, the encoder/decoder can derive chroma motion vectors from different numbers and/or types of luma motion vectors (e.g., two frame chroma motion vectors from a macroblock encoded with two frame luma motion vectors, more than four chroma motion vectors from a macroblock encoded with more than four luma motion vectors, etc.). Or, the chroma motion vectors can be derived in some other way while maintaining a 1:1 correspondence with the number of luma motion vectors for the macroblock.

B. Field-based Rounding in Field-coded Macroblocks

In some implementations, an encoder/decoder uses field-based rounding to maintain coherence between luma motion vectors and chroma motion vectors during chroma motion vector derivation for field-coded macroblocks. Given a luma frame motion vector or field motion vector, an encoder/decoder derives a corresponding chroma frame motion vector or field motion vector to perform motion compensation for a portion (and potentially all) of the chroma (Cb/Cr) block. In some implementations, chroma motion vector derivation for interlaced frame-coded pictures (e.g., interlaced P-frames, interlaced B-frames, etc.) comprises rounding and sub-sampling. For example, when deriving chroma motion vectors from luma field motion vectors, the encoder/decoder adds a variable offset (e.g., using a lookup table) to the chroma motion vector after sub-sampling. Figure 33 is a flow chart showing a technique 3300 for using a field-based rounding lookup table to derive a chroma motion vector in a macroblock.
At 3310, an encoder/decoder sub-samples a luma field motion vector component (e.g., by dividing the y-component value by two in a 4:2:0 macroblock format). At 3320, the encoder/decoder then performs rounding using a field-based rounding lookup table. The pseudocode 3400 in Figure 34 describes how a chroma motion vector component (CMVX, CMVY) is derived from a luma motion vector component (LMVX, LMVY) in a 4:2:0 macroblock in one implementation. As shown in the pseudocode 3400, the encoder/decoder uses a simple rounding strategy (using rounding lookup table s_RndTbl[ ]) to round up the quarter-pel positions of the x-component of the motion vector prior to horizontal sub-sampling. The encoder/decoder uses the same rounding lookup table prior to vertical sub-sampling for the y-component if the macroblock is frame-coded. However, if the macroblock is field-coded, the encoder/decoder treats the y-component of the chroma motion vector differently. In a field-coded macroblock, a chroma motion vector corresponds to the top or bottom field. The top and bottom fields each comprise alternating horizontal lines of the chroma block. Therefore, in this case the encoder/decoder uses the field-based rounding lookup table s_RndTblField[ ] shown in the pseudocode 3400. A field-based rounding lookup table allows the encoder/decoder to maintain correct field offsets while rounding, so that the luma and chroma motion vectors map to consistent field offsets. For example, referring to Figure 35, the values 0, 0, 1, 2 and 2, 2, 3, 8 (top field values 3510 in Figure 35) in s_RndTblField[ ] apply to one field (e.g., the top field), and the values 4, 4, 5, 6 and 6, 6, 7, 12 (bottom field values 3520 in Figure 35) apply to the other field (e.g., the bottom field) in the macroblock. Alternatively, an encoder/decoder can use a different field-based rounding lookup table or perform rounding and/or sub-sampling in some other way.
For example, an encoder/decoder processing macroblocks in different formats could use different sub-sampling factors and/or lookup table values.

VII. Combined Implementations

A detailed combined implementation for a bitstream syntax, semantics, and decoder is now described, in addition to an alternative combined implementation with minor differences from the main combined implementation.

A. Bitstream Syntax

In various combined implementations, data for interlaced pictures is presented in the form of a bitstream having plural layers (e.g., sequence, entry point, frame, field, macroblock, block and/or sub-block layers). In the syntax diagrams, arrow paths show the possible flows of syntax elements. Syntax elements shown with square-edged boundaries indicate fixed-length syntax elements; those with rounded boundaries indicate variable-length syntax elements; and those with a rounded boundary within an outer rounded boundary indicate a syntax element (e.g., a bitplane) made up of simpler syntax elements. A fixed-length syntax element is defined to be a syntax element for which the length of the syntax element is not dependent on data in the syntax element itself; the length of a fixed-length syntax element is either constant or determined by prior data in the syntax flow. A lower layer in a layer diagram (e.g., a macroblock layer in a frame-layer diagram) is indicated by a rectangle within a rectangle.

Entry-point-level bitstream elements are shown in Figure 36. In general, an entry point marks a position in a bitstream (e.g., an I-frame or other key frame) at which a decoder can begin decoding. In other words, no pictures before the entry point in the bitstream are needed to decode pictures after the entry point. An entry point header can be used to signal changes in coding control parameters (e.g., enabling or disabling compression tools (e.g., in-loop deblocking filtering) for frames following an entry point).
For interlaced P-frames and B-frames, frame-level bitstream elements are shown in Figures 37 and 38, respectively. Data for each frame consists of a frame header followed by data for the macroblock layer (whether for intra or various inter type macroblocks). The bitstream elements that make up the macroblock layer for interlaced P-frames (whether for intra or various inter type macroblocks) are shown in Figure 40. Bitstream elements in the macroblock layer for interlaced P-frames may be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, interlaced P-fields, interlaced B-fields, etc.). For interlaced video frames with interlaced P-fields and/or B-fields, frame-level bitstream elements are shown in Figure 39. Data for each frame consists of a frame header followed by data for the field layers (shown as the repeated "FieldPicLayer" element per field) and data for the macroblock layers (whether for intra, 1MV, or 4MV macroblocks). The following sections describe selected bitstream elements in the frame and macroblock layers that are related to macroblock and motion vector signaling. Although the selected bitstream elements are described in the context of a particular layer, some bitstream elements can be used in more than one layer.

1. Selected Entry Point Layer Elements

Loop Filter (LOOPFILTER) (1 bit)

LOOPFILTER is a Boolean flag that indicates whether loop filtering is enabled for the entry point segment. If LOOPFILTER = 0, then loop filtering is not enabled. If LOOPFILTER = 1, then loop filtering is enabled. In an alternative combined implementation, LOOPFILTER is a sequence-level element.

Extended Motion Vectors (EXTENDED_MV) (1 bit)

EXTENDED_MV is a 1-bit syntax element that indicates whether extended motion vectors are turned on (value 1) or off (value 0). EXTENDED_MV indicates the possibility of extended motion vectors (signaled at frame level with the syntax element MVRANGE) in P-frames and B-frames.
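As a toy illustration of reading fixed-length entry-point flags like the two just described, the sketch below uses a hypothetical BitReader helper. Neither the helper nor the back-to-back ordering of the two flags is defined by this specification; other entry-point elements that may appear between them are omitted.

```python
# Hypothetical bit reader; not an API defined by this specification.
class BitReader:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0

    def read_bit(self):
        bit = self.bits[self.pos]
        self.pos += 1
        return bit

def parse_entry_point_flags(reader):
    """Read two of the fixed-length (1-bit) entry-point flags described above.
    Ordering relative to other entry-point elements is assumed/omitted."""
    return {
        "LOOPFILTER": reader.read_bit(),   # 1 = loop filtering enabled
        "EXTENDED_MV": reader.read_bit(),  # 1 = extended MV range possible
    }
```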
Extended Differential Motion Vector Range (EXTENDED_DMV) (1 bit)

EXTENDED_DMV is a 1-bit syntax element that is present if EXTENDED_MV = 1. If EXTENDED_DMV is 1, extended differential motion vector range (DMVRANGE) shall be signaled at frame layer for the P-frames and B-frames within the entry point segment. If EXTENDED_DMV is 0, DMVRANGE shall not be signaled.

Fast UV Motion Comp (FASTUVMC) (1 bit)

FASTUVMC is a Boolean flag that controls the sub-pixel interpolation and rounding of chroma motion vectors. If FASTUVMC = 1, the chroma motion vectors that are at quarter-pel offsets will be rounded to the nearest half- or full-pel positions. If FASTUVMC = 0, no special rounding or filtering is done for chroma. The FASTUVMC syntax element is ignored in interlaced P-frames and interlaced B-frames.

Variable Sized Transform (VSTRANSFORM) (1 bit)

VSTRANSFORM is a Boolean flag that indicates whether variable-sized transform coding is enabled for the sequence. If VSTRANSFORM = 0, then variable-sized transform coding is not enabled. If VSTRANSFORM = 1, then variable-sized transform coding is enabled.

2. Selected Frame Layer Elements

Figures 37 and 38 are diagrams showing frame-level bitstream syntaxes for interlaced P-frames and interlaced B-frames, respectively. Figure 39 is a diagram showing a frame-layer bitstream syntax for frames containing interlaced P-fields and/or B-fields (or potentially other kinds of interlaced fields). Specific bitstream elements are described below.

Frame Coding Mode (FCM) (Variable size)

FCM is a variable length codeword (VLC) used to indicate the picture coding type. FCM takes on values for frame coding modes as shown in Table 9 below:

Table 9: Frame Coding Mode VLC (TABLE REMOVED)

Field Picture Type (FPTYPE) (3 bits)

FPTYPE is a three-bit syntax element present in the frame header for a frame including interlaced P-fields and/or interlaced B-fields, and potentially other kinds of fields.
FPTYPE takes on values for different combinations of field types in the interlaced video frame, according to Table 10 below.

Table 10: Field Picture Type FLC (TABLE REMOVED)

Picture Type (PTYPE) (Variable size)

PTYPE is a variable size syntax element present in the frame header for interlaced P-frames and interlaced B-frames (or other kinds of interlaced frames such as interlaced I-frames). PTYPE takes on values for different frame types according to Table 11 below.

Table 11: Picture Type VLC (TABLE REMOVED)

If PTYPE indicates that the frame is skipped, then the frame is treated as a P-frame which is identical to its reference frame. The reconstruction of the skipped frame is equivalent conceptually to copying the reference frame. A skipped frame means that no further data is transmitted for this frame.

UV Sampling Format (UVSAMP) (1 bit)

UVSAMP is a 1-bit syntax element that is present when the sequence-level field INTERLACE = 1. UVSAMP indicates the type of chroma subsampling used for the current frame. If UVSAMP = 1, then progressive subsampling of the chroma is used; otherwise, interlace subsampling of the chroma is used. This syntax element does not affect decoding of the bitstream.

Extended MV Range (MVRANGE) (Variable size)

MVRANGE is a variable-sized syntax element present when the entry-point-layer EXTENDED_MV bit is set to 1. The MVRANGE VLC represents a motion vector range.

Extended Differential MV Range (DMVRANGE) (Variable size)

DMVRANGE is a variable-sized syntax element present if the entry-point-layer syntax element EXTENDED_DMV = 1. The DMVRANGE VLC represents a motion vector differential range.

4 Motion Vector Switch (4MVSWITCH) (Variable size or 1 bit)

For interlaced P-frames, the 4MVSWITCH syntax element is a 1-bit flag. If 4MVSWITCH is set to zero, the macroblocks in the picture have only one motion vector or two motion vectors, depending on whether the macroblock has been frame-coded or field-coded, respectively.
If 4MVSWITCH is set to 1, there may be one, two or four motion vectors per macroblock.

Skipped Macroblock Decoding (SKIPMB) (Variable size)

For interlaced P-frames, the SKIPMB syntax element is a compressed bitplane containing information that indicates the skipped/not-skipped status of each macroblock in the picture. The decoded bitplane represents the skipped/not-skipped status for each macroblock with 1-bit values. A value of 0 indicates that the macroblock is not skipped. A value of 1 indicates that the macroblock is coded as skipped. A skipped status for a macroblock in interlaced P-frames means that the decoder treats the macroblock as 1MV with a motion vector differential of zero and a coded block pattern of zero. No other information is expected to follow for a skipped macroblock.

Macroblock Mode Table (MBMODETAB) (2 or 3 bits)

The MBMODETAB syntax element is a fixed-length field. For interlaced P-frames, MBMODETAB is a 2-bit value that indicates which one of four code tables is used to decode the macroblock mode syntax element (MBMODE) in the macroblock layer. There are two sets of four code tables, and the set that is being used depends on whether 4MV is used or not, as indicated by the 4MVSWITCH flag.

Motion Vector Table (MVTAB) (2 or 3 bits)

The MVTAB syntax element is a fixed-length field. For interlaced P-frames, MVTAB is a 2-bit syntax element that indicates which of the four progressive (or, one-reference) motion vector code tables are used to code the MVDATA syntax element in the macroblock layer.

2MV Block Pattern Table (2MVBPTAB) (2 bits)

The 2MVBPTAB syntax element is a 2-bit value that signals which of four code tables is used to decode the 2MV block pattern (2MVBP) syntax element in 2 Field MV macroblocks.

4MV Block Pattern Table (4MVBPTAB) (2 bits)

The 4MVBPTAB syntax element is a 2-bit value that signals which of four code tables is used to decode the 4MV block pattern (4MVBP) syntax element in 4MV macroblocks.
For interlaced P-frames, it is present if the 4MVSWITCH syntax element is set to 1.

Macroblock-level Transform Type Flag (TTMBF) (1 bit)

This syntax element is present in P-frames and B-frames if the sequence-level syntax element VSTRANSFORM = 1. TTMBF is a one-bit syntax element that signals whether transform type coding is enabled at the frame or macroblock level. If TTMBF = 1, the same transform type is used for all blocks in the frame. In this case, the transform type is signaled in the Frame-level Transform Type (TTFRM) syntax element that follows. If TTMBF = 0, the transform type may vary throughout the frame and is signaled at the macroblock or block levels.

Frame-level Transform Type (TTFRM) (2 bits)

This syntax element is present in P-frames and B-frames if VSTRANSFORM = 1 and TTMBF = 1. TTFRM signals the transform type used to transform the 8x8 pixel error signal in predicted blocks. The 8x8 error blocks may be transformed using an 8x8 transform, two 8x4 transforms, two 4x8 transforms or four 4x4 transforms.

3. Selected Macroblock Layer Elements

Figure 40 is a diagram showing a macroblock-level bitstream syntax for macroblocks of interlaced P-frames in the combined implementation. Specific bitstream elements are described below. Data for a macroblock consists of a macroblock header followed by block layer data. Bitstream elements in the macroblock layer for interlaced P-frames (e.g., SKIPMBBIT) may potentially be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, etc.).

Skip MB Bit (SKIPMBBIT) (1 bit)

SKIPMBBIT is a 1-bit syntax element present in interlaced P-frame macroblocks and interlaced B-frame macroblocks if the frame-level syntax element SKIPMB indicates that raw mode is used. If SKIPMBBIT = 1, the macroblock is skipped. SKIPMBBIT also may be labeled as SKIPMB at the macroblock level.
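The skip semantics above can be made concrete with a short sketch: a skipped macroblock is treated as 1MV with a zero motion vector differential and a coded block pattern of zero, so its motion vector is simply its predictor. The function names are illustrative, and the predictor computation itself is out of scope here.

```python
def decode_p_mb(skipped, mv_predictor, decode_nonskipped=None):
    """Sketch of skipped-macroblock handling in an interlaced P-frame."""
    if skipped:
        mv = mv_predictor  # zero differential: the MV equals its 1MV predictor
        cbp = 0            # no coded blocks; nothing else follows in the stream
        return {"type": "1MV", "mv": mv, "cbp": cbp}
    # Otherwise MBMODE and the remaining macroblock elements are decoded.
    return decode_nonskipped()
```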
Macroblock Mode (MBMODE) (Variable size)

MBMODE is a variable-size syntax element that jointly specifies macroblock type (e.g., 1MV, 2 Field MV, 4 Field MV, 4 Frame MV or Intra), field/frame coding type (e.g., field, frame, or no coded blocks), and the presence of differential motion vector data for 1MV macroblocks. MBMODE is explained in detail below and in Section V above.

2MV Block Pattern (2MVBP) (Variable size)

2MVBP is a variable-sized syntax element present in interlaced P-frame and interlaced B-frame macroblocks. In interlaced P-frame macroblocks, 2MVBP is present if MBMODE indicates that the macroblock has two field motion vectors. In this case, 2MVBP indicates which of the two luma blocks contain non-zero motion vector differentials.

4MV Block Pattern (4MVBP) (Variable size)

4MVBP is a variable-sized syntax element present in interlaced P-field, interlaced B-field, interlaced P-frame and interlaced B-frame macroblocks. In interlaced P-frame macroblocks, 4MVBP is present if MBMODE indicates that the macroblock has four motion vectors. In this case, 4MVBP indicates which of the four luma blocks contain non-zero motion vector differentials.

Field Transform Flag (FIELDTX) (1 bit)

FIELDTX is a 1-bit syntax element present in interlaced B-frame intra-coded macroblocks. This syntax element indicates whether a macroblock is frame or field coded (basically, the internal organization of the macroblock). FIELDTX = 1 indicates that the macroblock is field-coded. Otherwise, the macroblock is frame-coded. In inter-coded macroblocks, this syntax element can be inferred from MBMODE, as explained in detail below and in Section V above.

CBP Present Flag (CBPPRESENT) (1 bit)

CBPPRESENT is a 1-bit syntax element present in intra-coded macroblocks in interlaced P-frames and interlaced B-frames. If CBPPRESENT is 1, the CBPCY syntax element is present for that macroblock and is decoded. If CBPPRESENT is 0, the CBPCY syntax element is not present and shall be set to zero.
Coded Block Pattern (CBPCY) (Variable size)

CBPCY is a variable-length syntax element that indicates the transform coefficient status for each block in the macroblock. CBPCY decodes to a 6-bit field which indicates whether coefficients are present for the corresponding block. For intra-coded macroblocks, a value of 0 in a particular bit position indicates that the corresponding block does not contain any non-zero AC coefficients. A value of 1 indicates that at least one non-zero AC coefficient is present. The DC coefficient is still present for each block in all cases. For inter-coded macroblocks, a value of 0 in a particular bit position indicates that the corresponding block does not contain any non-zero coefficients. A value of 1 indicates that at least one non-zero coefficient is present. For cases where the bit is 0, no data is encoded for that block.

Motion Vector Data (MVDATA) (Variable size)

MVDATA is a variable-sized syntax element that encodes differentials for the motion vector(s) for the macroblock, the decoding of which is described in detail below.

MB-level Transform Type (TTMB) (Variable size)

TTMB is a variable-size syntax element in P-picture and B-picture macroblocks when the picture-layer syntax element TTMBF = 0. TTMB specifies a transform type, transform type signal level, and subblock pattern.

B. Decoding Interlaced P-frames

A process for decoding interlaced P-frames in a combined implementation is described below.

1. Macroblock Layer Decoding of Interlaced P-frames

In an interlaced P-frame, each macroblock may be motion compensated in frame mode using one or four motion vectors or in field mode using two or four motion vectors. A macroblock that is inter-coded does not contain any intra blocks. In addition, the residual after motion compensation may be coded in frame transform mode or field transform mode.
More specifically, the luma component of the residual is re-arranged according to fields if it is coded in field transform mode but remains unchanged in frame transform mode, while the chroma component remains the same. A macroblock may also be coded as intra. Motion compensation may be restricted to not include four (both field/frame) motion vectors, and this is signaled through 4MVSWITCH. The type of motion compensation and residual coding is jointly indicated for each macroblock through MBMODE and SKIPMB. MBMODE employs a different set of tables according to 4MVSWITCH.

Macroblocks in interlaced P-frames are classified into five types: 1MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra. These five types are described in further detail above in Section III. The first four types of macroblock are inter-coded, while the last type indicates that the macroblock is intra-coded. The macroblock type is signaled by the MBMODE syntax element in the macroblock layer along with the skip bit. (A skip condition for the macroblock also can be signaled at frame level in a compressed bit plane.) MBMODE jointly encodes the macroblock type along with various other pieces of information for the macroblock.

Skipped Macroblock Signaling

The macroblock-level SKIPMBBIT field indicates the skip condition for a macroblock. (Additional detail regarding skip conditions and corresponding signaling is provided in Section V, above.) If the SKIPMBBIT field is 1, then the current macroblock is said to be skipped and there is no other information sent after the SKIPMBBIT field. (At frame level, the SKIPMB field indicates the presence of SKIPMBBIT at macroblock level (in raw mode) or stores skip information in a compressed bit plane. The decoded bitplane contains one bit per macroblock and indicates the skip condition for each respective macroblock.) The skip condition implies that the current macroblock is 1MV with a zero differential motion vector (i.e.
the macroblock is motion compensated using its 1MV motion predictor) and there are no coded blocks (CBP = 0). In an alternative combined implementation, the residual is assumed to be frame-coded for loop filtering purposes. On the other hand, if the SKIPMB field is not 1, the MBMODE field is decoded to indicate the type of macroblock and other information regarding the current macroblock, such as information described in the following section.

Macroblock Mode Signaling

MBMODE jointly specifies the type of macroblock (1MV, 4 Frame MV, 2 Field MV, 4 Field MV, or Intra), the type of transform for an inter-coded macroblock (i.e., field or frame or no coded blocks), and whether there is a differential motion vector for a 1MV macroblock. (Additional detail regarding signaling of macroblock information is provided in Section V, above.) MBMODE can take one of 15 possible values. Let <MVP> denote the signaling of whether a nonzero 1MV differential motion vector is present or absent. Let <Field/Frame transform> denote the signaling of whether the residual of the macroblock is (1) frame transform coded; (2) field transform coded; or (3) zero coded blocks (i.e., CBP = 0). MBMODE signals the following information jointly:

MBMODE = { <1MV, MVP, Field/Frame transform>, <2 Field MV, Field/Frame transform>, <4 Frame MV, Field/Frame transform>, <4 Field MV, Field/Frame transform>, <Intra> };

The case <1MV, MVP=0, CBP=0> is not signaled by MBMODE, but is signaled by the skip condition. For inter-coded macroblocks, the CBPCY syntax element is not decoded when MBMODE indicates that there are no coded blocks. In bitplane coding, when a tile cannot be coded directly, a fixed-length escape is followed by the code of the complement of the tile. The rectangular tile contains 6 bits of information. Let k be the code associated with the tile, where k = Σ b_i·2^i and b_i is the binary value of the i-th bit in natural scan order within the tile. Hence 0 ≤ k < 2^6. (The Diff-6 bitplane coding mode is like Norm-6, followed by differential XOR with its predictor.)
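The 15-value joint MBMODE code space described above can be enumerated as a sketch. Only the combinations are shown: the ordering is arbitrary, and the actual VLC-to-entry mapping (which depends on MBMODETAB and 4MVSWITCH) is not reproduced here.

```python
# Enumerate the 15 joint MBMODE combinations (illustrative ordering).
TRANSFORMS = ("frame", "field", "none")  # "none" = zero coded blocks (CBP = 0)

MBMODE_ENTRIES = (
    # 1MV entries: MVP presence x transform type, minus <1MV, MVP=0, CBP=0>,
    # which is conveyed by the skip condition rather than by MBMODE.
    [("1MV", mvp, tt)
     for mvp in (1, 0) for tt in TRANSFORMS
     if not (mvp == 0 and tt == "none")]
    # Other inter types carry only the field/frame transform type.
    + [(mb_type, None, tt)
       for mb_type in ("2 Field MV", "4 Frame MV", "4 Field MV")
       for tt in TRANSFORMS]
    # Intra carries neither MVP nor a field/frame transform choice here.
    + [("Intra", None, None)]
)
```

Counting the entries confirms the 5 + 9 + 1 = 15 values stated in the text.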
Having described and illustrated the principles of our invention with reference to various embodiments, it will be recognized that the various embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of embodiments shown in software may be implemented in hardware and vice versa. In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.

I/We Claim

1. A motion vector prediction method comprising: determining one or more candidate motion vectors for predicting a field motion vector for a current field of a current macroblock in an interlaced frame coded picture, wherein the current field has a polarity, and wherein the field motion vector and the one or more candidate motion vectors refer to a reference frame; determining a field polarity for each of the one or more candidate motion vectors, including, for each of the one or more candidate motion vectors, determining whether the candidate motion vector, relative to the polarity of the current field, refers to a same polarity field or opposite polarity field in the reference frame; and calculating a motion vector predictor for the field motion vector based at least in part on the one or more field polarities of the one or more candidate motion vectors. 2.
The method of claim 1 wherein the calculating the motion vector predictor biases selection of same field candidate motion vectors over opposite field candidate motion vectors, and wherein the calculating the motion vector predictor biases selection of the one or more valid candidate motion vectors in the order: left candidate, top candidate, other candidate. 3. The method of claim 1 further comprising: determining a number of candidate motion vectors; determining a number of same field candidate motion vectors; determining a number of opposite field candidate motion vectors; and as part of the calculating the motion vector predictor, determining the motion vector predictor depending at least in part on the number of candidate motion vectors, the number of same field candidate motion vectors and the number of opposite field candidate motion vectors. 4. The method of claim 1 wherein at least one of the one or more candidate motion vectors is a frame motion vector from a neighboring macroblock. 5. The method of claim 1 wherein at least one of the one or more candidate motion vectors is a field motion vector from a neighboring macroblock. 6. The method of claim 1 wherein, for each of the one or more candidate motion vectors, the determining the field polarity for the candidate motion vector comprises determining whether the candidate motion vector indicates a displacement within the reference frame at the same polarity field or the opposite polarity field. 7. 
A motion vector prediction method comprising: determining one or more valid candidate motion vectors for predicting a field motion vector for a current field of a current macroblock in an interlaced frame coded picture, wherein the current field has a polarity, and wherein the field motion vector and the one or more valid candidate motion vectors refer to a reference frame; determining a field polarity for each individual valid candidate motion vector of the one or more valid candidate motion vectors, including determining whether, relative to the polarity of the current field, the individual valid candidate motion vector refers to a same polarity field in the reference frame or opposite polarity field in the reference frame; allocating each individual valid candidate motion vector to one of two sets depending on its field polarity; and calculating a motion vector predictor for the field motion vector based at least in part on one or more of the two sets. 8. The method of claim 7 wherein the calculating a motion vector predictor comprises selecting a dominant polarity valid candidate motion vector. 9. The method of claim 7 wherein the two sets consist of an opposite polarity set and a same polarity set. 10. The method of any one of claims 1 to 9 performed during video encoding. 11. The method of any one of claims 1 to 9 performed during video decoding. 12. 
A method of processing macroblock information, the method comprising: receiving macroblock information for a macroblock in an interlaced P-frame, the macroblock information including a joint code representing motion compensation type and field/frame coding type for the macroblock; and decoding the joint code to obtain both the motion compensation type and the field/frame coding type for the macroblock, wherein the joint code is a variable length code from a variable length code table in which plural different variable length codes represent plural different combinations of values for the motion compensation type and the field/frame coding type, wherein the plural different variable length codes vary in size in relation to respective likelihoods of the plural different combinations of values, and wherein the joint code is one of the plural different variable length codes in the variable length code table. 13. The method of claim 12 wherein the joint code further represents an indicator of presence of a differential motion vector for the macroblock, wherein the macroblock is a one-motion-vector macroblock. 14. The method of claim 12 wherein the motion compensation type is selected from a group comprising: 1MV, 4 Frame MV, 2 Field MV, and 4 Field MV, and wherein the field/frame coding type is selected from a group comprising: field transform, frame transform, and no coded blocks. 15. 
The method of claim 12 wherein: the receiving the macroblock information is performed during receiving encoded information for the interlaced P-frame, wherein the receiving encoded information further comprises receiving a code table selection syntax element from the bit stream; and the decoding the joint code is performed during decoding the interlaced P-frame, wherein the decoding the interlaced P-frame further comprises selecting the variable length code table from among multiple available variable length code tables based at least in part upon the code table selection syntax element, wherein the multiple available variable length code tables include a first set of tables used when four-motion-vector coding is enabled for the interlaced P-frame and a second set of tables used when four-motion-vector coding is not enabled for the interlaced P-frame, and wherein the code-table selection syntax element indicates a selection among the first set of tables or indicates a selection among the second set of tables.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 7191-DELNP-2009-Form-18-(10-03-2010).pdf 2010-03-10
2 7191-DELNP-2009-Correspondence-Others-(10-03-2010).pdf 2010-03-10
3 7191-DELNP-2009-Form-3-(05-05-2010).pdf 2010-05-05
4 7191-DELNP-2009-Correspondence-Others-(05-05-2010).pdf 2010-05-05
5 7191-delnp-2009-abstract.pdf 2011-08-21
6 7191-delnp-2009-claims.pdf 2011-08-21
7 7191-delnp-2009-correspondence-others.pdf 2011-08-21
8 7191-delnp-2009-description (complete).pdf 2011-08-21
9 7191-delnp-2009-drawings.pdf 2011-08-21
10 7191-delnp-2009-form-1.pdf 2011-08-21
11 7191-delnp-2009-form-2.pdf 2011-08-21
12 7191-delnp-2009-form-3.pdf 2011-08-21
13 7191-delnp-2009-form-5.pdf 2011-08-21
14 7191-delnp-2009-gpa.pdf 2011-08-21
15 abstract.jpg 2011-08-21
16 FORM-6(PRS)-301-400.69.pdf ONLINE 2015-03-05
17 MTL-GPOA - PRS.pdf ONLINE 2015-03-05
18 FORM-6(PRS)-301-400.69.pdf 2015-03-13
19 MS to MTL Assignment.pdf 2015-03-13
20 MTL-GPOA - PRS.pdf 2015-03-13
21 7191-DELNP-2009-FER.pdf 2017-05-23
22 Form 3 [29-05-2017(online)].pdf 2017-05-29
23 Information under section 8(2) [29-05-2017(online)].pdf 2017-05-29
24 Form 3 [01-06-2017(online)].pdf 2017-06-01
25 Information under section 8(2) [01-06-2017(online)].pdf 2017-06-01
26 7191-DELNP-2009-CLAIMS [30-10-2017(online)].pdf 2017-10-30
27 7191-DELNP-2009-COMPLETE SPECIFICATION [30-10-2017(online)].pdf 2017-10-30
28 7191-DELNP-2009-CORRESPONDENCE [30-10-2017(online)].pdf 2017-10-30
29 7191-DELNP-2009-FER_SER_REPLY [30-10-2017(online)].pdf 2017-10-30
30 7191-DELNP-2009-OTHERS [30-10-2017(online)].pdf 2017-10-30
31 7191-DELNP-2009-Correspondence to notify the Controller [25-09-2020(online)].pdf 2020-09-25
32 7191-DELNP-2009-FORM 3 [06-11-2020(online)].pdf 2020-11-06
33 7191-DELNP-2009-Written submissions and relevant documents [06-11-2020(online)].pdf 2020-11-06
34 7191-DELNP-2009-IntimationOfGrant29-01-2021.pdf 2021-01-29
35 7191-DELNP-2009-PatentCertificate29-01-2021.pdf 2021-01-29
36 7191-DELNP-2009-US(14)-HearingNotice-(HearingDate-22-10-2020).pdf 2021-10-03
37 7191-DELNP-2009-RELEVANT DOCUMENTS [26-09-2022(online)].pdf 2022-09-26
38 7191-DELNP-2009-RELEVANT DOCUMENTS [15-09-2023(online)].pdf 2023-09-15
39 7191-DELNP-2009-FORM-27 [10-09-2025(online)].pdf 2025-09-10

Search Strategy

1 Searchstrategies_23-05-2017.pdf

ERegister / Renewals

3rd: 31 Mar 2021 (03/09/2006 to 03/09/2007)
4th: 31 Mar 2021 (03/09/2007 to 03/09/2008)
5th: 31 Mar 2021 (03/09/2008 to 03/09/2009)
6th: 31 Mar 2021 (03/09/2009 to 03/09/2010)
7th: 31 Mar 2021 (03/09/2010 to 03/09/2011)
8th: 31 Mar 2021 (03/09/2011 to 03/09/2012)
9th: 31 Mar 2021 (03/09/2012 to 03/09/2013)
10th: 31 Mar 2021 (03/09/2013 to 03/09/2014)
11th: 31 Mar 2021 (03/09/2014 to 03/09/2015)
12th: 31 Mar 2021 (03/09/2015 to 03/09/2016)
13th: 31 Mar 2021 (03/09/2016 to 03/09/2017)
14th: 31 Mar 2021 (03/09/2017 to 03/09/2018)
15th: 31 Mar 2021 (03/09/2018 to 03/09/2019)
16th: 31 Mar 2021 (03/09/2019 to 03/09/2020)
17th: 31 Mar 2021 (03/09/2020 to 03/09/2021)
18th: 31 Mar 2021 (03/09/2021 to 03/09/2022)
19th: 03 Aug 2022 (03/09/2022 to 03/09/2023)
20th: 29 Aug 2023 (03/09/2023 to 03/09/2024)