Specification
SHARED MEMORY MULTI VIDEO CHANNEL DISPLAY APPARATUS AND
METHODS
Cross-Reference to Related Applications
[0001] This application claims the benefit of U.S. Provisional Applications Nos. 60/793,288, filed April 18, 2006, 60/793,276, filed April 18, 2006, 60/793,277, filed April 18, 2006, and 60/793,275, filed April 18, 2006, each disclosure of which is hereby incorporated by reference herein in its entirety.
Background of the Invention
[0002] Traditionally, multi video channel television display screens are equipped with dual channel video processing chips which enable a user to view one or more channels simultaneously on various portions of the display screen. This form of displaying a picture within a picture is commonly referred to as picture-in-picture or PIP. FIG. 1A is an example of displaying two channels on various portions of the display screen having an aspect ratio of 4:3. A screen 100A displays a first channel 112 on the majority portion of the screen
simultaneously with a second channel 122 that is displayed on a substantially smaller portion of the screen.
[0003] A typical television system for generating PIP display 100A is shown in FIG. 2. Television display system 200 includes television broadcast signals 202, a hybrid TV tuner 210, baseband inputs 280, a demodulator 220, an MPEG Codec 230, off-chip storage 240, off-chip memory 300, a video processor 250, and an external component 270 (e.g., a display). Hybrid TV tuner 210 can tune to one or more television channels provided by television broadcast signals 202. Hybrid TV tuner 210 may provide digital television signals to demodulator 220 and analog video signal components (e.g., Composite Video Baseband Signals (CVBS)) to video processor 250.
Additionally, baseband inputs 280 may receive various television signals (e.g., CVBS, S-Video, Component, etc.) and provide them to video processor 250. Other external digital or analog signals (e.g., DVI or High Definition (HD)) may also be provided to video processor 250.
[0004] The video is demodulated by demodulator 220 and is then decompressed by MPEG Codec 230. Some operations required by MPEG Codec 230 may use off-chip storage 240 to store data. The digital signal(s) are then processed by video processor 250, which can be a dual channel processing chip, in order to generate the proper signals 260 for display on external component 270. Video processor 250 may use off-chip memory 300 to perform memory intensive video processing operations such as noise reduction and de-interlacing, 3-D YC separation, and frame rate conversion (FRC).
[0005] In these PIP applications, it is generally perceived that first channel 112 is more important than second channel 122. Typical dual channel processing chips that are used to generate PIP place more quality emphasis on the first channel video pipe, which generates the large display of first channel 112. The second channel video pipe, which generates the smaller display of second channel 122, is of lesser quality in order to reduce costs. For example, 3-D video processing operations, such as de-interlacing, noise reduction, and video decoding, may be implemented on the first channel video pipe while implementing only 2-D video processing operations on the second channel video pipe. 3-D video processing operations refer to operations that process video in the spatial and temporal domains, often buffering one or more frames of video used in the processing operations. In contrast, 2-D video processing operations only process video in the spatial domain, operating only on the current frame of video.
[0006] With the advent of wide display screens having an aspect ratio of 16:9, displaying two channels having the same size or an aspect ratio of 4:3 on the same screen has become increasingly in demand. This form of application is commonly referred to as picture-and-picture (PAP). In FIG. 1B, screen 100B displays a first channel 110 on one portion of the screen while a second channel 120 having substantially the same aspect ratio is displayed on a second portion of the screen. In these applications, the first channel should be generated with quality similar to that of the second channel.
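The contrast between 2-D and 3-D processing can be made concrete with a small sketch (Python is used here purely for illustration; the simple averaging filters below are hypothetical stand-ins, not the operations of any actual video processing chip). The point is that a 2-D operation needs only the current frame, while a 3-D operation must buffer one or more prior frames, which is the source of the additional memory traffic discussed below.

    from collections import deque

    def spatial_blur(frame):
        # 2-D operation: uses only pixels of the current frame (spatial domain).
        return [(a + b) // 2 for a, b in zip(frame, frame[1:] + frame[-1:])]

    class TemporalFilter:
        # 3-D operation: uses the current frame plus buffered prior frames
        # (spatial and temporal domains), so one or more frames must be stored.
        def __init__(self, depth=2):
            self.buffer = deque(maxlen=depth)

        def process(self, frame):
            frames = list(self.buffer) + [frame]
            averaged = [sum(pixels) // len(frames) for pixels in zip(*frames)]
            self.buffer.append(frame)
            return averaged

    temporal = TemporalFilter()
    for frame in ([10, 12, 14], [11, 12, 13], [10, 13, 15]):
        print(spatial_blur(frame), temporal.process(frame))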
[0007] An implementation of 3-D video processing on both the first and second video channel pipes is therefore needed to produce two high-quality video images. Performing 3-D video processing to produce the desired display generally requires memory intensive operations that have to be performed within a time frame suitable to display the images without loss in quality or integrity. The memory operations increase proportionally with the number of channels that require 3-D video processing. Typical dual video processing chips lack the ability to process two video signals with high quality and are therefore becoming obsolete as the demand to display two channels having high video quality increases.
[0008] One reason that typical dual video processing chips lack the ability to process multiple high-quality video signals is the large amount of data bandwidth required between the video processor and the off-chip memory. Traditionally, a portion of the video processing chip pipeline includes a noise reducer and a de-interlacer, each requiring high data bandwidth with the off-chip memory.
[0009] In particular, the noise reducer works primarily by comparing one field (i.e., a set of alternating lines of video generated by an interlaced scan) to the next field and removing portions of the field that are not the same in each field. For this reason, the noise reducer requires storage of at least two fields for comparison with a current field. The de-interlacer reads
the two fields that were stored and combines them, thereby reversing the operations of the interlacer.
[0010] FIG. 3 illustrates the off-chip memory access operations of the noise reducer and de-interlacer of a typical video processor. A portion of the video processing pipeline includes a noise reducer 330, a de-interlacer 340, and off-chip memory 300, which contains at least four field buffer sections 310, 311, 312, and 313.
[0011] During a first clock cycle, noise reducer 330 reads field buffer section 310, compares it to a video signal 320, produces a new field with reduced noise, and writes this field output 322 to two field buffer sections 311 and 312. The contents that were previously stored in field buffer sections 311 and 312 are copied over to field buffer sections 310 and 313, respectively. Thus, at the end of the clock cycle, field output 322 of noise reducer 330 is stored in field buffer sections 311 and 312 and the fields previously stored in field buffer sections 311 and 312 are now in field buffer sections 310 and 313, respectively.
[0012] During the following clock cycle, de-interlacer 340 reads field buffer section 312, which contains the field output by noise reducer 330 during the previous clock cycle, and field buffer section 313, which contains the field output by noise reducer 330 two clock cycles earlier (the field that had previously been stored in field buffer section 312). Field output 322 of noise reducer 330 for the current clock cycle is also read by de-interlacer 340. De-interlacer 340 processes these field segments and combines them to provide a de-interlaced output 342 to the next module in the video pipeline.
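The buffer traffic described for FIG. 3 can be summarized with a short sketch. This is a simplified model based only on the behavior stated above (one noise-reducer output mirrored to two buffer sections, with the prior contents shifted to two others); the section numbers are taken from the figure and everything else is illustrative.

    # Simplified model of the per-clock-cycle buffer traffic of FIG. 3.
    buffers = {310: None, 311: None, 312: None, 313: None}

    def clock_cycle(video_field, buffers):
        # Noise reducer 330: reads section 310 and compares it with the incoming
        # field to produce a noise-reduced output field (output 322).
        nr_output = ("nr", video_field, buffers[310])

        # De-interlacer 340: reads section 312 (the previous cycle's output),
        # section 313 (the output from two cycles back), and live output 322.
        deinterlaced = ("deint", buffers[312], buffers[313], nr_output)

        # End of cycle: prior contents of 311 and 312 move to 310 and 313,
        # and the new output is written to both 311 and 312.
        buffers[310], buffers[313] = buffers[311], buffers[312]
        buffers[311] = buffers[312] = nr_output
        return deinterlaced

    for field in ("field0", "field1", "field2"):
        output = clock_cycle(field, buffers)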
[0013] The aforementioned exemplary video pipeline portions perform these operations for a single channel, and the operations would be multiplied for each additional channel. Therefore, since memory access bandwidth increases proportionally with the amount of data that has to be written/read in the same interval, performing noise reduction and de-interlacing on multiple channels would increase the data bandwidth in the same manner. The substantial bandwidth demand of the above video processing operations limits the ability to perform these operations simultaneously.
[0014] Therefore, it would be desirable to have systems and methods for reducing memory access bandwidth in various sections of one or more video pipeline stages of one or more channels in order to produce a display having multiple high-quality video channel streams.
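As a rough, hypothetical illustration of how the bandwidth described in the preceding paragraphs scales with channel count, the per-channel traffic implied by FIG. 3 can be tallied and multiplied. The field size, field rate, and access counts below are assumptions chosen only to make the arithmetic concrete; they are not figures from this specification.

    # Hypothetical numbers purely to illustrate proportional scaling.
    FIELD_BYTES = 720 * 240 * 2      # one SD field at 16 bits per pixel (assumed)
    FIELDS_PER_SEC = 60              # assumed interlaced field rate

    # Per FIG. 3: the noise reducer reads one field and writes two, and the
    # de-interlacer reads two stored fields (buffer-to-buffer copies ignored).
    ACCESSES_PER_FIELD = 1 + 2 + 2

    def bandwidth_bytes_per_sec(channels):
        return channels * ACCESSES_PER_FIELD * FIELD_BYTES * FIELDS_PER_SEC

    for n in (1, 2):
        print(n, "channel(s):", bandwidth_bytes_per_sec(n) / 1e6, "MB/s")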
Summary of the Invention
[0015] In accordance with the principles of the present invention, methods and apparatus are provided for reducing memory access bandwidth in various sections of one or more video pipeline stages of one or more channels in order to produce a display having multiple high-quality video channel streams. A dual video processor may receive one or more analog or digital signals which may be in different formats. A dual video decoder (e.g., NTSC/PAL/SECAM video decoder) capable of decoding two simultaneous video signals in one or more video modes may
be provided. In one of the video modes, the dual video decoder may perform time multiplexing to share at least one component, such as an analog to digital converter, used in decoding the video signals.
[0016] The outputs of the video decoder, or another set of video signals provided by another component in the system, may be provided to signal processing circuitry (e.g., a noise reducer and/or a de-interlacer). The signal processing circuitry may access a memory device to store various field lines. Some of the stored field lines that may be needed by the signal processing circuitry may be shared. The sharing of some stored field lines reduces overall memory bandwidth and capacity requirements. The signal processing circuitry may be capable of performing multiple field line processing. A set of field line buffers may be provided to store field lines for multiple field segments and may provide the data to the corresponding inputs of the signal processing circuitry. To further reduce storage, some of the field line buffers may also be shared among the signal processing circuitry.
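One way to picture the sharing of stored field lines is sketched below. The buffer pool and its interface are hypothetical; the point is only that when two processing blocks need the same field line, a single stored copy can be handed to both, so the total number of line buffers, and the memory bandwidth needed to fill them, is smaller than the sum of each block's individual requirements.

    class SharedLineBuffers:
        """Hypothetical pool of field line buffers shared between clients."""
        def __init__(self):
            self.lines = {}              # (field_id, line_no) -> stored line data

        def fetch(self, field_id, line_no, load_from_memory):
            key = (field_id, line_no)
            if key not in self.lines:                    # only the first client
                self.lines[key] = load_from_memory(key)  # pays the memory read
            return self.lines[key]

    def load_from_memory(key):
        print("memory read for", key)
        return [0] * 720                                 # placeholder line data

    pool = SharedLineBuffers()
    noise_reducer_line = pool.fetch("prev_field", 10, load_from_memory)
    deinterlacer_line = pool.fetch("prev_field", 10, load_from_memory)  # reuses stored copy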
[0017] The outputs of the video decoder, or another set of video signals provided by another component in the system, may be provided to one or more scalers for producing differently scaled video signals. The scaler may be configured to be placed in various slots: before the memory, after the memory, or, if no memory access is desired, directly in the signal path (i.e., bypassing the memory). If a video signal is to be up-scaled, the scaler may be placed after the memory in order to reduce the amount of data that is stored to the memory. If a video signal is to be downscaled, the scaler may be placed before the memory in order to reduce the amount of data that is stored to the memory. Alternatively, one scaler may be configured to be placed before the memory while another scaler may be configured to be placed after the memory, thereby providing two video signals that are scaled differently (i.e., one may be up-scaled while the other may be downscaled) while reducing the amount of memory storage and bandwidth.
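The slot rule in the preceding paragraph amounts to placing the scaler wherever the signal is smallest on its way to or from the memory. A minimal sketch of that decision follows; the function name and the ratio convention are assumptions made for illustration.

    def choose_scaler_slot(scale_ratio):
        # scale_ratio > 1.0 means up-scaling, < 1.0 means down-scaling (assumed
        # convention); the goal is to store the smaller version of the signal.
        if scale_ratio < 1.0:
            return "before_memory"   # shrink first, then store the smaller signal
        if scale_ratio > 1.0:
            return "after_memory"    # store the smaller signal, enlarge after reading
        return "either"              # no scaling or no memory access: slot does not matter

    # Two differently scaled outputs from one source, as described above:
    print(choose_scaler_slot(0.5))   # downscaled output -> scaler before the memory
    print(choose_scaler_slot(2.0))   # upscaled output   -> scaler after the memory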
[0018] The outputs of the video decoder, or another set of video signals provided by another component in the system, may be provided to one or more frame rate conversion units. A blank time optimizer (BTO) may receive data pertaining to a field line of a frame of a video signal at a first clock rate. The BTO may determine the maximum amount of time available before the next field line of the frame is received. Based on this determination, the BTO may send the field line of the frame to, or receive it from, memory at a second clock rate. The second clock rate used for the memory access may be substantially slower than the first, thereby reducing memory bandwidth and enabling another video signal that may have a shorter amount of available time between field lines to access memory faster. In turn, the BTO essentially distributes memory access from several memory clients (i.e., units requiring memory access) in a way that promotes efficient use of the memory bandwidth.
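A minimal sketch of the scheduling idea behind the BTO follows. The formula and names are assumptions used only to illustrate the principle of spreading a fixed amount of field line data over the blank time available before the next line arrives.

    def slowed_access_rate(bytes_to_transfer, time_until_next_line_s):
        # Hypothetical BTO calculation: spread one field line's memory traffic
        # over the time available before the next field line is received.
        return bytes_to_transfer / time_until_next_line_s   # bytes per second

    # A client with ample blank time can use a much slower memory access rate,
    # leaving bandwidth headroom for a client whose next line arrives sooner.
    relaxed = slowed_access_rate(1440, 200e-6)   # 1440 bytes, 200 us available
    urgent = slowed_access_rate(1440, 20e-6)     # same data, only 20 us available
    print(relaxed / 1e6, "MB/s vs", urgent / 1e6, "MB/s")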
[0019] The video signal outputs of the BTO, or another set of video signals provided by another component in the system, may be provided to an overlay engine for further processing. In the overlay engine, two or more video signals may be overlaid and provided to a color management unit (CMU). The CMU may receive the overlaid video signal and may process the overlaid video signal in portions. Upon receiving an indication that a portion of the overlaid video signal corresponds to a first video signal, the CMU may process the video signal portion using parameters that correspond to the first video signal portion and provide an output. Alternatively, upon receiving an indication that a portion of the overlaid video signal corresponds to a second video signal, the CMU may process the video signal portion using parameters that correspond to the second video signal portion and provide an output. A multi-plane (M-plane) overlay circuit in the overlay engine may receive two or more video signals, where one of these signals may be provided by the CMU, and provide an overlaid signal. The video signals may include a priority designator, and the overlay circuitry may then overlay the signals based on the priority designator.
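A sketch of the two behaviors described above follows: the CMU switching parameter sets according to which source a given portion of the overlaid signal came from, and an M-plane overlay ordering planes by their priority designators. All names, the parameter contents, and the convention that a higher priority is drawn on top are assumptions made for illustration.

    CMU_PARAMS = {                       # hypothetical per-source parameter sets
        "video1": {"brightness": +8, "hue": 0},
        "video2": {"brightness": -4, "hue": 5},
    }

    def cmu_process(portion, source_tag):
        # Apply the parameter set corresponding to the indicated source signal.
        params = CMU_PARAMS[source_tag]
        return {"data": portion, "applied": params}

    def m_plane_overlay(planes):
        # Planes with a higher priority designator are drawn on top (assumed).
        ordered = sorted(planes, key=lambda plane: plane["priority"])
        composite = {}
        for plane in ordered:
            composite.update(plane["pixels"])   # later (higher-priority) planes win
        return composite

    overlaid = m_plane_overlay([
        {"priority": 1, "pixels": {(0, 0): "video1", (0, 1): "video1"}},
        {"priority": 2, "pixels": {(0, 1): "video2"}},
    ])
    print(overlaid, cmu_process("portion-A", "video1"))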
[0020] The output of the overlay engine or another set of video signals provided by another component in the system which may be progressive, may be provided to a primary and/or auxiliary output stage. Alternatively, video signals may bypass the overlay engine and be provided to a primary and/or auxiliary output stage. In the primary and/or auxiliary output stages the video signals may undergo format conversion or processing to meet the requirements of a primary and/or auxiliary
device such as, for example, a display device and a recording device.
Brief Description of the Drawings
[0021] The above and other objects and advantages of the invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
[0022] FIGS. 1A and 1B are exemplary illustrations of two channels being displayed on various portions of the same screen;
[0023] FIG. 2 is an illustration of a system for generating a PIP display;
[0024] FIG. 3 is an illustration of off-chip memory access operations of a noise reducer and a de-interlacer in a typical video processor;
[0025] FIG. 4 is an illustration of a television display system in accordance with principles of the present invention; [0026] FIG. 5 is a detailed illustration of the functions of an onboard video processing section of a dual video processor in accordance with principles of the present invention;
[0027] FIG. 6 is an illustration of a clock generation system in accordance with principles of the present invention;
[0028] FIGS. 7-9 are illustrations of three modes of generating video signals in accordance with principles of the present invention;
[0029] FIG. 10 is an illustration of an exemplary implementation of using two decoders to generate three video signals in accordance with principles of the present invention; [0030] FIG. 11 is an exemplary timing diagram for time division multiplexing two portions of two video signals in accordance with principles of the present invention; [0031] FIG. 12 is a detailed illustration of the functions of the front end video pipeline of the dual video processor in accordance with principles of the present invention;
[0032] FIG. 13 is an illustration of off-chip memory access operations of a noise reducer and a de-interlacer in accordance with principles of the present invention; [0033] FIG. 14 is an exemplary illustrative timing diagram of the off-chip memory access operations of a noise reducer and a de-interlacer in accordance with principles of the present invention; [0034] FIG. 15 is an illustration of multiple field line processing in accordance with principles of the present invention;
[0035] FIG. 16 is a detailed illustration of performing frame rate conversion and scaling in accordance with principles of the present invention; [0036] FIG. 17 is an illustration of a scaler positioning module in accordance with principles of the present invention;
[0037] FIG. 18 is an illustrative example of the operation of a BTO multiplexor in accordance with principles of the present invention;
[0038] FIG. 19 is a detailed illustration of the color processing and channel blending (CPCB) video pipeline of the dual video processor in accordance with principles of the present invention; [0039] FIG. 20 is a detailed illustration of the overlay engine in accordance with principles of the present invention;
[0040] FIG. 21 is a detailed illustration of the color management unit in accordance with principles of the present invention; and
[0041] FIG. 22 is a detailed illustration of the back end video pipeline of the dual video processor in accordance with principles of the present invention.
Detailed Description of the Invention
[0042] The invention relates to methods and apparatus for reducing memory access bandwidth and sharing memory and other processing resources in various sections of multiple video pipeline stages of one or more channels in order to produce one or more high-quality output signals.
[0043] FIG. 4 illustrates a television display system in accordance with the principles of the present invention. The television display system depicted in FIG. 4 may include television broadcast signals 202, a dual tuner 410, MPEG Codec 230, off-chip storage 240, off-chip memory 300, a dual video processor 400, a memory interface 530, and at least one external component 270. Dual tuner 410 may receive television broadcast signals 202 and produce a first video signal 412 and a second video signal 414. Video signals 412 and 414 may then be provided to a dual decoder 420. Dual decoder 420
is shown to be internal to dual video processor 400, but may alternatively be external to video processor 400. Dual decoder 420 may perform similar functions as decoder 220 (FIG. 2) on first and second video signals 412 and 414. Dual decoder 420 may include at least a multiplexor 424 and two decoders 422. In alternative arrangements, multiplexor 424 and one or two of decoders 422 may be external to dual decoder 420. Decoders 422 provide decoded video signal outputs 426 and 428. It should be understood that decoders 422 may be any NTSC/PAL/SECAM decoders different from MPEG decoders. The inputs to decoders 422 may be digital CVBS, S-Video or Component video signals and the output of decoders 422 may be digital standard definition such as Y-Cb-Cr data signals. A more detailed discussion of the operation of dual decoder 420 is provided in connection with FIGS. 7, 8, 9, and 10.
[0044] Multiplexor 424 may be used to select at least one of two video signals 412 and 414 or any number of input video signals. The at least one selected video signal 425 is then provided to decoder 422. The at least one selected video signal 425 appears in the figure as a single video signal to avoid overcrowding the drawing; however, it should be understood that video signal 425 may represent any number of video signals that may be provided to the inputs of any number of decoders 422. For example, multiplexor 424 may receive five input video signals and may provide two of the five input video signals to two different decoders 422.
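For example, the routing just described (two of five inputs passed on to two decoders) can be pictured with a small, hypothetical selection function:

    def route_inputs(input_signals, selected_indices):
        # Hypothetical multiplexor: pass only the selected inputs to the decoders.
        return [input_signals[i] for i in selected_indices]

    inputs = ["cvbs0", "cvbs1", "svideo0", "component0", "cvbs2"]  # five assumed inputs
    to_decoders = route_inputs(inputs, [0, 3])   # two selected signals, one per decoder
    print(to_decoders)                           # ['cvbs0', 'component0']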
[0045] The particular video signal processing arrangement shown in FIG. 4 may enable the internal dual decoder 420 on dual video processor 400 to be used, thereby reducing the cost of using an external decoder, which may be required in time-shifting applications. For example, one of the outputs 426 and 428 of dual decoder 420 may be provided to a 656 encoder 440 to properly encode the video signal to a standard format prior to interlacing the video signals. 656 encoder 440 may be used to reduce the data size for processing at a faster clock frequency. For example, in some embodiments, 656 encoder 440 may reduce 16 bits of data, h-sync, and v-sync signals to 8 bits for processing at double the frequency. This may be the standard for interfacing between SD video and any NTSC/PAL/SECAM decoders and MPEG encoders. The encoded video signal 413 may then be provided to an external MPEG Codec 230, for example, via a port on the video processor, to generate a time shifted video signal. Another port, flexiport 450, on dual video processor 400 may be used to receive the time shifted video signal from MPEG Codec 230. This may be desirable to reduce the complexity of the video processor by processing portions of digital video signals outside of the video processor. Moreover, time-shifting performed by MPEG Codec 230 may require operations that include compression, decompression, and interfacing with non-volatile mass storage devices, all of which may be beyond the scope of the video processor.
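The 16-bit-to-8-bit reduction mentioned above can be illustrated with a simplified sketch. It assumes a BT.656-like interleaving of the two data bytes onto a single 8-bit bus clocked at twice the rate; the exact encoding performed by 656 encoder 440 is not detailed here, so this is only an illustration of why the bus narrows while the clock frequency doubles (sync codes are omitted).

    def narrow_bus(samples_16bit):
        # Split each 16-bit sample into two bytes sent on an 8-bit bus. The
        # output is twice as long, so it must be clocked at double the rate
        # to carry the same video data.
        stream_8bit = []
        for word in samples_16bit:
            stream_8bit.append((word >> 8) & 0xFF)   # e.g., chroma byte (assumed layout)
            stream_8bit.append(word & 0xFF)          # e.g., luma byte (assumed layout)
        return stream_8bit

    wide = [0x80FF, 0x7F10]          # two 16-bit samples (arbitrary example values)
    narrow = narrow_bus(wide)        # four 8-bit values -> double the clock rate
    print([hex(b) for b in narrow])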
[0046] Other video signals, such as a cursor, an on-screen display, or various other forms of displays other than broadcast video signals 202, that may be used in at least one external component 270 or otherwise provided to an external component, may also be generated using dual video processor 400. For example, dual video processor 400 may include a graphics port 460 or pattern generator 470 for this purpose.
[0047] The decoded video signals, as well as various other video signals from graphics port 460 or pattern generator 470, may be provided to selector 480. Selector 480 selects at least one of these video signals and provides the selected signal to onboard video processing section 490. Video signals 482 and 484 are two illustrative signals that may be provided by selector 480 to onboard video processing section 490.
[0048] Onboard video processing section 490 may perform any suitable video processing functions, such as de-interlacing, scaling, frame rate conversion, and channel blending and color management. Any processing resource in dual video processor 400 may send data to and receive data from off-chip memory 300 (which may be
SDRAM, RAMBUS, or any other type of volatile storage) via memory interface 530. Each of these functions will be described in more detail in connection with the description of FIG. 5.
[0049] Finally, dual video processor 400 outputs one or more video output signals 492. Video output signals 492 may be provided to one or more external components 270 for display, storage, further processing, or any other suitable use. For example, one video output signal 492 may be a primary output signal that supports
high-definition TV (HDTV) resolutions, while a second video output signal 492 may be an auxiliary output that supports standard definition TV (SDTV) resolutions.