Abstract: Techniques related to processing a mixed content video stream to generate progressive video for encoding and/or display are discussed. Such techniques may include determining conversion techniques for various portions of the mixed content video stream and converting the portions based on the determined techniques. The conversion of true interlaced video may include content adaptive interlace reversal, and the conversion of pseudo-interlaced telecine converted video may include adaptive telecine pattern reversal.
[Description (Complete) not reproduced in this extract.]
I/WE CLAIM:
1. A computer-implemented method for processing video for encoding and/or
display comprising:
determining a frame format for a frame of a mixed content video stream comprising one or more video formats;
determining a frame group format for a frame group of the mixed content video stream, wherein the frame group comprises the frame;
determining a conversion technique for the frame group based at least in part on the frame group format; and
converting the frame group to a final progressive format based on the determined conversion technique.
2. The method of claim 1, wherein the video formats of the mixed content video stream comprise at least one of a 60 frames per second progressive format, a 30 frames per second progressive format, a 30 frames per second true interlaced format, or a 30 frames per second pseudo-interlaced telecine converted format.
3. The method of claim 1, wherein determining the frame format comprises content analysis of the frame, and wherein the frame format comprises at least one of progressive or interlaced.
4. The method of claim 1, wherein determining the frame format comprises:
determining a plurality of descriptors associated with content of the frame;
evaluating a plurality of comparison tests based on the plurality of
descriptors; and
determining the frame format based on the comparison tests, wherein the frame format comprises at least one of progressive or interlaced.
5. The method of claim 1, wherein determining the frame format comprises:
determining a plurality of descriptors associated with content of the frame;
evaluating, at a first stage, a plurality of comparison tests based on the plurality of descriptors;
determining, at the first stage, whether the frame format is progressive, interlaced, or uncertain based on the comparison tests;
evaluating, when the frame format is uncertain at the first stage, a second stage comprising machine learning based comparison tests; and
determining, at the second stage, whether the frame format is progressive or interlaced based on the machine learning based comparison tests.
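Claim 5's two-stage decision can be pictured as a cascade: inexpensive threshold tests decide the clear cases, and a learned classifier is reserved for ambiguous frames. The following is an illustrative sketch only; the descriptor names, thresholds, and the toy linear score standing in for the machine-learning stage are hypothetical and not taken from the specification.

```python
# Illustrative two-stage frame-format classifier in the spirit of claim 5.
# Descriptors, thresholds, and weights are hypothetical placeholders.

def classify_stage1(d):
    """Stage 1: hard comparison tests on content descriptors.

    d holds hypothetical descriptors, e.g.
      'field_sad' - SAD between the two fields of the frame
      'vert_tex'  - a measure of vertical texture
    Returns 'progressive', 'interlaced', or 'uncertain'.
    """
    if d['field_sad'] < 5.0 and d['vert_tex'] < 10.0:
        return 'progressive'
    if d['field_sad'] > 50.0 and d['vert_tex'] > 40.0:
        return 'interlaced'
    return 'uncertain'

def classify_stage2(d):
    """Stage 2: stand-in for the machine-learning based tests; a toy
    linear score with made-up weights decides the ambiguous cases."""
    score = 0.8 * d['field_sad'] + 0.6 * d['vert_tex'] - 30.0
    return 'interlaced' if score > 0.0 else 'progressive'

def classify_frame(d):
    """Run stage 1; fall through to stage 2 only when uncertain."""
    verdict = classify_stage1(d)
    return classify_stage2(d) if verdict == 'uncertain' else verdict
```

The point of the cascade is that most frames never reach the (more expensive) second stage.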
6. The method of claim 1, wherein determining the frame group format comprises content analysis of frames of the frame group, and wherein the frame group format comprises at least one of all progressive, all interlaced, or telecined.
7. The method of claim 1, wherein determining the frame group format comprises at least one of determining all frames of the frame group are progressive, determining all frames of the frame group are interlaced, determining a telecine pattern of the frame group, or determining an undetected pattern of the frame group.
8. The method of claim 1, wherein determining the frame group format comprises at least one of determining all frames of the frame group are progressive, determining all frames of the frame group are interlaced, determining a telecine pattern of the frame group, or determining an undetected pattern of the frame group and wherein determining the conversion technique comprises comparing the frame group format to a prior frame group format of a frame group prior to the frame group and a look-ahead frame group format for a frame group subsequent to the frame group.
9. The method of claim 1, wherein the frame group format comprises all progressive frames, wherein determining the conversion technique comprises determining a prior frame group format and a look-ahead frame group format comprise all progressive frames, and wherein the conversion technique comprises no conversion to generate final frames having the final progressive format.
10. The method of claim 1, wherein the frame group format comprises all interlaced frames, wherein determining the conversion technique comprises determining a prior frame group format and a look-ahead frame group format comprise all interlaced frames, and wherein the conversion technique comprises an adaptive deinterlacing technique.
11. The method of claim 10, wherein the adaptive deinterlacing technique comprises generating missing pixels based on at least one of adaptive line blocks interpolation, vertical non-linear filtering, spatial edge detection based interpolation, or a weighted averaging of interpolated and filtered line blocks and spatial edge detection based interpolation weighted based on motion detection.
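Claim 11 lists several interpolators whose outputs are blended under motion detection. As a minimal sketch of that idea, the toy deinterlacer below blends one spatial candidate (vertical averaging) with a temporal "weave" from the previous opposite-parity field, switching per line on a crude motion estimate. The function name, data layout, and threshold are hypothetical, and real implementations would also use edge-directed and non-linear vertical filters as the claim recites.

```python
# Illustrative motion-adaptive deinterlacing sketch (not the patented
# method): weave temporally where motion is low, interpolate spatially
# where motion is high.

def deinterlace_field(field, prev_opposite, motion_thresh=8):
    """field: list of rows (lists of ints) holding lines 0, 2, 4, ...
    prev_opposite: rows for the missing lines 1, 3, 5, ... taken from
    the previous frame's opposite-parity field.
    Returns a full progressive frame as a list of rows."""
    out = []
    for i, row in enumerate(field):
        out.append(list(row))  # keep the existing line as-is
        below = field[i + 1] if i + 1 < len(field) else row
        interp = [(a + b) // 2 for a, b in zip(row, below)]  # spatial guess
        weave = prev_opposite[i]                             # temporal guess
        # per-line motion: disagreement between the two candidates
        motion = sum(abs(a - b) for a, b in zip(interp, weave)) / len(row)
        out.append(weave if motion < motion_thresh else interp)
    return out
```

On static content the temporal weave preserves full vertical resolution; on moving content the spatial interpolation avoids combing artifacts.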
12. The method of claim 1, wherein the frame group format comprises an individual telecine pattern, wherein determining the conversion technique comprises determining a prior frame group format and a look-ahead frame group format comprise the individual telecine pattern, wherein the conversion technique comprises an adaptive telecine pattern reversal technique, and wherein the individual telecine pattern comprises at least one of a 3:2 pull down pattern, a 2:3:3:2 pull down pattern, a 4:1 pull down pattern, or a blended pattern.
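A 3:2 pull down, the most common pattern recited in claim 12, repeats fields so that 4 film frames fill 5 video frames (10 fields); reversal collapses the repeats back to 24 frames per second. The sketch below models fields as plain labels, ignoring top/bottom parity and the "dirty" mixed-field frames of real telecine, so it illustrates only the cadence, not the patented reverser.

```python
# Illustrative 3:2 pull down and its reversal, with fields modeled as
# labels. Real telecine interleaves top/bottom fields; this is a
# cadence-only simplification.
from itertools import cycle

PULLDOWN_32 = (2, 3, 2, 3)  # fields emitted per film frame

def telecine_32(film_frames):
    """Expand film frames into a field stream using the 3:2 cadence:
    4 film frames -> 10 fields (5 video frames)."""
    fields = []
    for frame, count in zip(film_frames, cycle(PULLDOWN_32)):
        fields.extend([frame] * count)
    return fields

def reverse_telecine_32(fields):
    """Collapse runs of repeated fields back into unique film frames."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames
```

Reversal recovers the original 24 frames per second sequence, which is why the claims speak of final frames at 24 frames per second after telecine pattern reversal.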
13. The method of claim 1, wherein the frame group format comprises an undetected pattern, and wherein the conversion technique comprises an adaptive telecine pattern reversal technique comprising generating missing pixels based on at least one of adaptive line blocks interpolation, vertical non-linear filtering, spatial edge detection based interpolation, or a weighted averaging of interpolated and filtered line blocks and spatial edge detection based interpolation weighted based on motion detection.
14. The method of claim 1, wherein the frame group comprises an interlaced
frame and a telecined frame, and wherein determining the frame group format
comprises:
evaluating a plurality of comparison tests, wherein at least a first comparison test is based on comparing measures of local texture of the frames of the frame group and at least a second comparison test is based on comparing a sum of absolute differences between fields of an individual frame of the frame group to a sum of absolute differences between a field of the individual frame and another frame of the frame group.
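The second comparison test of claim 14 exploits a telecine signature: a field repeated from a neighbouring frame matches that frame better than it matches the other field of its own pseudo-interlaced frame. A minimal illustration, with a made-up flat-raster data layout:

```python
# Illustrative field-SAD comparison test in the spirit of claim 14.
# Fields are flat lists of pixel values; the layout is hypothetical.

def sad(a, b):
    """Sum of absolute differences between two equal-length rasters."""
    return sum(abs(x - y) for x, y in zip(a, b))

def looks_telecined(top, bottom, other_top):
    """True when the bottom field matches the corresponding field of
    another frame better than it matches its own top field - the
    repeated-field signature left behind by telecine."""
    return sad(top, bottom) > sad(other_top, bottom)
```

A truly interlaced frame, whose fields come from two nearby time instants of the same scene, would typically fail this test.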
15. The method of claim 1, wherein the frame group comprises an interlaced
frame and a telecined frame, and wherein determining the frame group format
comprises:
evaluating at least one comparison test, wherein the comparison test is based on comparing a sum of static thresholded differences between blocks of an individual frame of the frame group to a sum of static thresholded differences between blocks of another frame of the frame group.
16. The method of claim 1, wherein determining the frame format and the
frame group format comprises:
decoding headers associated with the mixed content video stream, wherein the mixed content video stream comprises an aggregated transport stream, and wherein decoding the headers comprises demultiplexing individual channels of the aggregated transport stream and decoding a first individual channel to generate the mixed content video stream and the headers, the headers comprising the frame format and the frame group format.
17. A computer-implemented method for processing video for encoding and/or
display comprising:
receiving a mixed content video stream comprising a plurality of video formats comprising at least a true interlaced format and a pseudo-interlaced telecine converted format;
determining a first conversion technique for a first segment of the mixed content video stream having the true interlaced format and a second conversion technique for a second segment of the mixed content video stream having the telecined format, wherein the first and second conversion techniques are different; and
converting the mixed content video stream to a progressive video stream based at least in part on the first conversion technique and the second conversion technique.
18. The method of claim 17, wherein the first conversion technique comprises a content adaptive deinterlacer and the second conversion technique comprises an adaptive telecine pattern reverser technique.
19. The method of claim 17, wherein determining the first conversion technique comprises determining a first frame format of a first frame of the first segment and a first frame group format of the first segment.
20. The method of claim 19, wherein determining the first frame format comprises:
determining a plurality of descriptors associated with content of the first frame;
evaluating a plurality of comparison tests based on the plurality of descriptors; and
determining the first frame format based on the comparison tests, wherein the first frame format comprises an interlaced frame.
21. The method of claim 19, wherein determining the first frame format
comprises:
determining a plurality of descriptors associated with content of the first frame;
evaluating, at a first stage, a plurality of comparison tests based on the plurality of descriptors;
determining, at the first stage, whether the first frame format is progressive, interlaced, or uncertain based on the comparison tests;
evaluating, when the first frame format is uncertain at the first stage, a second stage comprising machine learning based comparison tests; and
determining, at the second stage, the first frame is interlaced based on the machine learning based comparison tests.
22. The method of claim 17, wherein determining the second conversion technique comprises determining an individual telecine pattern of the second segment, wherein converting the mixed content video stream comprises reversing the individual telecine pattern, and wherein the individual telecine pattern comprises at least one of a 3:2 pull down pattern, a 2:3:3:2 pull down pattern, a 4:1 pull down pattern or a blended pattern.
23. The method of claim 17, wherein determining the second conversion technique comprises determining the second segment has an undetected pattern and wherein converting the mixed content video stream comprises applying an adaptive interlace reverser to the second segment.
24. The method of claim 17, further comprising:
determining a third segment of the mixed content video stream comprises a progressive format; and
providing the progressive format video segment as output with no conversion.
25. A system for processing video for encoding and/or display comprising:
a memory buffer configured to store video frames; and
a central processing unit coupled to the memory buffer, wherein the central processing unit comprises:
a frame classifier module configured to determine a frame format for a frame of a mixed content video stream comprising one or more video formats;
a frame group classifier module configured to determine a frame group format for a frame group of the mixed content video stream, wherein the frame group comprises the frame;
an adaptive interlace reverser module configured to, when the frame group format comprises an interlaced format, convert the frame group to a first final progressive format; and
an adaptive telecine reverser module configured to, when the frame group comprises a pseudo-interlaced telecine converted format, convert the frame group to a second final progressive format.
26. The system of claim 25, wherein the frame classifier module and the frame group classifier module are configured to determine the frame format prior to the frame group format, and wherein the memory buffer is configured to buffer the frame group, a frame group prior to the frame group, and a frame group subsequent to the frame group, the frame group, the frame group prior to the frame group, and the frame group subsequent to the frame group each comprising five frames.
27. The system of claim 25, wherein the frame classifier module is configured to determine the frame format by the frame classifier module being configured to:
determine a plurality of descriptors associated with content of the frame;
evaluate a plurality of comparison tests based on the plurality of descriptors; and
determine the frame format based on the comparison tests, wherein the frame format comprises at least one of progressive or interlaced.
28. The system of claim 27, wherein the descriptors comprise at least one of a sum of absolute differences between pairs of fields of the frame, a sum of variable threshold differences between blocks of the frame and blocks of a top field of the frame, a thresholded count of blocks of the frame having a high texture value, a measure of a vertical texture of the frame, a measure of a difference between a dynamic texture and a static texture of the frame, a measure of a difference between a texture level of the frame and an average of texture levels of fields of the frame, or a sum of static thresholded differences between blocks of the frame and blocks of the top field of the frame.
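Two of the claim 28 descriptors lend themselves to short illustrations: the sum of absolute differences between the frame's field pair, and a measure of vertical texture (which spikes on combed, truly interlaced content). These are common textbook approximations, not the patent's exact descriptor definitions.

```python
# Illustrative descriptor computations on a frame stored as a list of
# rows of pixel values. Approximations only; not the patented formulas.

def split_fields(frame):
    """Top field = even rows, bottom field = odd rows."""
    return frame[0::2], frame[1::2]

def field_pair_sad(frame):
    """Sum of absolute differences between the frame's two fields."""
    top, bottom = split_fields(frame)
    return sum(abs(a - b)
               for tr, br in zip(top, bottom)
               for a, b in zip(tr, br))

def vertical_texture(frame):
    """Mean absolute difference between vertically adjacent rows; high
    on combed (interlaced) content, low on smooth progressive content."""
    diffs = [abs(a - b)
             for r1, r2 in zip(frame, frame[1:])
             for a, b in zip(r1, r2)]
    return sum(diffs) / len(diffs)
```

Such descriptors feed the comparison tests of the first classification stage.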
29. The system of claim 25, wherein the frame classifier module is configured to determine the frame format by the frame classifier module being configured to:
determine a plurality of descriptors associated with content of the frame;
evaluate, at a first stage, a plurality of comparison tests based on the plurality of descriptors;
determine, at the first stage, whether the frame format is progressive, interlaced, or uncertain based on the comparison tests;
evaluate, when the frame format is uncertain at the first stage, a second stage comprising machine learning based comparison tests; and
determine, at the second stage, whether the frame format is progressive or interlaced based on the machine learning based comparison tests.
30. The system of claim 25, wherein the frame group format comprises all progressive frames, wherein the frame group classifier module is configured to determine a prior frame group format and a look-ahead frame group format both comprising all progressive frames, and wherein the system is configured to output the frame group with no conversion to generate final frames.
31. The system of claim 25, wherein the adaptive interlace reverser module is configured to generate missing pixels based on at least one of adaptive line blocks interpolation, vertical non-linear filtering, spatial edge detection based
interpolation, or a weighted averaging of interpolated and filtered line blocks and spatial edge detection based interpolation weighted based on motion detection.
32. The system of claim 25, wherein the frame group format comprises an individual telecine pattern, wherein the frame group classifier module is configured to determine a prior frame group format and a look-ahead frame group format both comprising the individual telecine pattern, wherein the frame group format comprises the pseudo-interlaced telecine converted format, wherein the individual telecine pattern comprises at least one of a 3:2 pull down pattern, a 2:3:3:2 pull down pattern, a 4:1 pull down pattern, or a blended pattern, and wherein the adaptive telecine reverser module is configured to verify the individual telecine pattern, check pattern parity, and reverse the individual telecine pattern to generate final frames having the final progressive format at 24 frames per second.
33. The system of claim 25, wherein the frame group format comprises an undetected pattern, and wherein the adaptive interlace reverser module is configured to convert the frame group to a third final progressive format by being configured to generate missing pixels based on at least one of adaptive line blocks interpolation, vertical non-linear filtering, spatial edge detection based interpolation, or a weighted averaging of interpolated and filtered line blocks and spatial edge detection based interpolation weighted based on motion detection.
34. At least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to process video for encoding and/or display by:
determining a frame format for a frame of a mixed content video stream comprising one or more video formats;
determining a frame group format for a frame group of the mixed content video stream, wherein the frame group comprises the frame;
determining a conversion technique for the frame group based at least in part on the frame group format; and
converting the frame group to a final progressive format based on the determined conversion technique.
35. The machine readable medium of claim 34, wherein determining the frame format comprises:
determining a plurality of descriptors associated with content of the frame;
evaluating a plurality of comparison tests based on the plurality of descriptors; and
determining the frame format based on the comparison tests, wherein the frame format comprises at least one of progressive or interlaced.
36. The machine readable medium of claim 34, wherein determining the frame
format comprises:
determining a plurality of descriptors associated with content of the frame;
evaluating, at a first stage, a plurality of comparison tests based on the plurality of descriptors;
determining, at the first stage, whether the frame format is progressive, interlaced, or uncertain based on the comparison tests;
evaluating, when the frame format is uncertain at the first stage, a second stage comprising machine learning based comparison tests; and
determining, at the second stage, whether the frame format is progressive or interlaced based on the machine learning based comparison tests.
37. The machine readable medium of claim 34, wherein the frame group
format comprises all progressive frames, wherein determining the conversion
technique comprises determining a prior frame group format and a look-ahead
frame group format comprise all progressive frames, and wherein the conversion
technique comprises no conversion to generate final frames having the final
progressive format.
38. The machine readable medium of claim 34, wherein the frame group format comprises all interlaced frames, wherein determining the conversion technique comprises determining a prior frame group format and a look-ahead frame group format comprise all interlaced frames, and wherein the conversion technique comprises an adaptive deinterlacing technique.
39. The machine readable medium of claim 38, wherein the adaptive deinterlacing technique comprises generating missing pixels based on at least one of adaptive line blocks interpolation, vertical non-linear filtering, spatial edge detection based interpolation, or a weighted averaging of interpolated and filtered line blocks and spatial edge detection based interpolation weighted based on motion detection.
40. The machine readable medium of claim 34, wherein the frame group format comprises an individual telecine pattern, wherein determining the conversion technique comprises determining a prior frame group format and a look-ahead frame group format comprise the individual telecine pattern, and wherein the conversion technique comprises an adaptive telecine pattern reversal technique.
41. The machine readable medium of claim 34, wherein the frame group format comprises an individual telecine pattern, wherein determining the conversion technique comprises determining a prior frame group format and a look-ahead frame group format comprise the individual telecine pattern, wherein the conversion technique comprises an adaptive telecine pattern reversal technique, and wherein the individual telecine pattern comprises at least one of a 3:2 pull down pattern, a 2:3:3:2 pull down pattern, a 4:1 pull down pattern, or a blended pattern.
42. The machine readable medium of claim 34, wherein the frame group format comprises an undetected pattern, and wherein the conversion technique comprises an adaptive telecine pattern reversal technique comprising generating missing pixels based on at least one of adaptive line blocks interpolation, vertical non-linear filtering, spatial edge detection based interpolation, or a weighted averaging of interpolated and filtered line blocks and spatial edge detection based interpolation weighted based on motion detection.
43. The machine readable medium of claim 34, wherein the frame group comprises an interlaced frame and a telecined frame, and wherein determining the frame group format comprises:
evaluating a plurality of comparison tests, wherein at least a first comparison test is based on comparing measures of local texture of the frames of the frame groups and at least a second comparison test is based on comparing a sum of absolute differences between fields of an individual frame of the frame group to a sum of absolute differences between a field of the individual frame and another frame of the frame group.
44. The machine readable medium of claim 34, wherein the frame group
comprises an interlaced frame and a telecined frame, and wherein determining the
frame group format comprises:
evaluating at least one comparison test, wherein the comparison test is based on comparing a sum of static thresholded differences between blocks of an individual frame of the frame group to sum of static thresholded differences between blocks of another frame of the frame group.
45. At least one machine readable medium comprising a plurality of
instructions that, in response to being executed on a computing device, cause the
computing device to process video for encoding and/or display by:
receiving a mixed content video stream comprising a plurality of video formats comprising at least a true interlaced format and a pseudo-interlaced telecine converted format;
determining a first conversion technique for a first segment of the mixed content video stream having the true interlaced format and a second conversion technique for a second segment of the mixed content video stream having the telecined format, wherein the first and second conversion techniques are different; and
converting the mixed content video stream to a progressive video stream based at least in part on the first conversion technique and the second conversion technique.
46. The machine readable medium of claim 45, wherein determining the first conversion technique comprises determining a first frame format of a first frame of the first segment and a first frame group format of the first segment.
47. The machine readable medium of claim 46, wherein determining the first frame format comprises:
determining a plurality of descriptors associated with content of the first frame;
evaluating a plurality of comparison tests based on the plurality of descriptors; and
determining the first frame format based on the comparison tests, wherein the first frame format comprises an interlaced frame.
48. The machine readable medium of claim 46, wherein determining the first
frame format comprises:
determining a plurality of descriptors associated with content of the first frame;
evaluating, at a first stage, a plurality of comparison tests based on the plurality of descriptors;
determining, at the first stage, whether the first frame format is progressive, interlaced, or uncertain based on the comparison tests;
evaluating, when the first frame format is uncertain at the first stage, a second stage comprising machine learning based comparison tests; and
determining, at the second stage, the first frame is interlaced based on the machine learning based comparison tests.
49. The machine readable medium of claim 45, wherein determining the second conversion technique comprises determining an individual telecine pattern of the second segment, wherein converting the mixed content video stream comprises reversing the individual telecine pattern, and wherein the individual telecine pattern comprises at least one of a 3:2 pull down pattern, a 2:3:3:2 pull down pattern, a 4:1 pull down pattern or a blended pattern.
50. The machine readable medium of claim 45, wherein determining the second conversion technique comprises determining the second segment has an undetected pattern and wherein converting the mixed content video stream comprises applying an adaptive interlace reverser to the second segment.
| # | Name | Date |
|---|---|---|
| 1 | Priority Document [11-02-2017(online)].pdf | 2017-02-11 |
| 2 | Form 5 [11-02-2017(online)].pdf | 2017-02-11 |
| 3 | Drawing [11-02-2017(online)].pdf | 2017-02-11 |
| 4 | Description(Complete) [11-02-2017(online)].pdf_372.pdf | 2017-02-11 |
| 5 | Description(Complete) [11-02-2017(online)].pdf | 2017-02-11 |
| 6 | Description(Complete) [11-02-2017 (online)]..pdf | 2017-02-11 |
| 7 | Form5_As Filed_20-02-2017.pdf | 2017-02-20 |
| 8 | Form 3 [27-02-2017(online)].pdf | 2017-02-27 |
| 9 | Form 26 [27-02-2017(online)].pdf | 2017-02-27 |
| 10 | Form 18 [27-02-2017(online)].pdf | 2017-02-27 |
| 11 | Correspondence By Agent_Power Of Attorney_01-03-2017.pdf | 2017-03-01 |
| 12 | Other Patent Document [23-03-2017(online)].pdf | 2017-03-23 |
| 13 | Correspondence by Agent_Proof of Right_27-03-2017.pdf | 2017-03-27 |
| 14 | Other Document [12-04-2017(online)].pdf | 2017-04-12 |
| 15 | Marked Copy [12-04-2017(online)].pdf | 2017-04-12 |
| 16 | Form 13 [12-04-2017(online)].pdf | 2017-04-12 |
| 17 | 201747004954-FORM 3 [12-02-2018(online)].pdf | 2018-02-12 |
| 18 | 201747004954-FER.pdf | 2020-06-11 |
| 19 | 201747004954-FORM 3 [19-11-2020(online)].pdf | 2020-11-19 |
| 20 | 201747004954-OTHERS [30-11-2020(online)].pdf | 2020-11-30 |
| 21 | 201747004954-FER_SER_REPLY [30-11-2020(online)].pdf | 2020-11-30 |
| 22 | 201747004954-CLAIMS [30-11-2020(online)].pdf | 2020-11-30 |
| 23 | 201747004954-PA [10-01-2023(online)].pdf | 2023-01-10 |
| 24 | 201747004954-ASSIGNMENT DOCUMENTS [10-01-2023(online)].pdf | 2023-01-10 |
| 25 | 201747004954-8(i)-Substitution-Change Of Applicant - Form 6 [10-01-2023(online)].pdf | 2023-01-10 |
| 26 | 201747004954-US(14)-HearingNotice-(HearingDate-23-01-2024).pdf | 2023-12-27 |
| 27 | 201747004954-Correspondence to notify the Controller [10-01-2024(online)].pdf | 2024-01-10 |
| 28 | 201747004954-Written submissions and relevant documents [07-02-2024(online)].pdf | 2024-02-07 |
| 29 | 201747004954-PETITION UNDER RULE 137 [07-02-2024(online)].pdf | 2024-02-07 |
| 30 | 201747004954-Annexure [07-02-2024(online)].pdf | 2024-02-07 |
| 31 | 201747004954-PatentCertificate13-02-2024.pdf | 2024-02-13 |
| 1 | SS(201747004954)E_11-06-2020.pdf | |