
System And Method For Compressing Video Using Deep Learning

Abstract: A method and system for compressing videos using deep learning is disclosed. The method includes segmenting each of a plurality of frames associated with a video into a plurality of super blocks. The method further includes determining a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks using a Convolutional Neural Network (CNN). The method further includes generating a prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN. The method further includes determining a residual data for each of the plurality of sub blocks by subtracting the prediction data from an associated original data. The method includes generating a transformed quantized residual data using each of a transformation algorithm and a quantization algorithm.


Patent Information

Application #
Filing Date
28 March 2019
Publication Number
40/2020
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
bangalore@knspartners.com
Parent Application
Patent Number
Legal Status
Grant Date
2023-11-16
Renewal Date

Applicants

WIPRO LIMITED
Doddakannelli, Sarjapur Road, Bangalore

Inventors

1. SETHURAMAN ULAGANATHAN
# 76/3, South Vaikolkara street, woraiyur (PO), Ramalinganagar, Tiruchinapalli (DT) 620003
2. MANJUNATH RAMACHANDRA
80, sadhana, 2nd main, BSK 3rd stage, Katriguppe East, Bangalore-560085

Specification

Claims:

What is claimed is:
1. A method of compressing videos using deep learning, the method comprising:
segmenting, by a video compressing device, each of a plurality of frames associated with a video into a plurality of super blocks based on an element present in each of the plurality of frames and a motion associated with the element;
determining, by the video compressing device, a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks using a Convolutional Neural Network (CNN);
generating, by the video compressing device, a prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN, wherein the CNN predicts the motion vector based on co-located frames;
determining, by the video compressing device, a residual data for each of the plurality of sub blocks by subtracting the prediction data from an associated original data, wherein the associated original data is a bit stream of each of the plurality of sub blocks; and
generating, by the video compressing device, a transformed quantized residual data using each of a transformation algorithm and a quantization algorithm based on a plurality of parameters associated with the residual data, such as the compression rate and signal-to-noise ratio.
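The residual, transform, and quantization steps recited in claim 1 can be sketched as follows. This is a minimal illustration only: the 2-D DCT and the single quantization step size are assumed stand-ins for the claimed transformation and quantization algorithms (which, per claim 8, may instead be CNN- or wavelet-based), and all names here are hypothetical.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from the 1-D DCT basis matrix."""
    n = block.shape[0]
    k = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)  # DC row uses the flat basis vector
    return basis @ block @ basis.T

def transform_and_quantize(original, prediction, step=16.0):
    """Residual = original - prediction; transform it and quantize the coefficients."""
    residual = original.astype(np.float64) - prediction.astype(np.float64)
    coeffs = dct2(residual)
    return np.round(coeffs / step).astype(np.int32)

rng = np.random.default_rng(0)
orig_block = rng.integers(0, 256, size=(8, 8))
pred_block = np.clip(orig_block + rng.integers(-3, 4, size=(8, 8)), 0, 255)
quantized = transform_and_quantize(orig_block, pred_block)
```

Under these assumptions, an accurate prediction leaves a small residual whose quantized coefficients are mostly zero, which is what makes a subsequent entropy-coding stage effective.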

2. The method of claim 1, further comprising:
receiving the video from an external computing device through an interface; and
performing pre-processing analytics on the video, wherein the pre-processing analytics comprises at least one of removal of noise or converting of Red Green Blue (RGB) color space to YCbCr color space.
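The RGB-to-YCbCr conversion named in the pre-processing step can be illustrated with the standard ITU-R BT.601 full-range matrix. The claim does not specify which conversion variant is used, so BT.601 is an assumption here:

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range conversion for 8-bit samples.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

rgb_to_ycbcr(255, 255, 255)  # white -> (255, 128, 128)
```

Separating luma (Y) from chroma (Cb, Cr) matters for compression because the chroma planes can be quantized or subsampled more aggressively than luma.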

3. The method of claim 1, further comprising training the CNN for each of the plurality of super blocks based on the feature of a plurality of sets of frames associated with a plurality of video compression techniques and a user feedback to the CNN, wherein the feature comprises at least one of a size of the super block and a motion related information.

4. The method of claim 3, further comprising predicting, by the trained CNN, for each of the plurality of super blocks, at least one of a prediction data, the block size, or a motion related information.

5. The method of claim 1, further comprising selecting, by the CNN, a suitable prediction mode, wherein the suitable prediction mode is at least one of an inter mode or an intra mode.

6. The method of claim 5, wherein the inter mode comprises prediction between a frame and at least one adjacent frame within the plurality of frames, and wherein the intra mode comprises prediction within the frame.
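The inter/intra mode selection of claims 5 and 6 can be sketched with a simple residual-energy comparison. Note this cost function is an illustrative stand-in: per claim 5, the selection in the disclosed method is made by the CNN.

```python
import numpy as np

def choose_prediction_mode(block, intra_pred, inter_pred):
    # Pick whichever prediction leaves the smaller residual energy:
    # intra_pred comes from within the same frame, inter_pred from
    # an adjacent frame via motion compensation.
    intra_cost = np.sum((block - intra_pred) ** 2)
    inter_cost = np.sum((block - inter_pred) ** 2)
    return "intra" if intra_cost <= inter_cost else "inter"
```

Static regions tend to favor inter prediction (the co-located block barely changes), while newly revealed content favors intra prediction.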

7. The method of claim 1, wherein the transformation algorithm and the quantization algorithm are applied to compress the residual data.

8. The method of claim 7, wherein the transformation algorithm is based on at least one of the CNN or a Gaussian pulse wavelet.

9. The method of claim 1, further comprising generating a plurality of compressed bit streams for the transformed quantized residual data based on an entropy coding.
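Claim 9 does not detail the entropy-coding scheme. As a minimal stand-in, a run-length pass — a typical precursor to entropy coding of mostly-zero quantized residual coefficients — can be sketched as:

```python
def run_length_encode(symbols):
    # Collapse repeated symbols into (value, run-length) pairs.
    pairs = []
    for s in symbols:
        if pairs and pairs[-1][0] == s:
            pairs[-1] = (s, pairs[-1][1] + 1)
        else:
            pairs.append((s, 1))
    return pairs

run_length_encode([0, 0, 0, 5, 0, 0])  # -> [(0, 3), (5, 1), (0, 2)]
```

In practice the resulting pairs would then be fed to an entropy coder (e.g. Huffman or arithmetic coding) to produce the compressed bit streams.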

10. The method of claim 1, wherein the element comprises at least one of an object present in a frame of the plurality of frames and texture associated with the object.

11. A video compressing device using deep learning, the video compressing device comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which, on execution, cause the processor to:
segment each of a plurality of frames associated with a video into a plurality of super blocks based on an element present in each of the plurality of frames and a motion associated with the element;
determine a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks using a Convolutional Neural Network (CNN);
generate a prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN, wherein the CNN predicts the motion vector based on co-located frames;

Description:

SYSTEM AND METHOD FOR COMPRESSING VIDEO USING DEEP LEARNING
DESCRIPTION
Technical Field
[001] This disclosure relates generally to video compression, and more particularly to a method and system for compressing videos using deep learning.
Background
[002] The importance of video compression has increased manifold due to an exponential increase in on-line streaming and increased volume of video storage on the cloud. In conventional video coding or compressing algorithms, block based compression is a common practice. The video frames may be fragmented into blocks of fixed size for further processing. However, the fragmentation may result in creation of redundant blocks, which may increase the computation requirement. Further, use of hybrid video coding methods to decide the prediction modes may complicate the process.
[003] Some of the conventional methods discuss video compression using learned dictionaries, either with fixed or self-adaptive atoms, plus a fixed transform basis. In such methods, blocks may be represented by weighted dictionaries and transform basis coefficients. These conventional methods may implement deep learning for video compression; however, they may not use variable block sizes and may set forth the idea of fixed-size blocks for processing. This may further result in redundancy in processing, as many of the blocks may have the same features.
SUMMARY
[004] In one embodiment, a method of compressing videos using deep learning is disclosed. The method may include segmenting each of a plurality of frames associated with a video into a plurality of super blocks based on an element present in each of the plurality of frames and a motion associated with the element. The method may further include determining a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks using a Convolutional Neural Network (CNN). The method may further include generating a prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN, where the CNN predicts the motion vector based on co-located frames. The method may further include determining a residual data for each of the plurality of sub blocks by subtracting the prediction data from an associated original data, wherein the associated original data is a bit stream of each of the plurality of sub blocks. The method may further include generating a transformed quantized residual data using each of a transformation algorithm and a quantization algorithm based on a plurality of parameters associated with the residual data, such as the compression rate and signal-to-noise ratio.
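The variable block-size partitioning described above can be sketched as a recursive quadtree split. Here pixel variance is a hand-written stand-in for the CNN-derived feature, and the minimum size and threshold values are assumptions for illustration only:

```python
import numpy as np

def partition_superblock(block, min_size=8, var_threshold=100.0):
    # Recursively split a square super block into quadrants while its
    # pixel variance stays high; flat regions keep large blocks, detailed
    # regions get small ones, avoiding redundant fixed-size blocks.
    n = block.shape[0]
    if n <= min_size or np.var(block) < var_threshold:
        return [(n, n)]
    h = n // 2
    sizes = []
    for quad in (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]):
        sizes.extend(partition_superblock(quad, min_size, var_threshold))
    return sizes
```

A uniform background thus stays one large block, while a textured region splits down to the minimum size — the adaptivity that fixed-size partitioning lacks.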
[005] In another embodiment, a video compressing device in the cloud environment is disclosed. The video compressing device includes a processor and a memory communicatively coupled to the processor, where the memory stores processor instructions, which, on execution, cause the processor to segment each of a plurality of frames associated with a video into a plurality of super blocks based on an element present in each of the plurality of frames and a motion associated with the element. The processor instructions further cause the processor to determine a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks using a Convolutional Neural Network (CNN). The processor instructions further cause the processor to generate a prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN, where the CNN predicts the motion vector based on co-located frames. The processor instructions further cause the processor to determine a residual data for each of the plurality of sub blocks by subtracting the prediction data from an associated original data, where the associated original data is a bit stream of each of the plurality of sub blocks. The processor instructions further cause the processor to generate a transformed quantized residual data using each of a transformation algorithm and a quantization algorithm based on a plurality of parameters associated with the residual data, such as the compression rate and signal-to-noise ratio.
[006] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[007] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[008] FIG. 1 is a block diagram of a system for compressing videos using deep learning, in accordance with an embodiment.
[009] FIG. 2 illustrates a block diagram of an internal architecture of a video compressing device that is configured to compress videos using deep learning, in accordance with an embodiment.
[010] FIG. 3 illustrates a flowchart of a method for compressing videos using deep learning, in accordance with an embodiment.
[011] FIG. 4 illustrates a flowchart of a method for compressing videos using deep learning, in accordance with another embodiment.
[012] FIG. 5 illustrates a flow diagram depicting processing of a video through various components of a video compressing device configured to compress videos using deep learning, in accordance with an embodiment.
[013] FIG. 6 illustrates step wise compressing of a video of a news anchor on a news channel, in accordance with an exemplary embodiment.
[014] FIG. 7 is a block diagram of an exemplary computer system for implementing embodiments.
DETAILED DESCRIPTION
[015] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.

Documents

Application Documents

# Name Date
1 201941012297-STATEMENT OF UNDERTAKING (FORM 3) [28-03-2019(online)].pdf 2019-03-28
2 201941012297-REQUEST FOR EXAMINATION (FORM-18) [28-03-2019(online)].pdf 2019-03-28
3 201941012297-POWER OF AUTHORITY [28-03-2019(online)].pdf 2019-03-28
4 201941012297-FORM 18 [28-03-2019(online)].pdf 2019-03-28
5 201941012297-FORM 1 [28-03-2019(online)].pdf 2019-03-28
6 201941012297-DRAWINGS [28-03-2019(online)].pdf 2019-03-28
7 201941012297-DECLARATION OF INVENTORSHIP (FORM 5) [28-03-2019(online)].pdf 2019-03-28
8 201941012297-COMPLETE SPECIFICATION [28-03-2019(online)].pdf 2019-03-28
9 201941012297-Request Letter-Correspondence [29-03-2019(online)].pdf 2019-03-29
10 201941012297-Power of Attorney [29-03-2019(online)].pdf 2019-03-29
11 201941012297-Form 1 (Submitted on date of filing) [29-03-2019(online)].pdf 2019-03-29
12 201941012297-Proof of Right (MANDATORY) [29-08-2019(online)].pdf 2019-08-29
13 Correspondence by Agent _Form 1_Form 30_04-09-2019.pdf 2019-09-04
14 201941012297-FORM 3 [27-08-2021(online)].pdf 2021-08-27
15 201941012297-PETITION UNDER RULE 137 [28-08-2021(online)].pdf 2021-08-28
16 201941012297-CLAIMS [29-08-2021(online)].pdf 2021-08-29
17 201941012297-COMPLETE SPECIFICATION [29-08-2021(online)].pdf 2021-08-29
18 201941012297-DRAWING [29-08-2021(online)].pdf 2021-08-29
19 201941012297-FER_SER_REPLY [29-08-2021(online)].pdf 2021-08-29
20 201941012297-OTHERS [29-08-2021(online)].pdf 2021-08-29
21 201941012297-FER.pdf 2021-10-17
22 201941012297-PatentCertificate16-11-2023.pdf 2023-11-16
23 201941012297-IntimationOfGrant16-11-2023.pdf 2023-11-16
24 201941012297-PROOF OF ALTERATION [14-03-2024(online)].pdf 2024-03-14

Search Strategy

1 Search_Strategy_201921012297E_26-02-2021.pdf

ERegister / Renewals

3rd: 15 Feb 2024

From 28/03/2021 - To 28/03/2022

4th: 15 Feb 2024

From 28/03/2022 - To 28/03/2023

5th: 15 Feb 2024

From 28/03/2023 - To 28/03/2024

6th: 15 Feb 2024

From 28/03/2024 - To 28/03/2025

7th: 28 Mar 2025

From 28/03/2025 - To 28/03/2026