
Method And System For Performing Context Based Transformation Of A Video

Abstract: Disclosed herein is a method and system for performing context-based transformation of a video. In an embodiment, a scene descriptor and a textual descriptor are generated for each scene of the video. An audio context descriptor is then determined based on semantic analysis of the textual descriptor. Subsequently, the audio context descriptor and the scene descriptor are correlated to generate a scene context descriptor for each scene. Finally, the video is translated using the scene context descriptor, thereby transforming the video based on context. In some embodiments, the method of the present disclosure is capable of automatically changing one or more attributes, such as the color of one or more scenes in the video, in response to a change in the context of the audio/speech signals corresponding to the video. The present method thus helps in effective rendering of a video to users. FIG. 1


Patent Information

Application #:
Filing Date: 15 February 2018
Publication Number: 34/2019
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Email: bangalore@knspartners.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-01-08
Renewal Date:

Applicants

WIPRO LIMITED
Doddakannelli, Sarjapur Road, Bangalore 560035, Karnataka, India.

Inventors

1. MANJUNATH RAMACHANDRA
80, Sadhana, 2nd Main, BSK 3rd Stage, Katriguppe East, Bangalore-560085, Karnataka, India.
2. SETHURAMAN ULAGANATHAN
#76/3, South Vaikolkara Street, Woraiyur (PO), RamalingaNagar, Tiruchirapalli (DT) 620003, Tamil Nadu, India.

Specification

Claims:

WE CLAIM:

1. A method for performing context-based transformation of a video (102), the method comprising:
generating, by a video transformation system (103), a scene descriptor (107) for each of one or more scenes (105) corresponding to the video (102);
generating, by the video transformation system (103), a textual descriptor (109) for each of one or more speech segments related to the one or more scenes (105);
determining, by the video transformation system (103), an audio context descriptor (111) based on semantic analysis of the textual descriptor (109) of each of the one or more speech segments;
correlating, by the video transformation system (103), the audio context descriptor (111) with the scene descriptor (107) for generating a scene context descriptor (113) for each of the one or more scenes (105); and
translating, by the video transformation system (103), each of the one or more scenes (105) using the scene context descriptor (113) for transforming the video (102).
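The specification as reproduced here contains no code; purely as an illustrative sketch, the five steps of claim 1 could be modelled as below. The `Scene` container, the keyword-to-context lookup, and the context-to-tint mapping are all hypothetical stand-ins, not the semantic-analysis or correlation techniques actually disclosed:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    objects: list      # labels of objects detected in the scene
    actions: list      # actions performed by those objects
    background: dict   # background attributes (e.g. dominant colour)
    speech_text: str   # transcript of the speech segment for the scene

def scene_descriptor(scene):
    # Claim 4's parameters: objects, their actions, background attributes.
    return {"objects": scene.objects, "actions": scene.actions,
            "background": scene.background}

def audio_context_descriptor(text):
    # Stand-in "semantic analysis": a simple keyword-to-context lookup.
    contexts = {"storm": "tense", "sunrise": "calm"}
    for word, ctx in contexts.items():
        if word in text.lower():
            return ctx
    return "neutral"

def scene_context_descriptor(scene):
    # Correlate the audio context with the visual scene descriptor.
    return {"scene": scene_descriptor(scene),
            "audio_context": audio_context_descriptor(scene.speech_text)}

def transform(scene):
    # Translate the scene using its context, e.g. re-tint the background
    # (one example of the attribute change mentioned in the abstract).
    ctx = scene_context_descriptor(scene)
    tint = {"tense": "dark blue",
            "calm": "warm orange"}.get(ctx["audio_context"], "unchanged")
    return {**ctx, "background_tint": tint}
```

A scene whose speech segment mentions a storm would thus acquire a "tense" audio context and a correspondingly darker background tint.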

2. The method as claimed in claim 1, further comprising eliminating one or more redundant scenes corresponding to the video (102) upon detecting a similar scene descriptor (107) for the one or more scenes (105).

3. The method as claimed in claim 2, wherein similarity among the scene descriptor (107) of the one or more scenes (105) is determined by quantifying divergence between the scene descriptor (107) of one or more consecutive scenes.
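Claims 2 and 3 do not specify how the divergence between consecutive scene descriptors is quantified; as a rough sketch only, one plausible measure (a Jaccard distance over the descriptors' object-label sets, with a hypothetical threshold) might look like this:

```python
def divergence(desc_a, desc_b):
    # Hypothetical divergence measure: Jaccard distance between the
    # object-label sets of two consecutive scene descriptors.
    a, b = set(desc_a["objects"]), set(desc_b["objects"])
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def drop_redundant(descriptors, threshold=0.2):
    # Keep a scene only if its descriptor diverges enough from the
    # previously kept one; otherwise treat it as redundant (claim 2).
    kept = [descriptors[0]]
    for desc in descriptors[1:]:
        if divergence(kept[-1], desc) > threshold:
            kept.append(desc)
    return kept
```

Two consecutive scenes showing the same boat at sea would score a divergence of zero and the second would be dropped, while a cut to a different setting would be kept.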

4. The method as claimed in claim 1, wherein the scene descriptor (107) is generated using one or more parameters comprising objects present in the one or more scenes (105), actions performed by the objects, and attributes of background of the objects in the one or more scenes (105).

5. The method as claimed in claim 1, wherein the scene descriptor (107) comprises labels and description for objects present in the one or more scenes (105).

6. The method as claimed in claim 1, wherein generating the textual descriptor (109) comprises translating each of the one or more speech segments into corresponding text segments using a predetermined conversion technique.

7. A video transformation system (103) for performing context-based transformation of a video (102), the video transformation system (103) comprising:
a processor (203); and
a memory (205), communicatively coupled to the processor (203), wherein the memory (205) stores processor-executable instructions, which on execution, cause the processor (203) to:
generate a scene descriptor (107) for each of one or more scenes (105) corresponding to the video (102);
generate a textual descriptor (109) for each of one or more speech segments related to the one or more scenes (105);
determine an audio context descriptor (111) based on semantic analysis of the textual descriptor (109) of each of the one or more speech segments;
correlate the audio context descriptor (111) with the scene descriptor (107) to generate a scene context descriptor (113) for each of the one or more scenes (105); and
translate each of the one or more scenes (105) using the scene context descriptor (113) to transform the video (102).

8. The video transformation system (103) as claimed in claim 7, wherein the processor (203) eliminates one or more redundant scenes corresponding to the video (102) upon detecting similar scene descriptor (107) for the one or more scenes (105).

9. The video transformation system (103) as claimed in claim 8, wherein the processor (203) quantifies divergence between the scene descriptor (107) of one or more consecutive scenes to determine similarity among the scene descriptor (107) of the one or more scenes (105).

10. The video transformation system (103) as claimed in claim 7, wherein the processor (203) generates the scene descriptor (107) using one or more parameters comprising objects present in the one or more scenes (105), actions performed by the objects, and attributes of background of the objects in the one or more scenes (105).

11. The video transformation system (103) as claimed in claim 7, wherein the scene descriptor (107) comprises labels and description for objects present in the one or more scenes (105).

12. The video transformation system (103) as claimed in claim 7, wherein the processor (203) translates each of the one or more speech segments into corresponding text segments using a predetermined conversion technique to generate the textual descriptor (109).

Dated this 15th day of February 2018

SWETHA S. N
IN/PA-2123
OF K&S PARTNERS
ATTORNEY FOR THE APPLICANT
Description:

TECHNICAL FIELD
The present subject matter is, in general, related to video processing and more particularly, but not exclusively, to a method and system for performing context-based transformation of a video.

Documents

Application Documents

# Name Date
1 201841005827-STATEMENT OF UNDERTAKING (FORM 3) [15-02-2018(online)].pdf 2018-02-15
2 201841005827-REQUEST FOR EXAMINATION (FORM-18) [15-02-2018(online)].pdf 2018-02-15
3 201841005827-POWER OF AUTHORITY [15-02-2018(online)].pdf 2018-02-15
4 201841005827-FORM 18 [15-02-2018(online)].pdf 2018-02-15
5 201841005827-FORM 1 [15-02-2018(online)].pdf 2018-02-15
6 201841005827-DRAWINGS [15-02-2018(online)].pdf 2018-02-15
7 201841005827-DECLARATION OF INVENTORSHIP (FORM 5) [15-02-2018(online)].pdf 2018-02-15
8 201841005827-COMPLETE SPECIFICATION [15-02-2018(online)].pdf 2018-02-15
9 201841005827-REQUEST FOR CERTIFIED COPY [05-03-2018(online)].pdf 2018-03-05
10 201841005827-Proof of Right (MANDATORY) [24-04-2018(online)].pdf 2018-04-24
11 201841005827-Proof of Right (MANDATORY) [24-04-2018(online)]-1.pdf 2018-04-24
12 Correspondence by Agent_Form30_01-05-2018.pdf 2018-05-01
13 201841005827-ABSTRACT [01-10-2021(online)].pdf 2021-10-01
14 201841005827-CLAIMS [01-10-2021(online)].pdf 2021-10-01
15 201841005827-CORRESPONDENCE [01-10-2021(online)].pdf 2021-10-01
16 201841005827-DRAWING [01-10-2021(online)].pdf 2021-10-01
17 201841005827-FER_SER_REPLY [01-10-2021(online)].pdf 2021-10-01
18 201841005827-FORM 3 [01-10-2021(online)].pdf 2021-10-01
19 201841005827-OTHERS [01-10-2021(online)].pdf 2021-10-01
20 201841005827-PETITION UNDER RULE 137 [01-10-2021(online)].pdf 2021-10-01
21 201841005827-FER.pdf 2021-10-17
22 201841005827-IntimationOfGrant08-01-2024.pdf 2024-01-08
23 201841005827-PatentCertificate08-01-2024.pdf 2024-01-08
24 201841005827-ASSIGNMENT WITH VERIFIED COPY [26-02-2024(online)].pdf 2024-02-26
25 201841005827-FORM-16 [26-02-2024(online)].pdf 2024-02-26
26 201841005827-POWER OF AUTHORITY [26-02-2024(online)].pdf 2024-02-26

Search Strategy

1 2021-03-2514-59-47E_25-03-2021.pdf

E-Register / Renewals

3rd: 26 Mar 2024 (from 15/02/2020 to 15/02/2021)
4th: 26 Mar 2024 (from 15/02/2021 to 15/02/2022)
5th: 26 Mar 2024 (from 15/02/2022 to 15/02/2023)
6th: 26 Mar 2024 (from 15/02/2023 to 15/02/2024)
7th: 26 Mar 2024 (from 15/02/2024 to 15/02/2025)