
A Real Time Predictive Tool For An Attribute Association

Abstract: The invention provides a real-time predictive tool for improving viewer engagement in video content, aiding a video creator in creating engaging and interactive video. The tool includes a data capture module (101) configured to collect engagement-related data from users interacting with the video content. The collected data is processed and analysed by a processor (103) to generate nuanced data, which is stored in a data lake (105). A predictive engine (107) suggests the inclusion of a plurality of interactive elements in a video based on an evaluation of the business outcomes of the interactive elements, and recommends adjustments to the fonts, positioning, and color themes of the interactive elements based on engagement metrics data obtained from the data lake (105). The predictive engine (107) is coupled to a user interface (109) configured to render the video with the recommended interactive elements embedded within the video content. FIG. 1


Patent Information

Application #
Filing Date
31 July 2024
Publication Number
37/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

KPOINT TECHNOLOGIES PRIVATE LIMITED
201, 2nd Floor, S. R. Iriz, Baner - Pashan Link Road, Pashan, Pune- 411021, Maharashtra, India

Inventors

1. Pranav Nabar
Kpoint Technologies Private Limited 201, 2nd Floor, S. R. Iriz, Baner - Pashan Link Road, Pashan, Pune- 411021, Maharashtra, India
2. Pushyamitra Navare
Kpoint Technologies Private Limited 201, 2nd Floor, S. R. Iriz, Baner - Pashan Link Road, Pashan, Pune- 411021, Maharashtra, India
3. Rashmi Chaudhari
Kpoint Technologies Private Limited 201, 2nd Floor, S. R. Iriz, Baner - Pashan Link Road, Pashan, Pune- 411021, Maharashtra, India
4. Mohd Shahid Khan
Kpoint Technologies Private Limited 201, 2nd Floor, S. R. Iriz, Baner - Pashan Link Road, Pashan, Pune- 411021, Maharashtra, India
5. Amol Potnis
Kpoint Technologies Private Limited 201, 2nd Floor, S. R. Iriz, Baner - Pashan Link Road, Pashan, Pune- 411021, Maharashtra, India

Specification

A REAL TIME PREDICTIVE TOOL FOR AN ATTRIBUTE ASSOCIATION
FIELD OF THE INVENTION
The invention generally relates to the field of video editing and particularly to a real time predictive tool for an attribute association to improve viewer engagement.
BACKGROUND OF THE INVENTION
Viewer engagement is one of the key aspects in the success of any video marketing. A well-executed video can capture the viewers’ attention, thereby increasing the number of views and viewer retention rate. Viewer engagement of a video is based on the quality and allurement of the video content.
The video creators are continuously striving to make their video gain maximum popularity by improving the quality of the video based on the recent trends and by putting eye-catching thumbnails and titles. The existing technologies give information about the behavioral patterns of the viewer. However, they do not provide information that helps to predict the video content pattern which ensures maximum viewership and viewer engagement.
There are systems known in the art that provide ways of improving user/viewer engagement. One system known in the art provides a method for integrating interactive call-to-action, contextual applications with videos. The method suggests 'call-to-actions' to be added to a video at a given time. However, the method has disadvantages in terms of identifying suitable attributes for the call-to-actions that the content creator intends to add. US9588663B2 discloses a system and method of delivering an interactive video application. The method includes identifying a hotspot in a portion of video content and suggesting call-to-action options to be added to a video at a given time.
Another system existing in the art provides media for presenting interactive elements within video content. US20240040211A1 discloses methods, systems and media for presenting interactive elements within video content. The system generates interactive video content in between the video frames.
Even though there are various interactive systems in the art which generate video content in real time and include it in between the existing video content, none of the existing systems suggests to the user a better temporal attribute, positional attribute, and/or design attribute for the action element added by the user.
Thus, there is a need in the art for a real-time predictive tool for the association of an attribute, which provides video creators with analytical information that helps them design, curate and publish their videos in a smarter and more optimized way to gain maximum viewership and viewer engagement.
SUMMARY
The invention relates to a real-time predictive tool for association of an attribute to improve viewer engagement in video content. The predictive tool includes a data capture module configured to collect engagement-related data from users interacting with the video content. The collected data is processed and analysed by a processor to generate nuanced data, which is stored in a data lake. The nuanced data relates to information on user behaviour, engagement metrics and the like. The data capture module, the processor and the data lake are coupled to a predictive engine. The predictive engine includes a machine learning module, a video analytics module and an interactive engagement module. The predictive engine suggests the inclusion of a plurality of interactive elements in a video by means of the machine learning module; the video analytics module evaluates the business outcomes of each of the interactive elements; and the interactive engagement module provides further recommendations for adjustments to fonts, duration, size of the elements, color theme, and grid placements of one or more interactive widgets and/or one or more interactive elements, based on the engagement metrics data retrieved from the data lake. The predictive engine is coupled to a user interface configured to render the video with the recommended interactive elements embedded within the video content.
BRIEF DESCRIPTION OF DRAWINGS
So that the manner in which the recited features of the invention can be understood in detail, some of the embodiments are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
Fig. 1 shows a high-level architecture of a predictive tool for association of an attribute to a video content, according to an embodiment of the invention.
Fig. 2 illustrates the suggestion provided by the real time predictive tool based on historical engagement analysis, according to an example of the invention.
DETAILED DESCRIPTION OF THE INVENTION
The definitions, terms and terminology adopted in the disclosure have their usual meaning and interpretations, unless otherwise specified.
The terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the use of the terms a, an, etc. does not denote a limitation of quantity, but rather denotes the presence of at least one of the referenced items. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof. It will be further understood that for the purposes of this disclosure, “at least one of” will be interpreted to mean any combination of the enumerated elements following the respective language, including combination of multiples of the enumerated elements.
Unless otherwise defined, all terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Throughout the drawings and the detailed description, unless otherwise described, the same drawing reference numerals are understood to refer to the same elements, features, and structures. The relative size and depiction of these elements may be exaggerated for clarity, illustration, and convenience.
The terms “user” and “viewer” are used interchangeably throughout the description of the invention.
Various embodiments of the invention provide a real-time predictive tool for association of an attribute to a video content using interactive machine learning in order to improve viewer engagement. The predictive tool is configured for the association of an attribute to a video content. The attributes described herein include but are not limited to a temporal attribute, a positional attribute, and/or a design attribute. The attributes are added/associated to a plurality of action elements after the action elements are inserted into the video by the content creator. The action elements include interactive elements. Examples of action elements include selecting options from a list, redirecting to a website and filling in a form inside a video. The redirected websites can be related to polls, quizzes, maps, sales or offers related to the contents in the video, purchasing a good or service, a payment gateway, etc. In one embodiment of the invention, the attributes are added to the action elements in real time. In one example of the invention, the attributes are added at the time of creation of the video based on the suggestions received from the predictive tool. In another example of the invention, the attributes are added at the time of live streaming of the video based on parameters such as the content of the video and user engagement.
According to one embodiment of the invention, a real-time predictive tool is provided for attribute association to improve viewer engagement in video content. The real-time predictive tool includes a data capture module configured to collect engagement-related data from users interacting with the video content. The collected data is processed and analysed by a processor to generate nuanced data. The nuanced data is stored in a data lake. The data capture module, the processor and the data lake are coupled to a predictive engine. The predictive engine includes a machine learning module, a video analytics module and an interactive engagement module. The predictive engine suggests the inclusion of a plurality of interactive elements in a video based on an evaluation of the business outcomes of each of the interactive elements, and recommends adjustments to fonts, duration, size of the elements, color theme, and grid placements of one or more interactive widgets and/or one or more interactive elements based on engagement metrics data retrieved from the data lake. The predictive engine is coupled to a user interface configured to render the video with the recommended interactive elements embedded within the video content. Based on the user engagement data, the tool may further make changes to the interactive elements and generate real-time video content that may improve viewership.
Fig. 1 shows a high-level architecture of a predictive tool for association of an attribute to a video content, according to an embodiment of the invention.
The predictive tool 100 includes a data capture module 101. The data capture module 101 captures the metadata related to the engagement of the viewer with the video content. Examples of the metadata include geolocation of the viewer, device details, screen size, and viewership details. In one embodiment of the invention, the data capture module 101 captures the passive interaction details of the viewer which helps a video creator to design, curate and publish their content in a smarter and optimized way. An Application Programming Interface (API) in the data capture module 101 tracks the viewer interaction with the controls, player and the widgets inside the video.
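The specification does not give an implementation of the data capture module (101); as an illustration only, a minimal sketch of such a module might look like the following, where the class and field names (`EngagementEvent`, `DataCaptureModule`, `track`, `export`) are hypothetical and chosen here for clarity, not taken from the patent:

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class EngagementEvent:
    """One viewer interaction captured by the data capture module (101)."""
    viewer_id: str
    event_type: str       # e.g. "play", "pause", "widget_click"
    timestamp_s: float    # position in the video, in seconds
    geolocation: str
    device: str
    screen_size: str

class DataCaptureModule:
    """Collects engagement-related metadata from player/widget callbacks."""
    def __init__(self) -> None:
        self.events: List[EngagementEvent] = []

    def track(self, event: EngagementEvent) -> None:
        # In a real deployment this would be fed by an API hooked into
        # the player controls and widgets; here we just accumulate events.
        self.events.append(event)

    def export(self) -> List[dict]:
        # Hand the raw, unstructured records off to the ETL pipeline.
        return [asdict(e) for e in self.events]
```

In practice such events would be streamed to a backend rather than held in memory; this sketch only shows the shape of the metadata the description enumerates (geolocation, device details, screen size, viewership details).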
The data captured by the data capture module 101 is then analysed by a processor 103. In one embodiment of the invention, an Extract, Transform, Load (ETL) pipeline extracts normalised data from the unstructured data, cleans the data and finds anomalies within the heap of data. The ETL pipeline acts as a crucial catalyst for the processor 103. The data is then post-processed to generate more nuanced information. The nuanced data is then stored in a data lake 105. The data lake 105 powers the analytics, reporting and data science requirements.
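The extract–clean–aggregate flow described above can be sketched as a toy ETL function. This is an assumption-laden illustration, not the patented pipeline: the field names and the "clicks per event type" metric are invented here to make the three stages concrete.

```python
def etl_pipeline(raw_events):
    """Extract, Transform, Load: normalise raw events, drop anomalies,
    and aggregate into a nuanced engagement metric."""
    # Extract: keep only records that carry the required fields.
    required = {"viewer_id", "event_type", "timestamp_s"}
    extracted = [e for e in raw_events if required <= e.keys()]

    # Transform: normalise event names and discard anomalous records.
    cleaned = []
    for e in extracted:
        e = dict(e)
        e["event_type"] = e["event_type"].strip().lower()
        if e["timestamp_s"] < 0:          # anomaly: impossible video position
            continue
        cleaned.append(e)

    # Load: aggregate into a simple nuanced metric (counts per event type)
    # destined for the data lake (105).
    data_lake = {}
    for e in cleaned:
        data_lake[e["event_type"]] = data_lake.get(e["event_type"], 0) + 1
    return data_lake
```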
Based on the information received from the data lake, a predictive engine 107 recommends the fonts, duration, size of the elements, grid placements and interactive widgets rendered in a video. In one of the embodiments of the invention, the interactive widgets are call-to-action buttons. The predictive engine 107 includes a Machine Learning module 107a that provides suggestions to the video creators based on the latest trends and metrics. The Machine Learning module 107a is regularly retrained and deployed with updated weights, enabling it to provide suggestions according to the latest trends and metrics. The predictive engine 107 also includes a video analytics module 107b that provides analytical data such as insights into the business outcomes of the video edited using the predictive tool. The video analytics module 107b also provides reinforcement to the machine learning (ML) module 107a and trains it to make decisions to achieve the most optimal results. The machine learning (ML) module 107a provides predictions based on the analytical data to an interactive engagement module 107c. The interactive engagement module 107c suggests to the video creator better fonts, the right position in the video frame, the right time in the video and good color themes based on the analytical data, for enabling a better click rate and impressions.
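One simple way the interactive engagement module (107c) could pick an attribute value is to compare historical click-through rates per candidate value. The following sketch assumes a hypothetical data-lake layout (`{attribute: {value: {"clicks", "impressions"}}}`) that is not specified in the patent; it illustrates the recommendation step rather than the actual ML model:

```python
def recommend_attribute(metrics, attribute):
    """Recommend the attribute value (e.g. position, color theme, font)
    with the highest historical click-through rate in the data lake."""
    history = metrics[attribute]  # {value: {"clicks": n, "impressions": n}}

    def ctr(stats):
        # Guard against division by zero for variants never shown.
        return stats["clicks"] / stats["impressions"] if stats["impressions"] else 0.0

    return max(history, key=lambda value: ctr(history[value]))
```

A production engine would weigh many signals at once (trends, video content, demographics) rather than a single per-attribute CTR table; this only shows the direction of the decision.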

Upon insertion of the suggestions provided by the predictive engine 107, the video is presented to a user 111 through a user interface 109. The user interface 109 can be accessed through an End User Device, hereinafter referred to as EUD. The EUD is at least one of the devices including but not limited to a personal computer, a laptop, a tablet, a smart-phone, a smart TV and all such devices capable of accessing a user interface. The user 111 sends commands through the user interface 109 to perform a plurality of actions. The actions described herein include but are not limited to selecting an option through call-to-action buttons, clicking a link which connects the user 111 to a website for further actions, or filling in a form for registration.
According to another embodiment of the invention, the invention provides a method for providing a real-time predictive tool for association of an attribute to improve viewer engagement in video content. The method includes capturing a plurality of engagement-related data from user interactions by means of a data capture module. The engagement-related data, or metadata, is unstructured data that includes geolocation of the viewer, device details, viewer watch time, click behaviour, pause frequency, and demographic attributes. The said metadata may vary across different age groups, genders and income categories, to name a few. The unstructured data is normalised by means of an Extract, Transform, Load (ETL) pipeline. The normalised data generated is converted into nuanced data. In one example, the captured data is analysed and processed by means of a processor 103 to obtain nuanced data or information. The information includes engagement metrics, historical engagement data and user behaviour information. The nuanced data is stored in a data lake 105. The nuanced data is accessed by the predictive engine 107 for providing suggestions on insertion of the plurality of interactive elements by means of a machine learning module 107a. The interactive elements include quizzes, polls, clickable buttons, call-to-action buttons or annotations. Each of the suggested interactive elements is evaluated for business outcome by means of a video analytics module 107b, for a video creator to choose the inclusion of an appropriate interactive element. An interactive engagement module 107c further recommends changes such as fonts, duration, size of the elements, grid placements and interactive widgets to be placed inside a video based on the engagement metrics data stored in the data lake 105. The video is rendered by including the suggested interactive elements chosen by the video creator and displayed through the user interface.
The user interface 109 supports real-time feedback based on the interaction of a viewer or user 111, which is again captured, processed and stored as nuanced data. The nuanced data from the data lake 105 is assessed by the machine learning module 107a for further refinement and updating of the predictive tool. Thus, the said predictive tool aids a video creator in generating engaging videos with interactive elements that enable a maximum click rate in real time.
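The feedback loop described here — viewer interactions flowing back into the data lake so the ML module can be refined — can be sketched minimally. The class name and the per-variant click/impression layout are assumptions made for illustration, not details from the specification:

```python
class FeedbackLoop:
    """Folds fresh viewer interactions back into the data lake so that
    the ML module (107a) can later be retrained on up-to-date metrics."""

    def __init__(self, data_lake):
        # data_lake: {variant: {"clicks": n, "impressions": n}}
        self.data_lake = data_lake

    def record(self, variant, clicked):
        """Register one rendering of an interactive element and whether
        the viewer clicked it."""
        stats = self.data_lake.setdefault(variant, {"clicks": 0, "impressions": 0})
        stats["impressions"] += 1
        if clicked:
            stats["clicks"] += 1
```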
EXAMPLE 1
A video creator is editing a video. The real-time predictive tool for attribute association suggests the most interesting call-to-action buttons depending on the content of the video. The tool provides visual and descriptive suggestions for adding a call-to-action button on the video frame, along with an analysis justifying the addition of interactions to the video to increase viewer engagement. The analysis provides justification for adding a particular call-to-action button to the video for increasing viewer engagement. The tool further recommends the type of call-to-action buttons to be added according to the nature of the video content, the part of the video at which the call-to-action button should be placed to get maximum click rates, and the colour to be attributed to the call-to-action buttons so that the brand stands out. Thus, the video creator gets support with evidence and justification to add interactive call-to-action buttons in the video to get maximum viewership and viewer engagement.
EXAMPLE 2
A live video is playing and there is a sudden rise in viewership because of the content of the video. The real-time predictive tool for attribute association gets triggered and real-time interactivity is added onto the video in the form of call-to-action buttons, for viewers to get engaged. The interactivity depends upon the nature of the live video. The call-to-action buttons can be a ‘BUY NOW’ button for shopping, a ‘RENEW’ button for an insurance use case or an ‘EMAIL’ capture for marketing videos. The call-to-action button gets triggered at the right moment to capture the viewer’s attention.
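The spike-detection trigger in this example is not specified in the patent; as one plausible sketch, a moving-average baseline could be compared against the latest viewership sample, with the threshold ratio and window size as assumed tuning parameters:

```python
def live_trigger(viewership_series, threshold_ratio=1.5, window=3):
    """Return the index at which a sudden rise in viewership should
    trigger a real-time call-to-action overlay, or None if no spike
    occurs. `viewership_series` is a list of concurrent-viewer counts
    sampled at regular intervals."""
    for i in range(window, len(viewership_series)):
        baseline = sum(viewership_series[i - window:i]) / window
        if baseline and viewership_series[i] / baseline >= threshold_ratio:
            return i
    return None
```

For example, on the series `[100, 100, 100, 100, 200]` the final sample is double the trailing average, so the overlay would fire at that sample.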
EXAMPLE 3
A video creator is adding interactivity to the video with a call-to-action button. Based on the analytics collected in the analytics module for other videos, the real-time predictive tool suggests that the creator move the call-to-action button to the bottom right corner, as the model has concluded that bottom-right call-to-action buttons on other videos have higher click rates, as illustrated in Fig. 2. Thus, the predictive tool recommends positioning of the call-to-action buttons for increasing click rates.

ADVANTAGES
The invention provides a real-time predictive tool for association of an attribute to a video content using interactive machine learning. The tool provides suggestions to the video creator on fonts, duration, size of the elements, and grid placements of interactive widgets inside a video to get maximum viewership and viewer engagement. The recommendations provided by the tool aid the video creator in curating the video according to the analytical data for improved viewer engagement. The Click Through Rate (CTR) increases because of the data-driven interactivity, thereby providing a tool for easier A/B testing of popular suggestions. A/B testing tools typically compare two versions of a web page or app based on simple changes to drive better outcomes. The invention provides a better alternative to A/B testing, where specific options can be recommended based on historical data without the explicit need to run all the variations.
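The claimed alternative to A/B testing — recommending from historical data instead of running every variation live — could, under the same assumed click/impression layout as before, amount to ranking observed variants by CTR with a minimum-sample guard. This is an illustrative sketch, not the patented method:

```python
def rank_variants(history, min_impressions=100):
    """Rank design variants by historical click-through rate, skipping
    variants with too few impressions to be statistically meaningful —
    a substitute for launching a fresh A/B test of every variation."""
    scored = []
    for variant, stats in history.items():
        if stats["impressions"] >= min_impressions:
            scored.append((stats["clicks"] / stats["impressions"], variant))
    # Highest CTR first; the top entry is the recommended variant.
    return [variant for _, variant in sorted(scored, reverse=True)]
```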
The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.
CLAIMS
WE CLAIM:
1. A method for providing a real-time predictive tool for association of an attribute to improve viewer engagement in a video content, the tool comprising:
capturing, through a data capture module (101), a plurality of engagement-related data from user interactions;
analysing, through a processor (103), the captured data to process and generate a nuanced data;
storing the nuanced data in a data lake (105);
accessing, through a predictive engine (107), the stored data from the data lake (105) for suggestion and inclusion of a plurality of interactive elements in a video, wherein the predictive engine is configured for:
suggesting, through a machine learning module (107a), insertion of the plurality of interactive elements in the video;
evaluating the interactive elements through a video analytics module (107b) for business outcomes of each of the interactive element; and
recommending, through an interactive engagement module (107c), adjustments to fonts, duration, size of the elements, grid placements and interactive widgets to be placed inside a video based on an engagement metrics data stored in the data lake (105); and
rendering the video with the suggested interactive elements embedded within the video content for displaying through a user interface (109),
wherein the interaction of a user (111) in respect of the interactive elements by means of the user interface further triggers real-time updation of the machine learning module (107a) and generation of video content having better engagement capability.
2. The method as claimed in claim 1, wherein the engagement-related data or a metadata is an unstructured data that includes geolocation of the viewer, device details, viewer watch time, click behaviour, pause frequency, and demographic attributes.
3. The method as claimed in claim 1, wherein the unstructured data is normalised by means of an Extract, Transform, Load (ETL) pipeline extract.
4. The method as claimed in claim 1, wherein the processor (103) analyses the normalised data to generate the nuanced data.
5. The method as claimed in claim 1, wherein the predictive engine (107) utilizes historical engagement data from the data lake (105) to update and refine the machine learning model (107a).
6. The method as claimed in claim 1, wherein the interactive elements comprise at least one of quizzes, polls, clickable buttons, call to action button or annotations.
7. The method as claimed in claim 1, wherein the user interface (109) supports real-time feedback display based on viewer interaction.
8. The method as claimed in claim 1, wherein the said tool aids a video creator in generating engaging videos with interactive elements that enables maximum click rate.



Bangalore SUSHMA K C
31-07-2025 (IN/PA/2226)
INTELLOCOPIA CONSULTING LLP
AGENT FOR APPLICANT

Documents

Application Documents

# Name Date
1 202421058235-PROVISIONAL SPECIFICATION [31-07-2024(online)].pdf 2024-07-31
2 202421058235-FORM FOR SMALL ENTITY(FORM-28) [31-07-2024(online)].pdf 2024-07-31
3 202421058235-FORM FOR SMALL ENTITY [31-07-2024(online)].pdf 2024-07-31
4 202421058235-FORM 1 [31-07-2024(online)].pdf 2024-07-31
5 202421058235-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [31-07-2024(online)].pdf 2024-07-31
6 202421058235-EVIDENCE FOR REGISTRATION UNDER SSI [31-07-2024(online)].pdf 2024-07-31
7 202421058235-DRAWINGS [31-07-2024(online)].pdf 2024-07-31
8 202421058235-DECLARATION OF INVENTORSHIP (FORM 5) [31-07-2024(online)].pdf 2024-07-31
9 202421058235-Proof of Right [07-08-2024(online)].pdf 2024-08-07
10 202421058235-FORM-5 [07-08-2024(online)].pdf 2024-08-07
11 202421058235-FORM-26 [07-08-2024(online)].pdf 2024-08-07
12 202421058235-POA [31-07-2025(online)].pdf 2025-07-31
13 202421058235-FORM 13 [31-07-2025(online)].pdf 2025-07-31
14 202421058235-DRAWING [31-07-2025(online)].pdf 2025-07-31
15 202421058235-CORRESPONDENCE-OTHERS [31-07-2025(online)].pdf 2025-07-31
16 202421058235-COMPLETE SPECIFICATION [31-07-2025(online)].pdf 2025-07-31
17 202421058235-AMENDED DOCUMENTS [31-07-2025(online)].pdf 2025-07-31
18 202421058235-MSME CERTIFICATE [25-08-2025(online)].pdf 2025-08-25
19 202421058235-FORM28 [25-08-2025(online)].pdf 2025-08-25
20 202421058235-FORM-9 [25-08-2025(online)].pdf 2025-08-25
21 202421058235-FORM 18A [25-08-2025(online)].pdf 2025-08-25
22 Abstract.jpg 2025-09-04