Abstract: Disclosed herein is a system for predicting the output response of a future event, comprising one or more processors (M2 and M4), and a computer readable storage medium (M3) communicatively coupled with the processors (M2 and M4) and pre-stored with information related to the performance of past events based on a plurality of input parameters, wherein said processors (M4), in communication with the computer readable storage medium, are configured to receive the plurality of input parameters for the future event as provided by a user and categorise the same into a plurality of supported formats, determine a similarity level between the input parameters of the past and future events by comparing their corresponding parametric values, and evaluate the output response based on the similarity level between the input parameters of the past and future events. The disclosed system and method provide accurate prediction of the output reaction and public footfall at an event in the future.
DESC: A System and Method for Predicting the Response of a Future Event
This invention relates to a method and system for predicting the output response of a future event. More particularly, the present disclosure relates to a system and method for predicting the output response of a future event based on similar past events.
Background of the Invention
It has always been a dilemma for event organizers to estimate or understand the output response of an event before its commencement. If it were possible to accurately estimate footfall or obtain other insights defining event success in advance, it would be easier and extremely productive for organizers to efficiently allocate their resources and optimize their utilisation. This would help minimize losses to a great extent and allow the insights to be used for maximizing profit from the event.
For many daily activities, big events and conferences, event organizers need to accurately estimate the number of participants, and it has become equally important to predict the response of the participants to a future event. To effectively solve the problem of predicting event attendance and output response, a prediction system needs to mine context information fully and use it efficiently to provide technical event insights, enabling organizers to take measures to improve public engagement and response. This helps in avoiding a cold start of the event. However, there still exist challenges in accurately predicting event attendance and the output response of the event.
One of the technical reasons behind inaccurate prediction of the output response of any future event is the lack of consideration of uncontrolled external parameters for which no historical data is available to the prediction system. In such cases, the existing systems and methods prove to be very ineffective, as they misguide the user by predicting a wrong output response for the future event.
Marketing communication is one of the important ways to influence audience response. Usually, the best communications are those crafted to gather the most response, which is judged subjectively using a sample audience or expert opinion. However, this can introduce biases such as sampling bias or prior-knowledge bias. To solve this issue, there is a requirement to provide feedback on marketing communication about the expected response, based on a machine learning (ML) model trained on historical data.
Various other solutions have been provided in the existing art, but these solutions still face challenges because of their limited applications and inefficient functioning. It is, therefore, important to develop an alternative system and method for predicting the output response of a future event. There is also a need to provide a system and method which provides an accurate prediction of the footfall of any event and obviates the complexity and challenges of the prior art.
Summary of the Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter. Nor is this summary intended to be used to limit the claimed subject matter’s scope.
Both the foregoing summary and the following detailed description provide examples and are explanatory only. Accordingly, the foregoing summary and the following detailed description should not be considered to be restrictive. Further, features or variations may be provided in addition to those set forth herein. For example, embodiments may be directed to various feature combinations and sub-combinations described in the detailed description.
It is one of the objectives of the present invention to provide a system and method for predicting output response of a future event more accurately and efficiently.
It is one of the objectives of the present invention to provide a system and method for predicting the output response of a future event that utilises already installed CCTV cameras for advanced purposes, such as predicting the output response of any campaign based on footfalls in similar campaigns.
It is one of the objectives of the present invention to provide a system and method for predicting output response of a future event which is capable of optimizing the elements of a marketing communication to maximize the response.
It is one of the objectives of the present invention to provide a system and method which is capable of predicting output response such as likes, comments and footfall at a future marketing event in advance.
As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art that the present disclosure has broad utility and application. As should be understood, any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure.
Accordingly, while embodiments are described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary of the present disclosure and is made merely for the purposes of providing a full and enabling disclosure. The detailed disclosure herein of one or more embodiments is not intended, nor is it to be construed, to limit the scope of patent protection afforded in any claim of a patent issuing herefrom, which scope is to be defined by the claims and the equivalents thereof. It is not intended that the scope of patent protection be defined by reading into any claim a limitation found herein that does not explicitly appear in the claim itself.
Additionally, it is important to note that each term used herein refers to that which an ordinary artisan would understand such term to mean based on the contextual use of such term herein. To the extent that the meaning of a term used herein—as understood by the ordinary artisan based on the contextual use of such term—differs in any way from any particular dictionary definition of such term, it is intended that the meaning of the term as understood by the ordinary artisan should prevail.
Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one,” but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, “or” denotes “at least one of the items,” but does not exclude a plurality of items of the list. Finally, when used herein to join a list of items, “and” denotes “all of the items of the list.”
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While many embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims. The present disclosure contains headers. It should be understood that these headers are used as references and are not to be construed as limiting upon the subject matter disclosed under the header.
In the present invention, there is provided a system and method for predicting the output response of a future event, organised considering multiple event-based and external uncontrolled parameters, wherein each parameter impacts the output response of the event. The developed system correlates the performance-based input parametric data of multiple parameters for the future event with the parametric data of similar past events, and the predicted output response of the future event is obtained based on the historical data collected by the system over a period of time. Further, the same technique can be used for inventory management at vehicle workshops and can help in improving buying or overall financial decisions of the customers/company.
In order to further enhance the accuracy of the output response of the future campaign, the attendance prediction system also considers otherwise ignored parameters, such as climate, weather and country budget, for providing advanced and more accurate insights.
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event.
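The categorization and routing described above can be sketched as follows. The extension-based rules and module names are illustrative assumptions, since the disclosure does not specify how the processors distinguish AV from Non-AV inputs:

```python
# Hypothetical routing of input parameters to processing modules,
# keyed on file extension (an assumed heuristic, not from the disclosure).
AV_EXTENSIONS = {".mp4": "video_processing_module",
                 ".avi": "video_processing_module",
                 ".mp3": "audio_processing_module",
                 ".wav": "audio_processing_module",
                 ".jpg": "image_processing_module",
                 ".png": "image_processing_module"}

def route_input(name):
    """Return (category, module) for a single input parameter."""
    for ext, module in AV_EXTENSIONS.items():
        if name.lower().endswith(ext):
            return "AV", module
    # Everything else (tags, hashtags, follower counts, banner size,
    # location, traffic, ...) is treated as Non-AV text/numeric data.
    return "Non-AV", "text_processing_module"

routes = [route_input(n) for n in ("promo.mp4", "banner.png", "#grandsale")]
```

In a real implementation the categorization would likely rely on the media type of each parameter rather than its filename, but the dispatch structure would be similar.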
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said one or more output parameters indicating the output response of the future event include, but are not limited to, likes, comments and footfall.
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said input parameters include, but are not limited to, audio, video, tags, hashtags, current followers, banner image, banner size, location, distance, area and traffic corresponding to the future event.
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said output response prediction modules comprise a footfall module configured to predict event footfall based on Non-AV data such as banner image, banner size, location, distance, area and traffic corresponding to the future event.
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said video processing module is fed with videos and configured to convert the videos into frames (images) and feed the converted frames into the image processing module.
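Since the disclosure does not fix a frame extraction policy, the conversion of video into frames can be sketched as uniform sampling; the once-per-second rate is an assumption:

```python
def sample_frame_indices(total_frames, fps, every_seconds=1.0):
    """Indices of frames to hand to the image processing module,
    one every `every_seconds` of video (assumed sampling rate)."""
    step = max(1, int(round(fps * every_seconds)))
    return list(range(0, total_frames, step))

# A 5-second clip at 30 fps, sampled once per second.
indices = sample_frame_indices(total_frames=150, fps=30)
```

In practice a library such as OpenCV would read the video and decode the frames at these indices; the sampling logic itself is independent of the decoder.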
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said audio processing module is fed with audio and configured to convert the audio into text and feed the converted text into the text processing module.
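The audio path can be sketched as below. The speech-to-text backend is injected as a callable because the disclosure does not name a particular transcription engine; the stand-in lambda is purely illustrative:

```python
def audio_to_text(audio_clip, transcriber):
    """Convert an audio clip to text with an injected speech-to-text
    backend, so the result can be fed into the text processing module."""
    return transcriber(audio_clip)

# Stand-in transcriber for illustration only.
text = audio_to_text(b"\x00\x01", lambda clip: "grand opening this friday")
```

Keeping the transcriber as a parameter lets the same module wrap any speech recognition engine without changing the downstream text processing.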
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said image processing module is fed with the converted frames from the video processing module and with images provided directly as user input, together with the Non-AV data, to predict likes and comments based on the images for the future event.
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said text processing module is fed with the Non-AV data and with the converted text from the audio processing module and/or text provided directly as user input, to predict likes and comments based on the texts for the future event.
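The text-to-vector step can be sketched with a simple bag-of-words count over a fixed vocabulary; the toy vocabulary and the min-max normalization are assumptions, as the disclosure only states that texts are converted into vectors and combined with numeric values:

```python
import numpy as np

VOCAB = ["sale", "grand", "opening", "free"]   # assumed toy vocabulary

def text_to_vector(text):
    """Bag-of-words counts over VOCAB (a stand-in for a learned embedding)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def normalize(vector, numeric_values):
    """Append numeric parameters and min-max scale the combination to [0, 1]."""
    combined = np.concatenate([vector, np.asarray(numeric_values, dtype=float)])
    lo, hi = combined.min(), combined.max()
    return np.zeros_like(combined) if hi == lo else (combined - lo) / (hi - lo)

features = normalize(text_to_vector("grand grand opening sale"), [4.0])
```

A production system would use a learned word or sentence embedding rather than raw counts, but the shape of the pipeline (vectorize, append numerics, normalize) is the same.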
In accordance with one embodiment of the present invention, there is provided a system for predicting the output response of a future event, comprising one or more processors comprising one or more output response prediction modules configured to receive a plurality of input parameters and process the same for predicting the output response, and a computer readable storage medium communicatively coupled with said one or more processors and trained with online and offline event response affecting data from past events, wherein said one or more output response prediction modules include, but are not limited to, a video processing module, an audio processing module, an image processing module and a text processing module, wherein the one or more processors are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module, audio processing module, image processing module and text processing module according to their data processing configurations/capabilities, wherein said one or more output response prediction modules are configured to extract features from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values, wherein said one or more output response prediction modules are configured to convert the extracted features into texts, convert the texts into vectors, and normalize the combination of the converted vectors and numeric values to derive normalized data, which is further evaluated by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in the computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said processor is configured to determine average predicted likes and comments for the future event based on the input images processed through the image processing module and the input texts processed through the text processing module.
In accordance with any of the above embodiments of the present invention, said future event is an advertisement-based marketing event.
In accordance with another embodiment of the present invention, there is provided a method for predicting the output response of a future event, comprising: receiving, employing one or more processors, a plurality of input parameters, categorizing them into audio-visual (AV) and non-audio-visual (Non-AV) data, and converting them into their corresponding input parametric values; feeding the categorized audio-visual (AV) and non-audio-visual (Non-AV) data into one or more output response prediction modules, such as a video processing module, an audio processing module, an image processing module and a text processing module, according to their data processing configurations/capabilities; extracting features, employing the output response prediction modules, from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as the input parametric values; converting, employing the output response prediction modules, the extracted features into texts, converting the texts into vectors, and normalizing the combination of the converted vectors and numeric values for deriving normalized data; and evaluating the normalized data by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in a computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event.
In accordance with another embodiment of the present invention, there is provided a method for predicting the output response of a future event, comprising: receiving, employing one or more processors, a plurality of input parameters, categorizing them into audio-visual (AV) and non-audio-visual (Non-AV) data, and converting them into their corresponding input parametric values; feeding the categorized audio-visual (AV) and non-audio-visual (Non-AV) data into one or more output response prediction modules, such as a video processing module, an audio processing module, an image processing module and a text processing module, according to their data processing configurations/capabilities; extracting features, employing the output response prediction modules, from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as the input parametric values; converting, employing the output response prediction modules, the extracted features into texts, converting the texts into vectors, and normalizing the combination of the converted vectors and numeric values for deriving normalized data; and evaluating the normalized data by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in a computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said method further comprises predicting future event footfall, employing a footfall module executing the deep learning algorithm and in communication with the computer readable storage medium, based on Non-AV data such as banner image, banner size, location, distance, area and traffic corresponding to the future event.
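The footfall module's Non-AV inputs can be sketched as a fixed-order feature vector fed to a scorer. The field names, units and the linear stand-in scorer (in place of the trained deep learning model) are all assumptions for illustration:

```python
def footfall_features(banner_area_m2, distance_km, catchment_area_km2,
                      daily_traffic):
    """Pack the Non-AV parameters into a fixed-order feature vector."""
    return [banner_area_m2, distance_km, catchment_area_km2, daily_traffic]

def predict_footfall(features, weights=(5.0, -20.0, 2.0, 0.01)):
    """Linear stand-in for the trained deep learning model: larger banners
    and heavier traffic raise footfall, greater distance lowers it."""
    return max(0.0, sum(w * f for w, f in zip(weights, features)))

estimate = predict_footfall(footfall_features(12.0, 2.5, 40.0, 15000))
```

The trained model described in the disclosure would replace `predict_footfall` entirely; only the feature packing is meant to mirror the listed Non-AV parameters.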
In accordance with another embodiment of the present invention, there is provided a method for predicting output response of a future event, comprising receiving, employing one or more processors, a plurality of input parameters for categorizing them into audio-visual (AV) and non-audio-visual (Non-AV) data and converting them into their corresponding input parametric values, feeding the categorized audio-visual (AV) and non-audio-visual (Non-AV) data into one or more output response prediction modules such as a video processing module, an audio processing module, an image processing module and a text processing module according to their data processing configuration/capabilities, extracting features, employing the output response prediction modules, from the audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as the input parametric values, converting, employing the output response prediction modules, the extracted features into texts followed by converting them into vectors and normalizing the combination of the converted vectors and numeric values for deriving normalized data, and evaluating the normalized data by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in a computer readable storage medium, to calculate predicted values corresponding to one or more output parameters indicating the output response of the future event, wherein said method further comprises determining average predicted likes and comments for the future event based on the input images processed through the image processing module and the input texts processed through the text processing module executing the deep learning algorithm.
Brief Description of the Drawings
Figure 1 illustrates an exemplary embodiment of the present invention for data preparation for image for prediction of output response based on the prepared image data.
Figure 2 illustrates the working of the audio processing module.
Figure 3 illustrates the working of the video processing module.
Figure 4 illustrates processing of non-AV data for prediction of output response of a future event.
Figure 5 illustrates the working of the footfall module.
Figure 6 illustrates another exemplary embodiment of the present invention for data preparation using image and non-AV data for the prediction of likes and comments.
Figure 7 illustrates processing of non-AV data and numeric data for prediction of output response of a future event.
Figure 8 illustrates working of a processor for prediction of output response of a future event.
Figure 9 illustrates a flowchart that explains functioning of prediction module of the system.
Figure 10 illustrates storage medium module trained with online and offline data.
Figure 11 illustrates overall layout of the system for predicting output response of a future event.
Detailed Description of the Invention
In accordance with one embodiment of the present invention, there is provided a system for predicting output response of a future event, comprising one or more processors, and a computer readable storage medium communicatively coupled with the processor and pre-stored with information related to performance of past events based on a plurality of input parameters, wherein said processor in communication with the computer readable storage medium is configured to receive the plurality of input parameters for the future event as provided by a user and categorise the same into a plurality of supporting formats, determine a similarity level between the input parameters of past and future events by comparing their corresponding parametric values, and evaluate the output response based on the similarity level between the input parameters of past and future events.
In accordance with another embodiment of the present invention, there is provided a system for predicting output response of a future event, comprising one or more processors, and a computer readable storage medium communicatively coupled with the processor and pre-stored with information related to performance of past events based on a plurality of input parameters, wherein said processor in communication with the computer readable storage medium is configured to receive the plurality of input parameters for the future event as provided by a user and categorise the same into a plurality of supporting formats, determine a similarity level between the input parameters of past and future events by comparing their corresponding parametric values, and evaluate the output response based on the similarity level between the input parameters of past and future events, wherein said input parameters are event-based marketing communication attributes whose parametric values define the event performance to be expected.
In accordance with one embodiment of the present invention, there is provided a system for predicting output response of a future event, comprising one or more processors, and a computer readable storage medium communicatively coupled with the processor and pre-stored with information related to performance of past events based on a plurality of input parameters, wherein said processor in communication with the computer readable storage medium is configured to receive the plurality of input parameters for the future event as provided by a user and categorise the same into a plurality of supporting formats, determine a similarity level between the input parameters of past and future events by comparing their corresponding parametric values, and evaluate the output response based on the similarity level between the input parameters of past and future events, wherein said input parameters are external factors including a list of conflicting events, weather on the event day, variation in the product's demand, etc.
In accordance with one embodiment of the present invention, there is provided a system for predicting output response of a future event, comprising one or more processors, and a computer readable storage medium communicatively coupled with the processor and pre-stored with information related to performance of past events based on a plurality of input parameters, wherein said processor in communication with the computer readable storage medium is configured to receive the plurality of input parameters for the future event as provided by a user and categorise the same into a plurality of supporting formats, determine a similarity level between the input parameters of past and future events by comparing their corresponding parametric values, and evaluate the output response based on the similarity level between the input parameters of past and future events, wherein said output response comprises predicted attendance at the future event based on the determined similarity level in terms of people attending the past events.
In accordance with one embodiment of the present invention, there is provided a system for predicting output response of a future event, comprising one or more processors, and a computer readable storage medium communicatively coupled with the processor and pre-stored with information related to performance of past events based on a plurality of input parameters, wherein said processor in communication with the computer readable storage medium is configured to receive the plurality of input parameters for the future event as provided by a user and categorise the same into a plurality of supporting formats, determine a similarity level between the input parameters of past and future events by comparing their corresponding parametric values, and evaluate the output response based on the similarity level between the input parameters of past and future events, wherein said event-based marketing communication attributes comprise speaking language compatibility, product reputation in the market, presence of a celebrity at the event, occurrence of festivals during the event, content text (translated to English), content color histogram, content dimensions (length x breadth for image, duration for audio/video), audio clip, video, display lat-long (for OOH media), publish times for audio and video, and reply modes mentioned in the communication.
In accordance with one of the above embodiments of the present invention, wherein the computer readable storage medium is pre-stored with historic input and output data based on the input parametric values and their corresponding output responses.
In accordance with one of the above embodiments of the present invention, wherein said processor comprises a prediction module configured to predict the output response of the future event based on the historic input and output data.
In accordance with one of the above embodiments of the present invention, wherein said prediction module is configured to operate on Natural Language Processing, Computer Vision and Random Forest.
In accordance with further embodiment, the prediction module takes input parametric values in various formats such as texts, speech, images, videos or numeric data, and is configured to provide output response in the form of number of expected event calls, footfalls and social media response with the help of processing of input parametric values using Natural Language Processing, Computer Vision and Random Forest.
In accordance with one embodiment of the present invention, there is provided a system for predicting response of a marketing communication, wherein the response may be in terms of a call back, a walk-in at an experience centre, online visits etc., eventually leading to sales, and the communication can be in the form of a WhatsApp image, website banner, outdoor hoarding, radio jingle, TV commercial, YouTube advertisement, etc.
In accordance with another embodiment of the present invention, there is provided a system for predicting response of a marketing communication, wherein the response may be in terms of a call back, a walk-in at an experience centre, online visits etc., eventually leading to sales, and the communication can be in the form of a WhatsApp image, website banner, outdoor hoarding, radio jingle, TV commercial, YouTube advertisement, etc., wherein historical data stored on the system based on the past events may suggest that a hoarding with vibrant colors gets a better response in one region while the same hoarding with the same message but mellow colors gets a better response in another region.
In accordance with another embodiment of the present invention, there is provided a system for predicting response of a marketing communication, wherein the audience of a particular region shows a better response to vernacular communication and a celebrity actor, while another region prefers a celebrity associated with social causes, wherein such information is used to guide future communications for getting the maximum response in both regions.
In accordance with another embodiment of the present invention, there is provided a system for predicting response of a marketing communication, wherein marketing parameters such as content form a major factor for determining the output response of an event, wherein sub-parameters of the marketing parameters include, but are not limited to, content color histogram (% of red, blue and green colors), content dimensions (length x breadth of the WhatsApp image, hoarding or website banner for images; duration of the clip or jingle for audio/video), the audio clip used for the radio clip, the video used for the TV/online advertisement, display lat-long (for OOH media), i.e. the latitude and longitude of the location where hoardings were placed, publication times for audio and video, i.e. the time slots when the radio jingle or TVC is published, and reply modes mentioned in the communication (e.g. a WhatsApp number or website QR code provided on the hoarding).
In the Figures, “CONVOLUTIONAL NEURAL NETWORK” is referred to as “YOLO”.
In accordance with an exemplary embodiment, referring to Figure 1, a flow for predicting the response for an image-based marketing communication is explained, wherein the image (7), which may be of an advertisement in a newspaper, website, social media, roadside hoarding etc., is passed to a CONVOLUTIONAL NEURAL NETWORK model (9) for extracting features, wherein said CONVOLUTIONAL NEURAL NETWORK is a readily available image processing deep learning artificial intelligence model capable of analysing an image using convolutional algorithms and providing as output whether the image contains specific features and the location of those features in the image, wherein in Figure 1, two image-based features, including images of a bike and a person, have been extracted and, in parallel, non-audio-visual (Non-AV) information (8) is fed to the CONVOLUTIONAL NEURAL NETWORK. In this exemplary embodiment, the image may be published on social media (Twitter or X, Instagram etc.) with hashtags and so on, which gets further split into non-numeric data (8.1) and numeric data (8.2), wherein the non-numeric data is the text content such as hashtags, while the numeric data is the current follower count for the social media handle, or the number of days for which response prediction is required.
Further, the non-numeric data from 8.1 and the feature data from 9.1 are converted to numeric data by encoding them to represent text as numbers. A brief description of the processing of non-AV data is provided in Figure 4.
Once the non-numeric data is converted to numeric data, it is combined with the numeric data in 8.2 and passed to a normalizing module (11), wherein normalizing is typically used in Machine Learning to help the underlying calculations work faster and give a stable solution, wherein a simpler form of normalization is subtracting the smallest number from each number and dividing the result by the difference between the largest and smallest number. This enables all numbers to lie between 0 and 1. After normalisation (11), the data is sent to the prediction module (12).
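The min-max normalization performed by the normalizing module (11) can be sketched in a few lines. This is an illustrative implementation of the simple technique described above, not the module's actual code, and the function name is an assumption:

```python
def min_max_normalize(values):
    """Scale a list of numbers into the 0-to-1 range: subtract the smallest
    number from each number and divide the result by the difference between
    the largest and smallest number."""
    lo, hi = min(values), max(values)
    if hi == lo:  # all values identical; avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

For example, the follower counts [2, 4, 6] normalize to [0.0, 0.5, 1.0], so every feature contributes on a comparable scale to the prediction module (12).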
Figure 2 shows the functioning of the audio processing module as to how an audio communication is analysed for predicting the response, wherein the audio input will have an audio file, and Non-AV (Non-Audio-Visual) data comprising numeric data such as the number of followers of the social media handle where the audio will be posted, and textual data such as hashtags, wherein the audio file is analysed through a wav2vec model (4) to transcribe the audio content and generate text data (4.1), wherein wav2vec is a readily available deep learning model which transcribes audio files, wherein the transcription models do not require any re-training on context specific data and can be used as a plug-and-play model, wherein the text data along with the Non-AV data is sent to the text module (8) for further analysis and prediction.
Figure 3 shows the functioning of a video processing module which is used to analyse marketing communications in the form of videos. The video may be posted on social media handles or shown as a TV commercial or on an OTT platform. The communication will contain AV data (the video) and non-AV data comprising numerical data such as follower count and non-numerical data such as hashtags. The AV data (5) is sent to the CONVOLUTIONAL NEURAL NETWORK model (6) for splitting the video into images. Once the video is converted into images, it is sent to the image processing module for further processing. The images (6.1) and the non-AV data (8) are sent to the image processing section (7) and the non-AV processing section (8), respectively, of the image processing module in Figure 6.
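The disclosure attributes the video-to-images split to the CONVOLUTIONAL NEURAL NETWORK model (6). As a simplified, hypothetical sketch of one way such a split may be scheduled, frames can be sampled at a fixed cadence before being handed to the image processing module; the cadence, default rate, and function name below are illustrative assumptions, not part of the disclosure:

```python
def sample_frame_indices(total_frames, fps, frames_per_second=1):
    """Pick which frames of a video to keep as still images.

    Sampling one frame per second of footage is an illustrative choice:
    a 25 fps video keeps every 25th frame, giving the image processing
    module a manageable set of representative stills."""
    step = max(1, int(fps // frames_per_second))
    return list(range(0, total_frames, step))
```

A 100-frame clip at 25 fps would thus yield four stills (frames 0, 25, 50 and 75) for downstream feature extraction.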
Figure 4 explains data preparation for non-AV data, wherein the non-AV (non-audio-visual) data consists of two parts – numeric data and non-numeric data, wherein the numeric data are typically information such as number of followers of social media handle, or dimensions of image, or lat-long of the place where hoarding will be placed. The other format of non-AV data is text such as hashtags used on social media, or contents of the tweet that will be published, or any caption that might be written along with a video that gets published online.
Further, the text data gets converted to numeric data by using word embeddings, which represent each word by a vector in n-dimensional space. For example, an embedding might use a 400-dimensional vector to represent each word, so each word will get converted to a series of 400 numbers. Alternatively, TF-IDF, which stands for Term Frequency – Inverse Document Frequency, gives, for each word, a pair of numbers (the term frequency and the inverse document frequency) whose product denotes the importance of that word in the document, and these numbers are used to represent the words. The vector thus obtained by converting text to numbers is merged with the numerical information, with each numerical data point being appended at the end of the TF-IDF vector, to give a combined vector containing all the non-AV information. This vector then gets normalized. Normalization is a process of reducing the variation in data to a set limit so that the calculation algorithms that use this data will work faster and will not encounter any singularity such as division by zero. Normalization is done through various techniques. A well-known technique to normalize a set of numbers is to subtract the lowest number from each number and divide the result by the difference between the lowest and highest number. The resultant set of numbers will all lie between 0 and 1. This normalized data is then sent to the prediction module, such as component 12 in Figure 5.
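The TF-IDF conversion and the merging of the text vector with the numeric data described above can be sketched as follows. This is a minimal illustrative TF-IDF (tf = term count / document length, idf = log(N / document frequency)), not the disclosed module's implementation, and the function names are assumptions:

```python
import math

def tfidf_vector(doc_tokens, corpus):
    """Represent one tokenised document as a TF-IDF vector over the
    vocabulary of the corpus (the set of all past-event documents)."""
    vocab = sorted({t for d in corpus for t in d})
    n = len(corpus)
    vec = []
    for term in vocab:
        tf = doc_tokens.count(term) / len(doc_tokens)          # term frequency
        df = sum(1 for d in corpus if term in d)               # document frequency
        idf = math.log(n / df) if df else 0.0                  # inverse document frequency
        vec.append(tf * idf)
    return vec

def combine_non_av(text_vector, numeric_values):
    """Append each numerical data point at the end of the TF-IDF vector,
    as described above, giving one combined non-AV vector."""
    return text_vector + [float(v) for v in numeric_values]
```

The combined vector would then pass through the normalization step before reaching the prediction module (12).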
Figure 5 explains the functioning of the footfall prediction module that predicts footfall at an experience centre (shop/showroom) based on communication such as hoardings put up in the nearby area, wherein the hoarding image and other data pertaining to the hoarding, such as the nearest dealership, traffic density etc., are fed into the module to process each data type separately, wherein the image data is sent to the CONVOLUTIONAL NEURAL NETWORK model for extracting features as described in Figure 1, and the Non-AV data is split into numeric, textual, and categorical data. Numeric and textual data is analysed as per the description in Figure 4. An additional data type – categorical – is shown in this module. Categorical data is in the form of a selection out of a set. An example of categorical data is traffic density being provided as a subjective measure in terms of high/medium/low. This is neither text data nor numeric data. It is handled by converting it to either a scale of 1/2/3 or by one-hot encoding using 0/1 for two variables – "is high" and "is low". High traffic is shown as "is high" = 1, "is low" = 0; low traffic is shown as "is high" = 0, "is low" = 1; while medium traffic will be shown as "is high" = 0, "is low" = 0. Once all data types are converted to numeric, the data is normalized and sent to the prediction module. A description of the prediction module is provided in Figure 9.
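The one-hot encoding of traffic density described above maps directly to code. This is a sketch; the key names are illustrative, as the disclosure does not specify field names:

```python
def encode_traffic_density(level):
    """One-hot encode the categorical traffic density using the two
    indicator variables described above: "is high" and "is low".
    Medium traffic is represented by both indicators being 0."""
    return {
        "high":   {"is_high": 1, "is_low": 0},
        "medium": {"is_high": 0, "is_low": 0},
        "low":    {"is_high": 0, "is_low": 1},
    }[level.lower()]
```

Using two indicator variables for three levels avoids imposing an artificial numeric ordering, while the 1/2/3 scale alternative mentioned above preserves the natural low-to-high order instead.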
Figure 6 explains the image processing module which analyses an image-based communication on a digital medium and predicts the response in terms of "likes" and "comments" on the post. These may be replaced by other metrics of social media such as views, shares, reactions etc. The input data is prepared based on its data type, compiled and fed to the prediction unit, wherein the function of the data preparation section is described in Figure 1 and Figure 4, while the prediction section is described in Figure 9.
Figure 7 explains the text processing module to predict the response for any textual content posted on a digital medium. The input data is processed, compiled and fed to the prediction unit. As all the input data in this case is of the non-AV (non-audio-visual) type, the data preparation part is covered entirely in Figure 4, while the prediction unit executing the random forest algorithm (12) is explained in Figure 9.
Figure 8 shows the working of the processor having the various modules, including the audio, video, text, and image modules, that work together to give a prediction of the response to any marketing communication for a future event, wherein the response to any event can be digital (likes, comments, shares, views, reactions) or physical (footfalls at an experience centre), while the marketing communication can be an image, a radio jingle (audio), a TV commercial (video) or a hoarding in a public space. The user input is taken in two steps – all the attributes of the communication (1) and the detail of the variable to be predicted (2). Based on the inputs in (1) and (2), the appropriate module is used. For example: if the input is an image and the number of likes is to be predicted, the image module is used, while if footfalls are to be predicted, the footfall module is used.
Further, the figure highlights how modules are reused for various types of input by processing them in a particular way.
There are three main processing modules of the processor – the image processing module, the text processing module, and the footfall prediction module.
The video processing module converts the video to images so it can be analysed by the image processing module (as shown in Figure 3), while the audio processing module converts the audio to text so it can be analysed by the text processing module (as shown in Figure 2).
All the three main modules (image, text, footfall) have two components – a data preparation unit and a prediction unit. Data preparation converts all the various data types, such as image, text and categorical, to numeric, while the prediction unit predicts the target variable (likes, comments, footfalls etc.) by analysing the input using a machine learning algorithm such as the Random Forest algorithm.
Data preparation is described in Figures 1, 4, and 5, while prediction is described in Figure 9.
Figure 9 explains the functioning of the prediction unit that operates on the Random Forest algorithm, wherein the figure describes how the prediction unit predicts the target variables by using data received from the data preparation unit. Random forest is one of the models used in prediction as it supports non-linear relations between the input and target variables; however, any other machine learning or deep learning model which gives accurate results may be used.
The Random Forest algorithm is an ensemble technique composed of multiple 'decision trees' making predictions based on the input. The final output is taken as the average of all the trees' outputs (in the case of regression, when a particular value is to be predicted) or as a majority vote (in the case of classification, such as yes/no).
Each decision tree gives a prediction for the number of likes/comments/footfalls based on the input parameters. An example of decision logic for predicting likes for an image could be:
---IF image size is larger than 100x100 pixels, AND IF a bike is present, AND IF the festival is <2 days away, THEN likes = 50
---IF image size is larger than 100x100 pixels, AND IF a bike is present, AND IF the festival is >2 days away, THEN likes = 10.
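The two example rules above correspond to a single decision path, which can be written out as ordinary conditional logic. The thresholds and output values are taken from the example itself, not from a trained model, and the fallback value for uncovered inputs is an assumption:

```python
def predict_likes(image_width, image_height, bike_present, days_to_festival):
    """Illustrative single decision path mirroring the two example rules
    above; a trained tree would derive these thresholds from data."""
    if image_width > 100 and image_height > 100 and bike_present:
        return 50 if days_to_festival < 2 else 10
    return 0  # fallback for inputs the example rules do not cover (assumption)
```

In a real forest, many such trees with different learned thresholds each produce a prediction, and their outputs are averaged.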
The decision points for each variable, such as the image size and festival recency, are derived by analysing the training data set. The algorithm tries to split each variable at such decision points to get the most accurate granular prediction in each branch.
While training the model, each tree is provided with a random sample from the training set, with a random subset of the input variables. If the training data is arranged in a table with columns representing the variable names and rows representing individual readings, then each tree is passed a random selection of rows and columns. These two randomising activities are done to ensure the model can predict correctly over a vast range of inputs.
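The two randomising activities described above (random rows per tree, random columns per tree) and the averaging of tree outputs can be sketched in plain Python. The sampling fractions and function names below are illustrative assumptions, not values from the disclosure:

```python
import random

def bootstrap_samples(rows, n_trees, row_frac=0.8, col_frac=0.6, seed=42):
    """For each tree, draw a random selection of rows (with replacement)
    and a random subset of columns, as described above."""
    rng = random.Random(seed)
    n_cols = len(rows[0])
    samples = []
    for _ in range(n_trees):
        picked_rows = [rng.choice(rows) for _ in range(int(len(rows) * row_frac))]
        picked_cols = rng.sample(range(n_cols), max(1, int(n_cols * col_frac)))
        samples.append((picked_rows, picked_cols))
    return samples

def forest_predict(tree_outputs):
    """Combine individual tree predictions: the average, for the regression
    case (predicting a count of likes, comments or footfalls)."""
    return sum(tree_outputs) / len(tree_outputs)
```

For instance, trees predicting 50, 10 and 30 likes yield a forest prediction of 30.0; in practice a library implementation such as scikit-learn's RandomForestRegressor performs both steps internally.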
Process for gathering training data is described in Figure 10.
Figure 10 explains Training Data Extractor Module in computer readable storage medium that generates training data that needs to be fed to the prediction module so that it configures its internal parameters to give an accurate prediction of the expected response of any future event.
There are two types of data streams that get processed – Online and Offline.
For Online data, a web scraper (T2) is used to fetch data from Social media websites. This data is of two types – Input data (T4) such as the image uploaded, or the tweet written, or video uploaded, and the output data (T5) such as the likes, comments, views etc for the social media posts.
Web Scraper is a program which can browse specified websites, scan the data, and download it for use in other activities. These are readily available as open-source libraries. Downloaded data is then analysed based on the data type. Image and other audio-visual data processing (T18) uses the process described in Figure 1, while non-AV data preparation uses process described in Figure 4.
This data along with the output data is fed to prediction model (T7) in training mode so it can build decision trees as described in Figure 9.
Further, offline data requires mostly manual data input as it cannot be scanned from any website. As mentioned in Figure 4 and Figure 5, the image data (T10) is converted to text features (T10.2) using the CONVOLUTIONAL NEURAL NETWORK model. These text features, along with the text data associated with the hoarding (T9), are converted to numeric vector data (T16.1) using the TF-IDF model. This vector, along with the other numeric input data (T11) and the encoded categorical data (T8.1), is combined as offline input data (T4) and normalized (T6).
Output data, in the case of offline fetching, requires analysis of the CCTV footage at the requested site in order to analyse the video feed and count the number of individuals visiting the site. The CCTV footage (T12) is fed to a CONVOLUTIONAL NEURAL NETWORK with a tracker module (T13) which has a tracking mechanism to track unique persons identified in the video feed. This count (T14) is fed as offline output data (T5) to the prediction module (T7) so it can train its parameters for accurate prediction.
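The counting step performed on the tracker output can be sketched as follows. It assumes, as described above, that the tracker module (T13) assigns a stable ID to each unique person across frames; the function name and input shape are illustrative assumptions:

```python
def count_unique_visitors(frame_detections):
    """Count unique persons across CCTV frames, given per-frame lists of
    track IDs emitted by the tracker. Because the tracker re-uses the same
    ID for the same person in successive frames, the footfall count is
    simply the number of distinct IDs seen over the footage."""
    seen = set()
    for ids in frame_detections:
        seen.update(ids)
    return len(seen)
```

For example, three frames containing track IDs [1, 2], [2, 3] and [3] represent three distinct visitors, which becomes the offline output datum (T14/T5).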
Figure 11 explains the overview of the flow of information and the aggregates involved. Historical data of the marketing communications that have already been sent out (M1) is collected using the approach shown in Figure 10 and processed in M2 to generate trained Machine Learning models such as Random Forest. This training process updates the parameters of the Machine Learning model so that it understands the relationship between the input (marketing communication) and the output (response) for all the historical cases. This trained model is stored in Storage (M3). Whenever a user passes new input data (M5), the processors (M4) analyse the input and invoke the correct Machine Learning model from storage (M3) to generate the expected response (M6) for the supplied marketing communication.
Furthermore, it is important to note that, as used herein, "a" and "an" each generally denotes "at least one," but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, "or" and "/" each denote "at least one of the items," but do not exclude a plurality of items of the list. Finally, when used herein to join a list of items, "and" denotes "all of the items of the list."
It is an objective of the present invention to provide a system and method for predicting the output response of a future event, which provides comprehensive insights to event organizers for efficiently distributing and allocating their resources.
Generally, consistent with embodiments of the disclosure, program modules may include routines, programs, components, data structures, and other types of structures that may perform particular tasks or that may implement particular abstract data types. Moreover, embodiments of the disclosure may be practiced with other computer system configurations, including hand-held devices, general-purpose graphics processor-based systems, multiprocessor systems, microprocessor-based or programmable consumer electronics, application-specific integrated circuit-based electronics, minicomputers, mainframe computers, and the like. Embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Furthermore, embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Embodiments of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the disclosure may be practiced within a general-purpose computer or in any other circuits or systems.
Embodiments of the disclosure, for example, may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer-readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process. Accordingly, the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). In other words, embodiments of the present disclosure may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. A computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. As more specific examples (a non-exhaustive list), the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM). Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure. The functions/acts noted in the blocks may occur out of the order as shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
While certain embodiments of the disclosure have been described, other embodiments may exist. Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, solid-state storage (e.g., USB drive), or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the disclosed methods' stages may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure.
CLAIMS
We claim:
1. A system for predicting output response of a future event, comprising:
one or more processors (M2 and M4) comprising one or more output response prediction modules (A1, A2…A5) configured to receive a plurality of input parameters and process the same for predicting the output response;
a computer readable storage medium (M3) communicatively coupled with said one or more processors (M2 and M4) and trained with online and offline event response affecting data from the past events;
wherein said one or more output response prediction modules (A1, A2…A5) include, but are not limited to, a video processing module (A1), an audio processing module (A4), an image processing module (A2) and a text processing module (A3),
wherein the one or more processors (M2 and M4) are configured to categorize the input parameters into audio-visual (AV) and non-audio-visual (Non-AV) data and feed the same into the video processing module (A1), audio processing module (A4), image processing module (A2) and text processing module (A3) according to their data processing configuration/capabilities;
wherein said one or more output response prediction modules (A1, A2…A5) are configured to extract features from audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as input parametric values;
wherein said one or more output response prediction modules (A1, A2…A5) are configured to convert extracted features into texts, followed by converting the texts into vectors and normalizing the combination of converted vectors and numeric values to derive normalized data, further evaluated by way of a deep learning algorithm based on the online and offline event response affecting data from the past events stored in the computer readable storage medium (M3), to calculate predicted values corresponding to one or more output parameters indicating output response of the future event.
2. The system as claimed in claim 1, wherein said one or more output parameters indicating output response of the future event include, but are not limited to, likes, comments and footfall.
3. The system as claimed in claim 1, wherein said input parameters include, but are not limited to, audio, video, tags, hashtags, current followers, banner image, banner size, location, distance, area and traffic corresponding to the future event.
4. The system as claimed in claim 1, wherein said one or more output response prediction modules (A1, A2…A5) comprise a footfall module (A5) configured to predict event footfall based on non-AV data such as banner image, banner size, location, distance, area and traffic corresponding to the future event.
5. The system as claimed in claim 1, wherein said video processing module (A1) is fed with videos and configured to convert the videos into frames (images) and feed the converted frames into the image processing module (A2).
6. The system as claimed in claim 1, wherein said audio processing module (A4) is fed with audios and configured to convert the audios into text and feed the converted text into the text processing module (A3).
7. The system as claimed in claim 1, wherein said image processing module (A2) is fed with the converted images from the video processing module (A1), with images provided directly as user input, and with the non-AV data, to predict likes and comments based on the images, for the future event.
8. The system as claimed in claim 1, wherein said text processing module (A3) is fed with the non-AV data and with the converted text from the audio processing module (A4) and/or text provided directly as user input, to predict likes and comments based on the texts, for the future event.
9. The system as claimed in claim 1, wherein said one or more processors (M2 and M4) are configured to determine average predicted likes and comments for the future events based on the input images processed through the image processing module (A2) and input texts processed through the text processing module (A3).
10. The system as claimed in claim 1, wherein said future event is an advertisement-based marketing event.
11. A method for predicting output response of a future event, comprising:
receiving, employing one or more processors (M2 and M4), a plurality of input parameters for categorizing them into audio-visual (AV) and non-audio-visual (Non-AV) data and converting them into their corresponding input parametric values;
feeding the categorized audio-visual (AV) and non-audio-visual (Non-AV) data into one or more output response prediction modules (A1, A2,…A5) such as a video processing module (A1), an audio processing module (A4), an image processing module (A2) and a text processing module (A3) according to their data processing configuration/capabilities;
extracting features, employing the output response prediction modules (A1, A2,…A5), from audio-visual (AV) and non-audio-visual (Non-AV) data corresponding to the videos, images, texts and numeric values received as the input parametric values;
converting, employing the output response prediction modules (A1, A2,…A5), the extracted features into texts, followed by converting the texts into vectors and normalizing the combination of converted vectors and numeric values for deriving normalized data;
evaluating the normalized data, by way of a deep learning algorithm, based on the online and offline event response affecting data from the past events stored in a computer readable storage medium (M3), to calculate predicted values corresponding to one or more output parameters indicating output response of the future event.
12. The method as claimed in claim 11, wherein said method further comprises predicting future event footfall, employing a footfall module executing a deep learning algorithm and in communication with the computer readable storage medium (M3), based on non-AV data such as banner image, banner size, location, distance, area and traffic corresponding to the future event.
13. The method as claimed in claim 11, wherein said method further comprises determining average predicted likes and comments for the future events based on the input images processed through the image processing module (A2) and input texts processed through the text processing module (A3) executing a deep learning algorithm.
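The claimed method steps (categorizing inputs into AV and Non-AV data, routing them to processing modules, reducing AV inputs to text, vectorizing, normalizing, and scoring) can be illustrated with a minimal sketch. All function names below are hypothetical, and the character-sum "vectorizer" and averaged score are toy stand-ins for the trained deep learning models recited in the claims; this is not the patented implementation.

```python
def categorize(params):
    """Split raw input parameters into AV and Non-AV groups (claim 11, step 1)."""
    av_keys = {"video", "audio", "image"}
    av = {k: v for k, v in params.items() if k in av_keys}
    non_av = {k: v for k, v in params.items() if k not in av_keys}
    return av, non_av

def to_text(av):
    """Stand-in for the video/audio/image modules (A1, A2, A4): each AV
    input is reduced to a short text description."""
    return [f"{k}:{v}" for k, v in av.items()]

def vectorize(texts):
    """Toy text-to-vector step (character-code sum per text); a real system
    would use a learned embedding."""
    return [sum(ord(c) for c in t) for t in texts]

def normalize(values):
    """Min-max normalize the combined vector and numeric values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def predict_response(params):
    """End-to-end sketch: returns one illustrative score in place of the
    claimed deep-learning prediction of likes, comments and footfall."""
    av, non_av = categorize(params)
    vec = vectorize(to_text(av))
    numeric = [v for v in non_av.values() if isinstance(v, (int, float))]
    normalized = normalize(vec + numeric)
    return sum(normalized) / len(normalized)

score = predict_response({
    "video": "teaser.mp4",
    "image": "banner.png",
    "current_followers": 1200,
    "distance": 5,
})
```

The routing step mirrors claim 1's "data processing configuration/capabilities": only keys named as AV types reach the AV path, while everything else flows to the numeric/text path before the combined normalization.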
| # | Name | Date |
|---|---|---|
| 1 | 202321040754-STATEMENT OF UNDERTAKING (FORM 3) [15-06-2023(online)].pdf | 2023-06-15 |
| 2 | 202321040754-PROVISIONAL SPECIFICATION [15-06-2023(online)].pdf | 2023-06-15 |
| 3 | 202321040754-POWER OF AUTHORITY [15-06-2023(online)].pdf | 2023-06-15 |
| 4 | 202321040754-FORM FOR SMALL ENTITY(FORM-28) [15-06-2023(online)].pdf | 2023-06-15 |
| 5 | 202321040754-FORM FOR SMALL ENTITY [15-06-2023(online)].pdf | 2023-06-15 |
| 6 | 202321040754-FORM 1 [15-06-2023(online)].pdf | 2023-06-15 |
| 7 | 202321040754-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-06-2023(online)].pdf | 2023-06-15 |
| 8 | 202321040754-EVIDENCE FOR REGISTRATION UNDER SSI [15-06-2023(online)].pdf | 2023-06-15 |
| 9 | 202321040754-DECLARATION OF INVENTORSHIP (FORM 5) [15-06-2023(online)].pdf | 2023-06-15 |
| 10 | 202321040754-FORM-26 [16-06-2023(online)].pdf | 2023-06-16 |
| 11 | 202321040754-Proof of Right [03-07-2023(online)].pdf | 2023-07-03 |
| 12 | 202321040754-FORM-8 [03-08-2023(online)].pdf | 2023-08-03 |
| 13 | 202321040754-DRAWING [04-06-2024(online)].pdf | 2024-06-04 |
| 14 | 202321040754-COMPLETE SPECIFICATION [04-06-2024(online)].pdf | 2024-06-04 |
| 15 | 202321040754-RELEVANT DOCUMENTS [08-08-2024(online)].pdf | 2024-08-08 |
| 16 | 202321040754-POA [08-08-2024(online)].pdf | 2024-08-08 |
| 17 | 202321040754-FORM-26 [08-08-2024(online)].pdf | 2024-08-08 |
| 18 | 202321040754-FORM 13 [08-08-2024(online)].pdf | 2024-08-08 |
| 19 | 202321040754-PA [26-02-2025(online)].pdf | 2025-02-26 |
| 20 | 202321040754-FORM28 [26-02-2025(online)].pdf | 2025-02-26 |
| 21 | 202321040754-ASSIGNMENT DOCUMENTS [26-02-2025(online)].pdf | 2025-02-26 |
| 22 | 202321040754-8(i)-Substitution-Change Of Applicant - Form 6 [26-02-2025(online)].pdf | 2025-02-26 |