
Method And System For Automatically Rendering An Advertisement Image Over A Media Content

Abstract: Nowadays, advertisements are placed in videos in ways that may be distracting and, at the same time, may disturb the aesthetic appeal of both the video and the advertisement. Stylized logo rendering requires a specific skill set and consumes considerable time and effort. A method and system for automatically rendering an advertisement image over a media content such as a video file is provided. The method and system analyse the input media content and identify the regions appropriate for stylized ad placement. Such regions are selected on the basis of multiple factors, viz. amount of motion, context, aesthetics, area available for ad placement etc. Once such regions are selected, candidate ads are determined on the basis of context and aesthetic considerations. These ads are then rendered over the video automatically while maintaining the aesthetic appeal of the media content. * To be published with FIG.1


Patent Information

Application #
Filing Date
28 December 2018
Publication Number
27/2020
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
kcopatents@khaitanco.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-06-27
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai 400021 Maharashtra, India

Inventors

1. PEDANEKAR, Niranjan
Tata Consultancy Services Limited Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013 Maharashtra, India
2. SIVAPRASAD, Sarath
Tata Consultancy Services Limited Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013 Maharashtra, India
3. SAXENA, Rohit
Tata Consultancy Services Limited Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013 Maharashtra, India
4. AGRAWAL, Rishabh
Tata Consultancy Services Limited Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013 Maharashtra, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
TITLE OF THE INVENTION
METHOD AND SYSTEM FOR AUTOMATICALLY RENDERING AN ADVERTISEMENT IMAGE OVER A MEDIA CONTENT
APPLICANT
Tata Consultancy Services Limited, A company Incorporated in India under the Companies Act, 1956 of Nirmal Building, 9th floor, Nariman point, Mumbai 400021, Maharashtra, India
PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD
[001] The embodiments herein generally relate to the field of video processing. More particularly, but not specifically, the invention provides a method and system for automatically rendering an advertisement image over a media content.
BACKGROUND
[002] The quantity of videos on the internet is increasing rapidly with the proliferation of digital capture devices and growth of multiple video-sharing sites. In addition, a fast and consistently growing online advertising market has been witnessed in recent years. Motivated by the huge business opportunities and the advantages of the video form of information representation, video advertising, which incorporates advertisements into an online video, has become popular. Video advertisements typically have a greater impact on viewers than traditional online text-based advertisements.
[003] While watching TV or a video through video sharing sites, viewers often encounter advertisements (ads). These could be logos of channels or programs, static or animated banner ads which appear as an overlay on the video that is playing, or in-line video ads that may interrupt the current video. Viewers often find in-line ads distracting and overlay ads relatively acceptable. Even in the case of overlay ads, the ads tend to disturb the watching experience. They could be too large, too different in color and brightness, or too animated, so that they end up distracting the viewer's attention. Though the point of an advertisement is to catch the viewer's attention, it need not disturb her aesthetic experience.
[004] In general, an ad should command attention, but not at the cost of the visual aesthetics of the video being watched. Therefore, there is a need for an ad experience that engages attention and yet feels like a part of the visual experience of the entertainment media.
[005] Entertainment production houses often display their logos on

movies, trailers and telecasts. Some production houses are known to change their logos to suit the style of the entertainment content for promotional purposes. The logos can be changed based on various dimensions such as color, texture, special effects, artwork etc. Such logos are typically a part of an animated sequence. But even as a static image, they communicate about the movie style and content. The stylized logo renderings are created by skilled graphic designers and animators based on the original logo. Typically, the logo creation process consumes time and effort.
[006] For stylizing logos using color, a representative of the movie's color style is needed. An entire movie typically employs a wide variety of colors. One can choose dominant colors in a movie from a palette generated by color analysis of the movie. But this takes computational effort in terms of processing the movie frame by frame, or identifying key frames and performing color analysis.
SUMMARY
[007] The following presents a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the embodiments. Its sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.
[008] In view of the foregoing, an embodiment herein provides a system for rendering an advertisement image over a video file. The system comprises an input module, a user interface, a memory and a processor in communication with the memory. The input module provides the video file of a fixed duration as an input. The user interface selects a time duration in the input for displaying the advertisement image based on a predefined criteria. The processor further comprises a selection module, a first segmentation module, a second segmentation module, a style transfer module, a position determining module and a rendering

module. The selection module selects a set of frames representing the time duration in the input. The first segmentation module segments each of the set of frames into a number of segments using a segmentation algorithm, wherein the number of segments are the source segments having source styles. The second segmentation module segments the advertisement image into a number of advertisement segments. The style transfer module transfers the source style from the source segments to the corresponding advertisement segments using a photorealistic style transfer algorithm to create a series of stylized advertisement images. The position determining module determines an optimal position for the placement of the stylized advertisement images in the entire fixed duration of the video file. The rendering module renders the stylized advertisement image in the style of the video file during the fixed duration, wherein the stylized advertisement image maintains temporal continuity during the fixed duration.
[009] In another aspect, the embodiment herein provides a method for rendering an advertisement image over a video file. Initially, the video file of a fixed duration is provided as an input. In the next step, a time duration is selected in the input for displaying the advertisement image based on a predefined criteria. Further, a set of frames are selected representing the time duration in the input. In the next step, each of the set of frames are segmented into a number of segments using a segmentation algorithm, wherein the number of segments are the source segments having source styles. Similarly, the advertisement image is segmented into a number of advertisement segments. In the next step, the source style is transferred from the source segments to the corresponding advertisement segments using a photorealistic style transfer algorithm to create a series of stylized advertisement images. In the next step, an optimal position is determined for the placement of the stylized advertisement images in the entire fixed duration of the video file. And finally, the stylized advertisement image is rendered in the style of the video file during the fixed duration, wherein the stylized advertisement image maintains temporal continuity during the fixed duration.
[010] It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the

principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[012] Fig. 1 illustrates a block diagram of a system for automatically rendering an advertisement image over a video file according to an embodiment of the present disclosure;
[013] Fig. 2 shows an architectural diagram of the system for automatically rendering an advertisement image over a video file according to an embodiment of the disclosure;
[014] Fig. 3A-3B is a flowchart illustrating the steps involved in rendering an advertisement image over a video file according to an embodiment of the present disclosure; and
[015] Fig. 4 shows graphical representation of variation of representativeness and visual appeal across groups according to an embodiment of the disclosure.
DETAILED DESCRIPTION
[016] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of

disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[017] The expression “media content” or “input media content” or “entertainment media content” or “video file” or “input poster image” in the context of the present disclosure refers to the input media file on which the stylized advertisement is going to be rendered on to the appropriate place automatically.
[018] Referring now to the drawings, and more particularly to Fig. 1 through Fig. 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[019] According to an embodiment of the disclosure, a system 100 for automatically rendering an advertisement (ad) over a media content such as a video file is shown in the block diagram of Fig. 1 and the schematic representation of Fig. 2. The system 100 provides an automatic method for placing the advertisement such that the aesthetic experience of the viewer is not compromised while watching the media content. The method uses algorithms to detect scenes and automatically annotate them with scene descriptions from the public domain, so that a relevant advertisement can be matched to the content. For example, placing a soft drink ad automatically during a desert scene in a video. In another embodiment, the advertisement could be a series of images or a GIF file.
[020] According to an embodiment of the disclosure, the media content can be any content which the user watches online or offline. In another example, the media content can also be a poster image over which the stylized advertisements can be rendered without disturbing the aesthetic appeal of the poster image.
[021] The present disclosure analyses the input entertainment media and

identifies the regions appropriate for stylized ad placement. Such regions are selected on the basis of multiple factors, viz. amount of motion, context of the content, aesthetics, area available for ad placement, intensity of emotion. Once such regions are selected, candidate ads are determined on the basis of context and aesthetic considerations.
[022] According to an embodiment of the disclosure, the system 100 further comprises an input module 102, a user interface 104, a memory 106 and a processor 108 as shown in the block diagram of Fig. 1. The processor 108 works in communication with the memory 106. The processor 108 further comprises a plurality of modules. The plurality of modules accesses the set of algorithms stored in the memory 106 to perform certain functions. The processor 108 further comprises a selection module 110, a first segmentation module 112, a second segmentation module 114, a style transfer module 116, a position determining module 118 and a rendering module 120.
[023] According to an embodiment of the disclosure the input module 102 is configured to provide the media content such as the video file as an input as shown in Fig. 1 and Fig. 2. The video file is normally of a fixed duration. The video can be hosted on a video sharing website. In an example, the media content can be an entertainment media content. Further, the user interface 104 is configured to select a time duration in the input for displaying the advertisement image. The advertisement image is displayed based on a predefined criteria. The predefined criteria comprises the duration of the video which has limited motion. The input module 102 and the user interface 104 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite.
[024] According to an embodiment of the disclosure, the processor 108 comprises the selection module 110. The selection module 110 is configured to select a set of frames representing the time duration in the input media content. The input media content is analyzed to identify the regions appropriate for

stylized ad placement. Such regions are selected on the basis of multiple factors. The multiple factors include amount of motion in the fixed duration, context of the input media content, aesthetics, area available for the ads placement, intensity of the emotion etc.
[025] According to an embodiment of the disclosure, the processor 108 also comprises the first segmentation module 112 and the second segmentation module 114. The first segmentation module 112 is configured to segment each of the set of frames into a number of segments using a segmentation algorithm, wherein the number of segments are the source segments having source styles. The second segmentation module 114 is configured to segment the advertisement image into a number of advertisement segments.
[026] According to an embodiment of the disclosure, the processor 108 further comprises the style transfer module 116. The style transfer module 116 is configured to transfer the source style from the source segments to the corresponding advertisement segments using the photorealistic style transfer algorithm to create a series of stylized advertisement images. In the present example, the photo-realistic style transfer network was used for transferring the poster color style to the logo. The style transfer loss function L (to be minimized) is given in equation (1):
L = Σᵢ₌₁ᴺ αᵢ Lcᵢ + Γ Σᵢ₌₁ᴺ βᵢ Lsᵢ + λ Lm …………………………….. (1)
where N is the total number of layers in the convolutional neural network and i denotes the i-th convolutional layer. Γ is a weight controlling the style loss, αᵢ and βᵢ are the weights of the i-th layer, and λ is a weight for the photorealism regularization. Lc and Ls are the content and style losses, respectively, while Lm is the photorealism regularization term.
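By way of an illustrative sketch, the combination of terms in equation (1) can be computed as follows. All numeric values below are hypothetical stand-ins introduced only for illustration, not values from the disclosure:

```python
# Illustrative combination of the loss terms:
# L = sum_i(alpha_i * Lc_i) + Gamma * sum_i(beta_i * Ls_i) + lambda * Lm
# All numeric values are hypothetical stand-ins.

def total_loss(content_losses, style_losses, alphas, betas,
               gamma, lam, photorealism_reg):
    """Combine per-layer content and style losses with the photorealism term."""
    content_term = sum(a * lc for a, lc in zip(alphas, content_losses))
    style_term = gamma * sum(b * ls for b, ls in zip(betas, style_losses))
    return content_term + style_term + lam * photorealism_reg

# Hypothetical per-layer losses for a 3-layer network
L = total_loss(
    content_losses=[0.5, 0.3, 0.2],
    style_losses=[0.4, 0.4, 0.1],
    alphas=[1.0, 1.0, 1.0],
    betas=[0.2, 0.2, 0.2],
    gamma=20.0,            # weight controlling the style loss
    lam=1.0,               # weight for the photorealism regularization
    photorealism_reg=0.05,
)
print(round(L, 2))  # → 4.65
```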
[027] According to an embodiment of the disclosure, the processor 108 further comprises the position determining module 118. The position determining module 118 is configured to determine an optimal position for the placement of the stylized advertisement images in the entire fixed duration of the video file. The stylized advertisement should be placed on the input media content with consideration of various parameters such as aesthetics, proximity to saliency

(likelihood of being noticed) and texture etc.
[028] According to an embodiment of the disclosure, the processor 108 further comprises the rendering module 120. The rendering module 120 is configured to render the stylized advertisement image in the style of the video file during the fixed duration, wherein the stylized advertisement image maintains temporal continuity during the fixed duration.
[029] In operation, a flowchart 300 illustrates the method for rendering the advertisement image over the video file. Initially at step 202, the video file of a fixed duration is provided as the input using the input module 102. At step 204, the time duration is selected in the input for displaying the advertisement image based on the predefined criteria using the user interface 104. The time duration is the duration where the stylized advertisement is rendered, chosen on the basis of multiple factors such as amount of motion, context of the content, aesthetics, the area available for ad placement, intensity of emotion etc. At step 206, the set of frames representing the time duration in the input is selected.
[030] In the next step 208, each of the selected set of frames are segmented into a number of segments using a segmentation algorithm, wherein the number of segments are the source segments having source styles. The use of any available segmentation algorithm is well within the scope of this disclosure. At step 210, the advertisement image is segmented into the number of advertisement segments. At step 212, the source style from the source segments is transferred to the corresponding advertisement segments using the photorealistic style transfer algorithm to create a series of stylized advertisement images.
[031] In the next step 214, the optimal position is determined for the placement of the stylized advertisement images in the entire fixed duration of the video file. The optimal position is determined using various parameters such as aesthetics, proximity to saliency (likelihood of being noticed) and texture etc. And finally at step 216, the stylized advertisement image is rendered in the style of the video file during the fixed duration, wherein the stylized advertisement image maintains temporal continuity during the fixed duration.
[032] According to an embodiment of the disclosure, the system 100 can

also be explained with the help of an example as follows. For the purpose of illustration, the style transfer method was performed on a movie poster. A Warner Brothers (WB) logo was placed on the movie poster after being stylized as per the movie poster.
[033] Initially, a binary mask was created for the WB logo with the shield as a foreground, and the mask was applied to the logo to obtain its foreground. The foreground was converted to gray-scale. The logo was enhanced by applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. Further, the edges were enhanced using an unsharp mask, and artifacts in the image were removed using Gaussian filtering. Then anti-aliasing and sharpening were applied to the image to get the final logo image to be stylized. Two main segments were formed in the logo: the enhanced shield foreground (represented by a white mask) and the background (represented by a black mask).
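A minimal sketch of the preprocessing pipeline above, assuming NumPy and SciPy are available; for brevity, global histogram equalization stands in for CLAHE, and all filter parameters are illustrative assumptions rather than values from the disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_logo(gray):
    """Enhance a gray-scale logo foreground: histogram equalization
    (a simple stand-in for CLAHE), unsharp-mask edge enhancement, then
    Gaussian filtering to suppress artifacts."""
    hist, _ = np.histogram(gray.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
    equalized = cdf[gray].astype(np.uint8)
    # Unsharp mask: add back a scaled difference from the blurred image
    blurred = gaussian_filter(equalized.astype(float), sigma=3)
    sharpened = np.clip(equalized + 0.5 * (equalized - blurred), 0, 255)
    # Light Gaussian filtering to remove residual artifacts
    return gaussian_filter(sharpened, sigma=0.8).astype(np.uint8)

# Synthetic "shield" foreground on a dark background
logo = np.zeros((64, 64), dtype=np.uint8)
logo[16:48, 16:48] = 180
enhanced = preprocess_logo(logo)
```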
[034] In the next step, the poster image was segmented for similar color styles. For this, the image was flattened using mean-shift filtering to remove variation in the color due to lighting and other effects. Then the image was converted from RGB (Red-Green-Blue) to HSV (Hue-Saturation-Value) space. The image was then clustered into 5 segments using k-means clustering on the HSV values of pixels. This was done in order to get more uniform color content in segments. While clustering, weights of 0.66, 0.17 and 0.17 were applied to the H, S and V channels respectively to increase the importance of color. These weights were used as they gave the best visual reconstruction of the poster image and a reasonable segmentation. A median filter was then used on the segmentation map to remove salt-and-pepper noise. The segments were color coded for future use. A pre-trained CNN model was also used for detecting text in the poster, and the detected text was added to the blue segment.
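The weighted-HSV k-means segmentation above can be sketched as follows; the mean-shift flattening, median filtering and text-detection steps are omitted for brevity, and the simple k-means loop here (with a fixed seed and iteration count) is an illustrative stand-in for a production clustering routine:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV for an (H, W, 3) float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    diff = mx - mn
    h = np.zeros_like(mx)
    mask = diff > 0
    idx = mask & (mx == r)
    h[idx] = ((g - b)[idx] / diff[idx]) % 6
    idx = mask & (mx == g)
    h[idx] = (b - r)[idx] / diff[idx] + 2
    idx = mask & (mx == b)
    h[idx] = (r - g)[idx] / diff[idx] + 4
    h /= 6
    s = np.where(mx > 0, diff / np.where(mx > 0, mx, 1), 0)
    return np.stack([h, s, mx], axis=-1)

def segment_poster(rgb, k=5, weights=(0.66, 0.17, 0.17), iters=20, seed=0):
    """Cluster pixels with k-means in HSV space, weighting H, S and V by
    0.66, 0.17 and 0.17 to increase the importance of color."""
    pts = (rgb_to_hsv(rgb) * np.asarray(weights)).reshape(-1, 3)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(rgb.shape[:2])
```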
[035] In the next step, the best segments were selected from the afore-mentioned 5 segments of the poster to map to the foreground and background segments of the logo. Two strategies were considered for this. The first depended on a dark background. It was observed that the majority of WB logos used in movies showed a dark background, and this strategy was proposed to enforce a

dark background on the logo. The poster segment with the least luminance was chosen and assigned to the background segment of the logo. It was also observed that WB logos with more contrast in the foreground were more visually appealing. Therefore, from the remaining segments, the top two segments with the highest value of contrast were found, and the one with more luminance was chosen to map to the foreground.
[036] The second method was based on the poster background. It was found that in some cases, the background of the poster was not the darkest segment, and therefore, would not map on to the background of the logo. Yet, this segment happened to be more representative of the poster. The poster background segment was identified as the largest segment that ran along the edges of the top two-thirds of the poster (since the bottom one-third typically contained credits). This segment was assigned to the background segment of the logo. Out of the remaining segments, the segment with the most contrast was assigned to the logo foreground. In doing so, only those segments were allowed where the luminance L of both the segments was in an acceptable range (50 < L < 200), i.e., they were neither too dark nor too bright. This was to prevent color bleeding, which produced visually unflattering results.
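The segment-selection logic above can be sketched as follows, assuming per-segment luminance and contrast statistics have already been computed; the `stats` structure and its field names are hypothetical, introduced only for illustration:

```python
def assign_segments(stats):
    """Pick (background, foreground) segment ids following the dark-background
    strategy: the least-luminant segment becomes the background; of the rest,
    take the top two segments by contrast (restricted to the acceptable
    luminance range 50 < L < 200) and map the more luminant one to the
    foreground.

    `stats` maps a segment id to {'luminance': mean 0-255, 'contrast': float}.
    """
    background = min(stats, key=lambda s: stats[s]["luminance"])
    candidates = [
        s for s in stats
        if s != background and 50 < stats[s]["luminance"] < 200
    ]
    if not candidates:  # nothing in range; fall back to all remaining segments
        candidates = [s for s in stats if s != background]
    top_two = sorted(candidates,
                     key=lambda s: stats[s]["contrast"], reverse=True)[:2]
    foreground = max(top_two, key=lambda s: stats[s]["luminance"])
    return background, foreground

stats = {
    0: {"luminance": 12, "contrast": 5},    # darkest -> background
    1: {"luminance": 120, "contrast": 48},
    2: {"luminance": 150, "contrast": 47},
    3: {"luminance": 90, "contrast": 10},
}
print(assign_segments(stats))  # → (0, 2)
```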
[037] The remaining segments were merged into the blue segment, which was discarded. The semantic segmentation approach segments the characters together, thus creating one foreground segment out of a variety of hues (e.g. skin tone, dog color, red dress, green dress). Its background segment clubs together the shadow and the halo. It also produces an artifact-like cyan segment from the halo. The proposed color-based segmentation separates these hues and also separates the lighter shaded halo from the darker background. Using the dark background strategy, the darker background segment was assigned to the background and the closer character hues to the foreground.
[038] Further, as explained above in equation (1), the photo-realistic style transfer network was used for transferring the poster color style to the logo. The content and style images were provided to one network that performed a segment-by-segment artistic style transfer. Since more uniform color-based segments were

used, it reduced bleeding of style across segments. This network was run for 1500 epochs to generate the intermediate result, an artistic style transferred image. This image was provided to a second network, which enforced photo-realism on the image, allowing only locally affine transformations in color space. In every epoch, it minimized the photorealism regularization term, Lm. The network ran for 800 iterations for each poster-logo pair. Later, a sharp shield from the 100th iteration was combined with a smoothened background from the 800th iteration using the logo masks. Experimentally, the content weight (α) and the style weight (β) were found to be 150 and 20 respectively.
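The final compositing step above, combining the sharp shield from an early iteration with the smoothened background from a late one, reduces to a masked blend. A minimal sketch, assuming both renderings and the binary shield mask share the same height and width:

```python
import numpy as np

def composite(sharp, smooth, fg_mask):
    """Take foreground pixels from the sharp (early-iteration) image and
    background pixels from the smooth (late-iteration) image."""
    return np.where(fg_mask.astype(bool)[..., None], sharp, smooth)

sharp = np.full((4, 4, 3), 1.0)   # stand-in for the 100th-iteration output
smooth = np.full((4, 4, 3), 2.0)  # stand-in for the 800th-iteration output
mask = np.zeros((4, 4))
mask[:2] = 1                      # top half is the shield foreground
out = composite(sharp, smooth, mask)
```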
Results
[039] A survey was also conducted to examine the representativeness and aesthetic appeal of the machine-stylized logos as perceived by the general audience. For this purpose, 112 logos were chosen at random from the machine-stylized logos. The visual attributes, viz. luminance, contrast and hue count, of each style poster were measured to characterize it. The hue count was calculated from a 20-bin histogram over hue values. The poster population was divided based on luminance, contrast and hue count into two levels each (low - below the median, and high - above the median). With these two levels for three parameters, eight groups of posters were created. In each survey, one example was included from each group as 8 random poster and logo pairs. Participants were asked their opinion about two statements for each poster and logo pair: 1) The logo represents the poster well (related to representativeness), and 2) The logo looks visually appealing (related to visual appeal). The participants rated these statements on a Likert scale of 1 (Strongly Disagree) to 5 (Strongly Agree). Since the aim was to find out whether the machine-generated logo without other styling dimensions such as texture and special effects could represent the poster well, participants were specifically asked not to rate how well the logo represented colors in the poster.
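The hue-count measure and the binary grouping described above can be sketched as follows; treating values at the median as "high" is an assumption, since the disclosure does not specify the tie-breaking:

```python
import numpy as np

def hue_count(hues, bins=20):
    """Count occupied bins of a 20-bin histogram over hue values in [0, 1)."""
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return int(np.count_nonzero(hist))

def group_id(lum, con, hue, lum_med, con_med, hue_med):
    """Encode (luminance, contrast, hue count) levels as a 3-bit group 0-7;
    the binary digits give the levels in that order, high = at/above median."""
    return ((lum >= lum_med) << 2) | ((con >= con_med) << 1) | int(hue >= hue_med)

# Group 3 = binary 011: low luminance, high contrast, high hue count
print(group_id(40, 7, 12, lum_med=50, con_med=5, hue_med=10))  # → 3
```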
[040] 287 responses were received to the survey. 63% of participants were male, 35% were female, while 2% did not reveal their gender. 1% of participants were below 18 years of age, 42% between 18-25, 40% between 25-35, 11%

between 35-45 and 6% above 45. An average of 20.5 responses were received per poster and logo pair. Average representativeness and appeal ratings were calculated per poster-logo pair as well as per group. It was observed that the representativeness rating was well correlated with the appeal rating (R² = 0.64). So, a logo that was considered to be a good representation of the poster was also likely to be visually appealing to the participants.
[041] Fig. 4 shows box plots for the eight groups according to the visual properties of the poster. The group number converted to binary indicates the levels of luminance, contrast and hue count, respectively. Overall, group 3 (low luminance, high contrast, high hue count) had the highest mean ratings, while group 6 (high luminance, high contrast, low hue count) had the lowest ratings. A one-way ANOVA test was also conducted and it was found that ratings differed significantly among the groups (F = 2.67, p < 0.05 for representativeness; F = 2.49, p < 0.05 for visual appeal). On post-hoc analysis, it was found that group 6 performed significantly worse than the other groups except group 5 for representativeness. For visual appeal, it performed significantly worse than group 3. It was believed that group 3 received the highest ratings since these were darker posters with good contrast and high colorfulness. This combination allowed segmentation with some color variation, mostly a dark background for the logo background, and good contrast on the shield. This combination helped increase the visual appeal of the stylized logo. Group 6 received the lowest ratings since these were brighter posters with high contrast but with fewer colors. A low hue count caused less color variety in segmentation, so while the darkest segments went to the background, bright segments with little color variation often went to the shield. This caused a glowing effect on the logo, often not representative of the poster.
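A one-way ANOVA of the kind reported above can be run with `scipy.stats.f_oneway`; the ratings below are synthetic stand-ins generated for illustration, not the survey data, so the F and p values will differ from those reported:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Hypothetical Likert-style ratings (about 20 responses per group) for the
# eight poster groups; a small per-group offset stands in for a real effect.
ratings = [rng.normal(loc=3.0 + 0.1 * g, scale=0.8, size=20) for g in range(8)]
f_stat, p_value = f_oneway(*ratings)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```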
[042] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent

elements with insubstantial differences from the literal language of the claims.
[043] The embodiments of the present disclosure herein solve the problems of computational effort, time and manual effort required for stylizing advertisements and placing them on videos. The disclosure provides a method and system for automatically rendering the advertisement image over the video file.
[044] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[045] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[046] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological

development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[047] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[048] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

WE CLAIM:
1. A method (200) for rendering an advertisement image over a video file,
the method comprising the processor-implemented steps of:
providing the video file of a fixed duration as an input (202);
selecting a time duration in the input for displaying the advertisement image based on predefined criteria (204);
selecting a set of frames representing the time duration in the input (206);
segmenting each of the set of frames into a number of segments using a segmentation algorithm, wherein the number of segments are the source segments having source styles (208);
segmenting the advertisement image into a number of advertisement segments (210);
transferring the source style from the source segments to the corresponding advertisement segments using a photorealistic style transfer algorithm to create a series of stylized advertisement images (212);
determining an optimal position for the placement of the stylized advertisement images in the entire fixed duration of the video file (214); and
rendering the stylized advertisement image in the style of the video file during the fixed duration, wherein the stylized advertisement image maintains temporal continuity during the fixed duration (216).
2. The method of claim 1 further comprising the step of removing iterative optimization of the style transfer.
3. The method of claim 1, wherein the rendering is done for a series of images, wherein the series of images are either a GIF file or a video file.

4. The method of claim 1, wherein the optimal position is dependent on optical flow, illumination, texture, contrast and proximity to a salient region in the video.
5. The method of claim 1, wherein the predefined criteria comprise a duration of the video which has limited motion.
6. The method of claim 1 further comprising the step of selecting the advertisement image from a set of advertisement images on the basis of context and aesthetic considerations.
7. The method of claim 1, wherein the segmentation of each of the set of frames is based on hue characteristics of the set of frames.
8. The method of claim 1, wherein the stylized advertisement image is rendered as an overlay or as a part of the video file.
9. A system (100) for rendering an advertisement image over a video file, the system comprises:
an input module (102) for providing the video file of a fixed duration as an input;
a user interface (104) for selecting a time duration in the input for displaying the advertisement image based on predefined criteria;
a memory (106); and
a processor (108) in communication with the memory, wherein the processor further comprises:
a selection module (110) for selecting a set of frames representing the time duration in the input;
a first segmentation module (112) for segmenting each of the set of frames into a number of segments using a segmentation algorithm, wherein the number of segments are the source segments having source styles;
a second segmentation module (114) for segmenting the advertisement image into a number of advertisement segments;
a style transfer module (116) for transferring the source style from the source segments to the corresponding advertisement segments using a photorealistic style transfer algorithm to create a series of stylized advertisement images;
a position determining module (118) for determining an optimal position for the placement of the stylized advertisement images in the entire fixed duration of the video file; and
a rendering module (120) for rendering the stylized advertisement image in the style of the video file during the fixed duration, wherein the stylized advertisement image maintains temporal continuity during the fixed duration.
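The pipeline recited in claims 1, 5, 6 and 8 (select a low-motion window of frames, stylize the advertisement to match the source region, render it as an overlay) can be sketched in miniature. The toy below is purely illustrative and is not the patented method: frames are small grayscale grids, the photorealistic style transfer of step (212) is replaced by a crude mean-intensity match, and every function name is hypothetical.

```python
def motion(frame_a, frame_b):
    """Mean absolute pixel difference between two consecutive frames."""
    return sum(abs(a - b) for ra, rb in zip(frame_a, frame_b)
               for a, b in zip(ra, rb)) / (len(frame_a) * len(frame_a[0]))

def select_low_motion_window(frames, window):
    """Claim 5: pick the contiguous window of frames with the least motion."""
    scores = [sum(motion(frames[j], frames[j + 1])
                  for j in range(i, i + window - 1))
              for i in range(len(frames) - window + 1)]
    start = scores.index(min(scores))
    return start, frames[start:start + window]

def stylize(ad, region):
    """Shift the ad's mean intensity toward the source region's mean --
    a crude stand-in for the per-segment photorealistic style transfer
    of step (212)."""
    flat_r = [p for row in region for p in row]
    flat_a = [p for row in ad for p in row]
    shift = sum(flat_r) / len(flat_r) - sum(flat_a) / len(flat_a)
    return [[int(p + shift) for p in row] for row in ad]

def overlay(frame, ad, top, left):
    """Claim 8: render the stylized ad as an overlay on the frame."""
    out = [row[:] for row in frame]
    for i, row in enumerate(ad):
        for j, p in enumerate(row):
            out[top + i][left + j] = p
    return out
```

For example, given six 4x4 frames whose intensities change everywhere except frames 2-4, `select_low_motion_window(frames, 3)` returns start index 2; the ad is then mean-shifted toward that window's intensity before being overlaid.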

Documents

Application Documents

# Name Date
1 201821049729-STATEMENT OF UNDERTAKING (FORM 3) [28-12-2018(online)].pdf 2018-12-28
2 201821049729-REQUEST FOR EXAMINATION (FORM-18) [28-12-2018(online)].pdf 2018-12-28
3 201821049729-FORM 18 [28-12-2018(online)].pdf 2018-12-28
4 201821049729-FORM 1 [28-12-2018(online)].pdf 2018-12-28
5 201821049729-FIGURE OF ABSTRACT [28-12-2018(online)].jpg 2018-12-28
6 201821049729-DRAWINGS [28-12-2018(online)].pdf 2018-12-28
7 201821049729-DECLARATION OF INVENTORSHIP (FORM 5) [28-12-2018(online)].pdf 2018-12-28
8 201821049729-COMPLETE SPECIFICATION [28-12-2018(online)].pdf 2018-12-28
9 201821049729-FORM-26 [14-02-2019(online)].pdf 2019-02-14
10 201821049729-Proof of Right (MANDATORY) [26-02-2019(online)].pdf 2019-02-26
11 Abstract1.jpg 2019-03-28
12 201821049729-ORIGINAL UR 6(1A) FORM 26-210219.pdf 2019-12-09
13 201821049729-ORIGINAL UR 6(1A) FORM 1-280219.pdf 2019-12-18
14 201821049729-FER_SER_REPLY [24-06-2021(online)].pdf 2021-06-24
15 201821049729-COMPLETE SPECIFICATION [24-06-2021(online)].pdf 2021-06-24
16 201821049729-CLAIMS [24-06-2021(online)].pdf 2021-06-24
17 201821049729-CLAIMS [24-06-2021(online)]-1.pdf 2021-06-24
18 201821049729-FER.pdf 2021-10-18
19 201821049729-US(14)-HearingNotice-(HearingDate-05-03-2024).pdf 2024-02-06
20 201821049729-FORM-26 [08-02-2024(online)].pdf 2024-02-08
21 201821049729-Correspondence to notify the Controller [04-03-2024(online)].pdf 2024-03-04
22 201821049729-Written submissions and relevant documents [12-03-2024(online)].pdf 2024-03-12
23 201821049729-PatentCertificate27-06-2024.pdf 2024-06-27
24 201821049729-IntimationOfGrant27-06-2024.pdf 2024-06-27

Search Strategy

1 2021-06-3012-42-43AE_30-06-2021.pdf
2 2020-12-1512-31-24E_15-12-2020.pdf

ERegister / Renewals

3rd: 02 Jul 2024

From 28/12/2020 - To 28/12/2021

4th: 02 Jul 2024

From 28/12/2021 - To 28/12/2022

5th: 02 Jul 2024

From 28/12/2022 - To 28/12/2023

6th: 02 Jul 2024

From 28/12/2023 - To 28/12/2024

7th: 19 Nov 2024

From 28/12/2024 - To 28/12/2025

8th: 20 Nov 2025

From 28/12/2025 - To 28/12/2026