
Dynamic Video Processing

Abstract: The present subject matter discloses a system and method for dynamically processing video. The system is configured to identify a video frame, from a video captured in real-time, comprising an object having an average intensity value above a predetermined threshold. The system is further configured to determine a contour associated with the object. The system is further configured to generate a glitter corresponding to the object. The glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object. The system is further configured to superimpose the glitter at one or more locations along the contour of the object to generate a processed video frame. The system is further configured to replace the video frame, in the video, with the processed video frame to generate a processed video.


Patent Information

Application #: 201611010544
Filing Date: 28 March 2016
Publication Number: 18/2016
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Parent Application:

Applicants

HCL Technologies Limited
B-39, Sector 1, Noida 201 301, Uttar Pradesh, India

Inventors

1. PANDEY, Anurag
HCL Technologies Ltd, SEZ Plot No 3A Software Tower-2, Sec 126, Noida - 201301, Uttar Pradesh, India
2. KUMAR, Arun
HCL Technologies Ltd, SEZ Plot No 3A Software Tower-2, Sec 126, Noida - 201301, Uttar Pradesh, India
3. KHURANA, Nitin
HCL Technologies Ltd, SEZ Plot No 3A Software Tower-2, Sec 126, Noida- 201301, Uttar Pradesh, India

Specification

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application does not claim priority from any patent application.
TECHNICAL FIELD
[002] The present subject matter described herein, in general, relates to a system and method for dynamically processing a video; more particularly, the system and method relate to real-time processing of video frames to add graphical effects.
BACKGROUND
[003] With the growth of animation and image processing techniques, the trend of adding artificial effects to images and videos has increased considerably. Enhancing and highlighting a special region through video post-processing is a common technique used these days for adding effects to videos. In some commercials, the artificial effect is clearly visible; for example, a false sparkle over a jewelry set, shining stars, etc., are added to the video to generate artificial glittering effects. To bring such effects to videos, the animation is usually accomplished by extracting and processing images on a frame-by-frame basis from the running video.
[004] In video post-processing, animators identify the shining objects and manually select the region in which to place sparkles, rendering the image with similarly or almost similarly colored images such as stars and sparkles. Image rendering is a procedure to merge two images of identical format to bring about an overlapping effect. The overlapping effect can be controlled by the alpha parameters of the images to bring transparency and variation in color, as in the sketch below. However, the process of video post-processing takes a huge amount of time and requires various expensive offline tools to bring about the desired effects.
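As a minimal sketch of the alpha-controlled overlap described above, assuming OpenCV; the file names and blending weights are illustrative, not taken from the application:

```python
import cv2

# Hypothetical inputs: a video frame and a same-size sparkle overlay.
base = cv2.imread("gem_frame.png")
sparkle = cv2.imread("sparkle.png")

# blended = alpha * base + beta * sparkle + gamma; varying alpha/beta
# controls the transparency and color variation of the overlap.
blended = cv2.addWeighted(base, 0.7, sparkle, 0.3, 0)
cv2.imwrite("blended.png", blended)
```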
[005] To bring sparkling effects to gems and stones, animators typically use various post-processing tools and the method below:
1. Converting the captured video into a desired number of frames
2. Manually extracting the shining objects and gems in the video
3. Selecting a rendering image matching the color and size of the gem
4. Rendering/merging the two images and combining the star/sparkle frames back to the desired FPS
[006] However, the process of adding effects is totally dependent on the skills of the animator. For a good post-processing output, additional skills are required, such as knowledge of the actual direction/source of light and of the colors exhibited by crystals in the presence of external light. Knowledge of the right size of the sparkle, the distance between the sparkle and the real object, and the frame rate with respect to the sparkle appearance time is also required. A mismatch in any of the above factors can hamper the originality of the video and can make the video look ordinary or artificial and of no use for commercials or highlighting.
SUMMARY
[007] This summary is provided to introduce aspects related to a system and method for dynamically processing a video, which are further described below in the detailed description. This summary is not intended to identify essential features of the subject matter, nor is it intended for use in determining or limiting the scope of the subject matter.
[008] In one implementation, a system for dynamically processing a video is disclosed. The system may comprise a processor and a memory coupled to the processor, wherein the processor is configured to execute instructions stored in the memory. In one embodiment, the processor may execute instructions stored in the memory to identify a video frame, from a video captured in real-time. The video frame may comprise an object having an average intensity value above a predetermined threshold. The processor may further execute instructions stored in the memory to determine a contour associated with the object. The processor may further execute instructions stored in the memory to generate a glitter corresponding to the object. The glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object. The processor may further execute instructions stored in the memory to superimpose the glitter at one or more locations along the contour of the object to generate a processed video frame. The processor may further execute instructions stored in the memory to replace the video frame, in the video, with the processed video frame to generate a processed video.
[009] In another implementation, a method for dynamically processing a video is disclosed. The method may comprise a step of identifying a video frame, from a video captured in real-time. The video frame may comprise an object having an average intensity value above a predetermined threshold. The method may further comprise a step of determining a contour associated with the object. The method may further comprise a step of
generating a glitter corresponding to the object. The glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object. The method may further comprise a step of superimposing the glitter at one or more locations along the contour of the object to generate a processed video frame. The method may further comprise a step of replacing the video frame, in the video, with the processed video frame to generate a processed video.
[0010] In yet another implementation, a non-transitory computer readable medium embodying a program executable in a computing device for dynamically processing a video is disclosed. The program may comprise a program code for identifying a video frame, from a video captured in real-time. The video frame may comprise an object having an average intensity value above a predetermined threshold. The program may further comprise a program code for determining a contour associated with the object. The program may further comprise a program code for generating a glitter corresponding to the object. The glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object. The program may further comprise a program code for superimposing the glitter at one or more locations along the contour of the object to generate a processed video frame. The program may further comprise a program code for replacing the video frame, in the video, with the processed video frame to generate a processed video.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0012] Figure 1 illustrates a network implementation of a system for dynamically processing a video captured in real-time, in accordance with an embodiment of the present subject matter.
[0013] Figure 2 illustrates the system for dynamically processing a video captured in real-time, in accordance with an embodiment of the present subject matter.
[0014] Figure 3 illustrates a method for dynamically processing a video captured in real-time using the system, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0015] The present subject matter relates to a system and method for dynamically processing a video captured in real-time. In one embodiment, the system is configured to bring natural sparkling effects over gems and shining objects in a video captured by a camera at run-time. In one embodiment, the system is configured to analyze the video to identify one or more shining objects in at least one video frame of the video. The shining object may be identified based on a highest-intensity region in the video frame. The highest-intensity region may be identified based on a predetermined intensity threshold. Once the highest-intensity regions are captured, the rest of the region is masked by the system for further operations. In one embodiment, the system is configured to analyze the captured frames at run-time and identify the following aspects of the shining object, as sketched in the example after this list:
- The colors of the shining object
- Size of the shining object
- Intensity of the shining object
- Speed of the sparkling effect
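A minimal sketch of how the first three aspects might be measured is given below, assuming OpenCV; the threshold value and all function names are illustrative, and the speed of the sparkling effect would come from frame-to-frame analysis, which is omitted here:

```python
import cv2

INTENSITY_THRESHOLD = 200  # assumed value of the predetermined threshold

def analyze_shining_object(frame):
    """Locate the highest-intensity region and report its color, size,
    and intensity; the rest of the frame is masked out."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Keep only pixels brighter than the predetermined threshold.
    _, mask = cv2.threshold(gray, INTENSITY_THRESHOLD, 255,
                            cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) == 0:
        return None  # no shining object in this frame
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    return {
        "mask": mask,
        "color": cv2.mean(frame, mask=mask)[:3],  # average BGR of the object
        "size": max(w, h),                        # linear size, in pixels
        "intensity": cv2.mean(gray, mask=mask)[0],
    }
```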
[0016] In the real-time situation, the system is configured to render the shining object with similarly colored objects, hereafter referred to as glitters. The glitters may comprise sparkles and stars. The size of the glitters may be in proportion to the size of the shining object, to give a natural effect in the video under processing.
[0017] In one embodiment, the system is configured to select a video frame, from the video, comprising the shining object. For this purpose, the system is configured to classify the pixels in the video frame based on intensity, to identify the set of objects in the video frame. Further, the system is configured to extract one or more objects with a high intensity value from the set of objects in the video frame, using RGB color lookup tables created for various types of gems such as topaz, ruby, diamond, crystal, and sapphire. The RGB color lookup tables are stored inside the memory of the system. In one embodiment, the system may identify shining objects based on pixels having an intensity over a predefined threshold, using adaptive thresholding techniques. The adaptive thresholding technique enables separation of the current video frame into two parts, namely a residual frame and the shining objects, as sketched below.
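A minimal sketch of the adaptive-thresholding separation, assuming OpenCV; the block size and offset C are illustrative choices, not values from the specification:

```python
import cv2

def split_frame(frame):
    """Split a frame into the shining-object part and the residual frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # A negative C keeps only pixels well above the local neighborhood mean.
    shining_mask = cv2.adaptiveThreshold(gray, 255,
                                         cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY, 31, -15)
    shining = cv2.bitwise_and(frame, frame, mask=shining_mask)
    residual = cv2.bitwise_and(frame, frame,
                               mask=cv2.bitwise_not(shining_mask))
    return shining, residual, shining_mask
```

Adaptive thresholding is preferred here over a single global threshold because the local neighborhood mean compensates for uneven scene lighting.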
[0018] In one embodiment, after identification and extraction of one or more shining objects, like gems and jewelry, from the video frame, the system is configured to process the shining objects to determine the color and size of a glitter/rendering object that is to be added to the shining object. The rendering object may be a six-arm star, an eight-arm star, or a plain dot. In one embodiment, the size of the rendering object is in the range of 2/5 to 1/3 of the shining object, inside square and oval shapes, and may be selected by a random function. To select the color of the rendering object (star and/or dot), the lookup table stored in the memory is used by the system. The RGB value (color) of the rendering object may be selected based on the RGB range corresponding to the average RGB value of the shining object.
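A minimal sketch of the size and color selection follows; the lookup-table entries are illustrative stand-ins for the gem color ranges the system stores in memory, and the function name is assumed:

```python
import random

# Each entry: (low channel values, high channel values, glitter color),
# all in OpenCV's BGR channel order.
GEM_COLOR_LUT = [
    ((200, 200, 200), (255, 255, 255), (255, 255, 255)),  # diamond/crystal
    ((0, 0, 120),     (90, 90, 255),   (130, 130, 255)),  # ruby-like reds
    ((120, 0, 0),     (255, 90, 90),   (255, 130, 130)),  # sapphire-like blues
]

def glitter_for_object(object_size, avg_color):
    """Pick a glitter size, shape, and color for a shining object."""
    # Size between 1/3 and 2/5 of the object's linear size, chosen randomly.
    size = int(object_size * random.uniform(1 / 3, 2 / 5))
    shape = random.choice(["six_arm_star", "eight_arm_star", "dot"])
    for low, high, color in GEM_COLOR_LUT:
        if all(l <= v <= h for l, v, h in zip(low, avg_color, high)):
            return size, shape, color
    return size, shape, tuple(avg_color)  # fall back to the object's own color
```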
[0019] In one embodiment, the system is configured to identify contours corresponding to the shining objects. The contours may be maintained in the form of a function, wherein the function may be in the form of random pairs of (x, y) coordinates on a local contour. The rendering/superimposition of the rendering objects is performed on the pairs of (x, y) coordinates at the boundary of the contour associated with the shining object. In one embodiment, the live stream of the frames captured by a camera is displayed on an LCD display. The system is configured to perform analysis on a frame-by-frame basis to identify the video frame with the shining object and perform the rendering operation.
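A minimal sketch of contour extraction and random boundary-point selection, assuming OpenCV; points_per_contour is an illustrative choice:

```python
import random
import cv2

def glitter_locations(shining_mask, points_per_contour=3):
    """Return random (x, y) pairs on the boundary of each shining-object
    contour in a binary mask."""
    contours, _ = cv2.findContours(shining_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    locations = []
    for contour in contours:
        pts = contour.reshape(-1, 2)  # each row is an (x, y) boundary point
        picks = random.sample(range(len(pts)),
                              min(points_per_contour, len(pts)))
        locations.extend(tuple(pts[i]) for i in picks)
    return locations
```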
[0020] In one embodiment, at the final stage, the rendered objects are integrated back into the original video frame with the shining object in order to generate a processed video frame. The system is configured to perform the image rendering only on the objects in the video frame with high-intensity pixels which form a closed contour. The size of the rendering object is decided by the size of the contour or the size of the shining object. The color of the rendering object is decided by color matching with the high-intensity pixels, using the lookup table (LUT). The residual frame in the video does not undergo any modification or processing. Once the rendering is done on the high-intensity objects, the processed shining object with the rendered objects is merged back with the residual frame to generate the processed video. This process of selecting a video frame for rendering is repeated periodically, wherein the frequency of rendering may be based on the FPS of the video or the displacement of the shining object in the video from one location to another.
[0021] While aspects of described system and method for dynamically processing a video captured in real-time may be implemented in any number of different computing systems,
environments, and/or configurations, the embodiments are described in the context of the following exemplary system.
[0022] Referring now to Figure 1, a network implementation 100 of a system 102 for dynamically processing a video captured in real-time is disclosed. Although the present subject matter is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. Further, the system 102 may also reside over a video capturing device 110, wherein the video capturing device may be a camera, a webcam, or any other video/ image capturing device. In one implementation, the system 102 may be implemented in a cloud-based environment. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2…104-N, collectively referred to as user devices 104 hereinafter, or applications residing on the user devices 104. The users of the user devices 104 may access the system 102 to process a video captured in real-time. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[0023] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0024] Further, the system 102 is connected to the video capturing device 110 through the network 106. In one embodiment, the video capturing device 110 is configured to capture video in real-time whereas the system 102 is configured to receive the video and process the video for dynamically adding effects and accordingly generate a processed video. The system
102 for dynamically processing the video captured in real-time is further elaborated with respect to Figure 2.
[0025] Referring now to Figure 2, the system 102 is illustrated in accordance with an embodiment of the present subject matter. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0026] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with a user directly or through the client devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0027] The memory 206 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210.
[0028] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks, functions or implement particular abstract data types. In one implementation, the modules 208 may include a video analysis module 212, a contour detection module 214, a glitter generation module 216, a frame processing module 218, a frame integration module 220, and other modules 222. The other modules 222 may include programs or coded instructions that supplement applications and functions of the system 102. The data 210, amongst other things, serves as a repository for storing data processed,
received, and generated by one or more of the modules 208. The data 210 may also include a local repository 226 and other data 228.
[0029] In one embodiment, the video capturing device 110 is configured to capture a video on a frame-by-frame basis. In one embodiment, the system 102 may be implemented in the video capturing device 110. In another embodiment, the system 102 may be enabled over a server, wherein the video capturing device 110 is configured to communicate with the server and transfer video frames in real-time to the system 102 through wired or wireless communication means. In one embodiment, the system 102 may be situated at a remote location and may communicate with the video capturing device 110 through the network 106 for receiving the video frames captured by the video capturing device 110 in real-time.
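A minimal sketch of such a real-time, frame-by-frame capture path, assuming OpenCV; process_frame is a hypothetical hook standing in for the modules 212-220 described below:

```python
import cv2

def run_live(process_frame, source=0):
    cap = cv2.VideoCapture(source)  # camera/webcam as the capturing device 110
    try:
        while True:
            ok, frame = cap.read()  # one frame at a time
            if not ok:
                break
            cv2.imshow("processed", process_frame(frame))
            if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```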
[0030] In one embodiment, as soon as the video frames captured by the video capturing device 110 are received at the system 102, the video analysis module 212 is configured to identify a video frame comprising a shining object, hereafter referred to as the object, having an average intensity value above a predetermined threshold. The object may comprise one or more gems, a jewelry set, or a precious ornament. For the purpose of identifying the video frame with the object, the video analysis module 212 is configured to identify a cluster of pixels in the video frame with an average intensity value above the predetermined threshold. This cluster of pixels is identified as the object in the video frame.
[0031] In one embodiment, the contour detection module 214 is configured to determine a contour associated with the object. The contour may be determined by applying adaptive thresholding on the object in the video frame. The contour corresponds to the boundary of the object in the video frame.
[0032] Further, the glitter generation module 216 is configured to generate a glitter corresponding to the object. The glitter may be generated based on dimensions of the object and an average RGB value of pixels associated with the object. Further, the dimensions of the glitter may be determined by analyzing the object based on a lookup table. The lookup table (LUT) is configured to maintain a set of glitters with different dimensions, wherein each glitter from the set of glitters corresponds to a predefined dimension of an object. Further, the color of the glitter may be determined by the glitter generation module 216 based on the average RGB value of the object in the video frame.
[0033] In one embodiment, the frame processing module 218 is configured to superimpose the glitter at one or more locations along the contour of the object to generate a processed video frame. In one embodiment, for the purpose of superimposing the glitter, the pixels in the video frame are replaced by the corresponding color and intensity of the glitter. The glitter may be in the form of a shining star or dots.
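A minimal sketch of the superimposition step, assuming OpenCV; cv2.drawMarker's built-in star is an illustrative stand-in for the six-arm/eight-arm star glitters:

```python
import cv2

def superimpose_glitters(frame, locations, glitter_size, glitter_color):
    """Replace pixels at each chosen contour location with the glitter's
    color to produce the processed video frame."""
    processed = frame.copy()
    for (x, y) in locations:
        cv2.drawMarker(processed, (int(x), int(y)), glitter_color,
                       markerType=cv2.MARKER_STAR,
                       markerSize=max(int(glitter_size), 2), thickness=1)
    return processed
```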
[0034] In one embodiment, once the processed video frame is generated, in the next step, the frame integration module 220 is configured to replace the video frame, in the video, with the processed video frame to generate a processed video. The process of adding glitters and generating the processed video frames is performed on a regular basis. As soon as the processed video frame is generated, the video frame in the video is replaced with the processed video frame to generate the processed video. The processed video is then displayed to the user over the user device 104. The method for dynamically processing a video captured in real-time is further illustrated with respect to the block diagram of Figure 3.
[0035] Referring now to Figure 3, a method 300 for dynamically processing a video captured in real-time is disclosed, in accordance with an embodiment of the present subject matter. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, and the like, that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0036] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 may be considered to be implemented in the above described system 102.
[0037] At block 302, as soon as the video frames captured by the video capturing device 110 are received at the system 102, the video analysis module 212 is configured to identify a video frame comprising a shining object, hereafter referred to as the object, having an average intensity value above a predetermined threshold. The object may comprise one or more gems, a jewelry set, or a precious ornament. For the purpose of identifying the video frame with the object, the video analysis module 212 is configured to identify a cluster of pixels in the video frame with an average intensity value above the predetermined threshold. This cluster of pixels is identified as the object in the video frame.
[0038] At block 304, the contour detection module 214 is configured to determine a contour associated with the object. The contour may be determined by applying adaptive thresholding on the object in the video frame. The contour corresponds to the boundary of the object in the video frame.
[0039] At block 306, the glitter generation module 216 is configured to generate a glitter corresponding to the object. The glitter may be generated based on dimensions of the object and an average RGB value of pixels associated with the object. Further, the dimensions of the glitter may be determined by analyzing the object based on a lookup table. The lookup table (LUT) is configured to maintain a set of glitters with different dimensions, wherein each glitter from the set of glitters corresponds to a predefined dimension of an object. Further, the color of the glitter may be determined by the glitter generation module 216 based on the average RGB value of the object in the video frame.
[0040] At block 308, the frame processing module 218 is configured to superimpose the glitter at one or more locations along the contour of the object to generate a processed video frame. In one embodiment, for the purpose of superimposing the glitter, the pixels in the video frame are replaced by the corresponding color and intensity of the glitter. The glitter may be in the form of a shining star or dots.
[0041] At block 310, once the processed video frame is generated, in the next step, the frame integration module 220 is configured to replace the video frame, in the video, with the processed video frame to generate a processed video. The process of adding glitters and generating the processed video frames is performed on a regular basis. As soon as the processed video frame is generated, the processed video frame is integrated back into the video, which is then displayed to the user over the user device 104.
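A minimal end-to-end sketch tying blocks 302-310 together, assuming OpenCV and reusing the illustrative helpers sketched earlier; the file names and "mp4v" codec are assumptions:

```python
import cv2

def process_video(in_path="input.mp4", out_path="processed.mp4"):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        info = analyze_shining_object(frame)              # block 302
        if info is not None:
            locations = glitter_locations(info["mask"])   # block 304
            # Note: OpenCV stores pixels in BGR order, so the lookup
            # table must use the same channel order as info["color"].
            size, _, color = glitter_for_object(info["size"],
                                                info["color"])  # block 306
            frame = superimpose_glitters(frame, locations,
                                         size, color)     # block 308
        out.write(frame)  # block 310: the processed frame replaces the original
    cap.release()
    out.release()
```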
[0042] Although implementations of methods and systems for dynamically processing a video captured in real-time have been described, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for dynamically processing a video.

WE CLAIM:
1. A system for dynamically processing a video, the system comprising:
a memory; and
a processor coupled to the memory, wherein the processor is configured to execute program instructions stored in the memory to:
identify a video frame, from a video captured in real-time, comprising an object having an average intensity value above a predetermined threshold;
determine a contour associated with the object;
generate a glitter corresponding to the object, wherein the glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object;
superimpose the glitter at one or more locations along the contour of the object to generate a processed video frame; and
replace the video frame, in the video, with the processed video frame to generate a processed video.
2. The system of claim 1, wherein the object comprises one or more gems, a jewelry set, and a precious ornament.
3. The system of claim 1, wherein the contour is determined by applying adaptive thresholding on the object of the video frame.
4. The system of claim 1, wherein dimensions of the glitter are determined by analyzing the object based on a lookup table, wherein the lookup table is configured to maintain a set of glitters, wherein each glitter from the set of glitters corresponds to a predefined dimension of an object.
5. A method for dynamically processing a video, the method comprising steps of:
identifying, by a processor, a video frame, from a video captured in real-time, comprising an object having an average intensity value above a predetermined threshold;
determining, by the processor, a contour associated with the object;
generating, by the processor, a glitter corresponding to the object, wherein the glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object;
superimposing, by the processor, the glitter at one or more locations along the contour of the object to generate a processed video frame; and
replacing, by the processor, the video frame, in the video, with the processed video frame to generate a processed video.
6. The method of claim 5, wherein the object comprises one or more gems, a jewelry set, and a precious ornament.
7. The method of claim 5, wherein the contour is determined by applying adaptive thresholding on the object of the video frame.
8. The method of claim 5, wherein dimensions of the glitter are determined by analyzing the object based on a lookup table, wherein the lookup table is configured to maintain a set of glitters, wherein each glitter from the set of glitters corresponds to a predefined dimension of an object.
9. A non-transitory computer readable medium embodying a program executable in a computing device for dynamically processing a video, the program comprising:
a program code for identifying a video frame, from a video captured in real-time, comprising an object having an average intensity value above a predetermined threshold;
a program code for determining a contour associated with the object;
a program code for generating a glitter corresponding to the object, wherein the glitter is generated based on dimensions of the object and an average RGB value of pixels associated with the object;
a program code for superimposing the glitter at one or more locations along the contour of the object to generate a processed video frame; and
a program code for replacing the video frame, in the video, with the processed video frame to generate a processed video.

Documents

Application Documents

# Name Date
1 Form 9 [28-03-2016(online)].pdf 2016-03-28
2 Form 3 [28-03-2016(online)].pdf 2016-03-28
4 Form 18 [28-03-2016(online)].pdf 2016-03-28
5 Drawing [28-03-2016(online)].pdf 2016-03-28
6 Description(Complete) [28-03-2016(online)].pdf 2016-03-28
7 Form 26 [06-07-2016(online)].pdf 2016-07-06
8 201611010544-GPA-(11-07-2016).pdf 2016-07-11
9 201611010544-Form-1-(11-07-2016).pdf 2016-07-11
10 201611010544-Correspondence Others-(11-07-2016).pdf 2016-07-11
11 abstract.jpg 2016-07-15
12 201611010544-FER.pdf 2019-11-13

Search Strategy

1 search_strategy_201611010544_11-11-2019.pdf