Abstract: A method and system for authenticating video content during a video call is provided. The method includes initiating the video call from a first mobile device to a second mobile device. The method includes capturing the video call in real time. Further, the method includes generating a watermark payload from unique identification details of at least one of the first mobile device and the second mobile device. Furthermore, the method includes applying the watermark payload on the video content. Further, the method includes transmitting watermarked video content from the first mobile device to the second mobile device.
METHOD AND SYSTEM FOR AUTHENTICATING VIDEO CONTENT DURING A VIDEO CALL
FIELD OF THE INVENTION
[1] The present invention relates to the field of mobile devices, and more specifically to the field of watermarking a video content to protect privacy and security during a video call in mobile devices.
BACKGROUND
[2] With the growing demand for the use of digital video streaming in many real time video applications, there is a need for video authentication in order to protect privacy. Various methods exist to verify the genuineness of video streams.
[3] One such method includes distributing a group key for a video conference using a one-time password. The method includes generating and encrypting the group key and transmitting it to a client device. The encrypted group key transmitted to the client device can be decoded with the generated one-time password. Further, an acknowledgement message is generated using the decoded group key and is transmitted back to a server in order to participate in the video conference. This method is user friendly and ensures high-level security. However, it is difficult to stop the illegal use of the authorized video content once the content is rendered to a party.
[4] Digital watermarking is a well-known technique for source tracking, whereby illegal use of any video data can be found and the offender can be traced. Various methods of watermarking multimedia content exist. In one existing method, on receiving a request for video content from a client device, a content server watermarks a portion of the video content before streaming the content towards the client device. However, in applications such as video conferencing, the privacy and security of one's identity is of concern. The person at the other side can take snapshots or record the video call without the permission of the initiating party of the video call. There is a need for a system to protect individual privacy so that conference content cannot be recorded, stored, and distributed without permission of the initiating party. There is also a need for detecting the watermark so that the offender can be traced and prosecuted.
[5] Further, methods to embed a watermark vary according to device capability and network infrastructure. In a scenario where the device capability and network infrastructure are dynamic, selection of a suitable watermarking method is performed manually, making such methods time inefficient. Other existing watermarking methods consume more processing time and also hamper video clarity.
[6] In light of the foregoing discussion, there is a need for a method and system capable of automatically embedding a watermark irrespective of device capabilities and network infrastructure with minimal processing time, thereby authenticating the video content for safeguarding individual privacy and preventing counterfeiting.
SUMMARY
[7] The above-mentioned needs are met by employing a system that embeds a watermark in real time when a call is initiated by a mobile device. The system automatically checks the processing capability of the mobile devices in communication and the available network bandwidth to decide whether watermarking is to be initiated at the device side or the server side. Further, based on the processing capability and the network bandwidth, the system selects the most efficient method to watermark the video call, thereby authenticating the video call.
[8] An example of a method of authenticating a video content during a video call includes initiating the video call from a first mobile device to a second mobile device. The method includes capturing the video call. The capturing of the video call is performed in real time. Further, the method includes generating a watermark payload from unique identification details of at least one of the first mobile device and the second mobile device. Furthermore, the method includes applying the watermark payload on the video content. Further, the method includes transmitting watermarked video content from the first mobile device to the second mobile device.
[9] An example of a method of watermarking a video content during a video call includes initiating the video call from a first mobile device to a second mobile device. The first mobile device initiates the video call via a service provider. The method includes capturing the video call by at least one of the first mobile device and the second mobile device. The capturing of the video call is performed in real time.
Further, the method includes analysing the captured video call to predict components of the video content to be watermarked. The method includes generating a watermark payload based on unique identification details of one of the first mobile device, the second mobile device, or a combination thereof. Furthermore, the method includes determining at least one of processing capability and network bandwidth to decide if watermarking is to be performed at one of the first mobile device, the second mobile device, the service provider, or a combination thereof. Further, the method includes applying at least one of a gray scale watermark and a pattern watermark to the video content based on the processing capability and the network bandwidth.
[10] An example of a mobile device for watermarking a video content during a video call includes a camera module to capture the video call. The capturing of the video call is performed in real time. The mobile device includes an intelligent processing module coupled to the camera module to receive a captured video call. The intelligent processing module includes an analysing module to analyse the captured video call to predict the components of the video content to be watermarked. The intelligent processing module includes an identification module to determine at least one of processing capability and network bandwidth for selecting the mode of watermark to be applied on the video call. Further, the intelligent processing module includes a decision module to decide if watermarking is to be performed at one of a first mobile device, a second mobile device, and a service provider based on at least one of the processing capability and the network bandwidth. The mobile device also includes a watermark generator to generate watermark payload from unique identification details of one of the first mobile device, the second mobile device or a combination thereof. Further, the mobile device includes an encoder module to embed the watermark payload on the video content.
[11] The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF FIGURES
[12] In the following drawings like reference numbers are used to refer to like elements. Although the following figures depict various examples of the invention, the invention is not limited to the examples depicted in the figures.
[13] FIG. 1 is a block diagram of an environment, in accordance with which various embodiments can be implemented;
[14] FIG. 2 is a system for watermarking a video call, in accordance with one embodiment of the invention;
[15] FIG. 3 is a flowchart illustrating watermarking at a first mobile device and a second mobile device during a video call, in accordance with another embodiment of the invention;
[16] FIG. 4 is a flowchart illustrating watermarking at a service provider side during a video call, in accordance with yet another embodiment of the invention;
[17] FIG. 5 is a block diagram illustrating pattern watermarking with geometry and payload pattern generator, in accordance with one embodiment of the invention;
[18] FIG. 6 is an exemplary illustration of skin segmentation pattern and block partitioned pattern, in accordance with another embodiment of the invention;
[19] FIG. 7 is an illustration of gray scale watermarking, in accordance with one embodiment of the invention;
[20] FIG. 8 is a flowchart illustrating pattern watermark detection, in accordance with one embodiment of the invention;
[21] FIG. 9 is a flowchart illustrating gray scale watermark detection, in accordance with another embodiment of the invention; and
[22] FIG. 10 is a flow diagram illustrating the method of embedding watermark in a video call, in accordance with one embodiment of the invention.
DETAILED DESCRIPTION
[23] A method and system for authenticating a video call by watermarking a video content in real time during a video call is explained in the following description. Watermarking protects the video content from counterfeiting. The watermark embedded on a video can be detected and the offender can be traced and prosecuted in case of piracy.
[24] In the present disclosure, relational terms such as first and second, and the like, may be used to distinguish one entity from the other, without necessarily implying any actual relationship or order between such entities.
[25] The following detailed description is intended to provide example implementations to one of ordinary skill in the art, and is not intended to limit the invention to the explicit disclosure, as one of ordinary skill in the art will understand that variations can be substituted that are within the scope of the invention as described.
[26] FIG. 1 is a block diagram of an environment 100. The environment 100 includes a caller 105, a mobile network 110, a service provider 115, and a callee 120. The caller 105 hereinafter referred to as first mobile device 105 initiates a video call to the callee 120, hereinafter referred to as second mobile device 120. Examples of mobile devices include but are not limited to a mobile phone, a tablet device, a personal digital assistant (PDA), a smart phone and a laptop. The first mobile device 105 communicates with the second mobile device 120 via the mobile network 110. The mobile network 110 is a high speed mobile network that supports audio and video calls. Examples of the mobile network 110 include but are not limited to Universal Mobile Telecommunications System (UMTS) and other high speed data networks. The mobile network 110 can include one or more service providers, for example, the service provider 115.
[27] When the first mobile device 105 initiates a video call to the second mobile device 120, the video call is captured and further processed for embedding a watermark. In one example, in order to watermark the video call, the first mobile device 105 identifies the processing capability of the second mobile device 120. The network bandwidth of the mobile network 110 is also identified. Further, the watermark is embedded based on the processing capability and the network bandwidth. The steps involved in watermarking are explained in detail in conjunction with FIG. 2.
[28] FIG. 2 illustrates a system 200 for watermarking a video call. The system 200 includes a camera module 205, an intelligent processing module 210, a watermark generator 230, and an encoder 235. The camera module 205 captures a video in real time when a video call is initiated from a first mobile device to a second mobile device. Further, the captured video is fed into the intelligent processing module 210.
[29] The intelligent processing module 210 includes an analysing module 215, an identification module 220, and a decision module 225. The analysing module 215 analyses the video and generates a vector by segmenting the image based on skin colour. The vector generated from skin colour segmentation is utilized to predict components of the video content to be watermarked. The identification module 220 identifies the processing capability of the mobile devices. The identification module 220 further identifies the network bandwidth of the channel. The processing capability and the network bandwidth are identified to select the mode of watermarking to be performed. The different modes of watermarking include but are not limited to gray scale watermarking and pattern watermarking. The gray scale watermarking is performed if the mobile device is a low end processing device. The pattern watermarking is performed if the mobile device is a high end processing device. Further, the information on the processing capability of the mobile device and the network bandwidth is fed to the decision module 225.
[30] The decision module 225 decides if the watermarking is to be performed at one of the first mobile device, the second mobile device, or a service provider. In one embodiment of the invention, watermarking can be done at both the first mobile device and the second mobile device during a video call. If both the first mobile device and the second mobile device are less capable and do not have the inbuilt feature of watermarking, then watermarking is initiated at the server side. The watermark generator 230 makes use of unique identification details of the mobile device and network to generate a watermark payload to be embedded on the video. The unique identification details include but are not limited to the International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel-50 data includes the regional information of the mobile device. The encoder 235 receives the predicted video content to be watermarked from the intelligent processing module 210. Further, the encoder 235 receives the watermark payload from the watermark generator 230. The encoder 235 embeds the watermark payload on the video content by using one of the selected modes of watermarking. The watermark payload is embedded on at least one of the skin, face, and body part regions in the video content.
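The decision logic described above can be sketched as follows. This is a minimal illustration, not the claimed implementation; the threshold values and the function name are assumptions introduced solely for this example.

```python
# Illustrative sketch of the decision module; thresholds are assumed values,
# not figures taken from the specification.

MIN_DEVICE_SCORE = 50      # hypothetical minimum processing-capability score
MIN_BANDWIDTH_KBPS = 256   # hypothetical minimum bandwidth for pattern marking

def choose_watermark_site(caller_score, callee_score, bandwidth_kbps):
    """Return (where watermarking runs, which watermark mode is used)."""
    if caller_score >= MIN_DEVICE_SCORE:
        site = "first_device"
    elif callee_score >= MIN_DEVICE_SCORE:
        site = "second_device"
    else:
        site = "service_provider"   # both devices too weak: server-side fallback
    # High-end device plus ample bandwidth -> pattern; otherwise gray scale.
    high_end = max(caller_score, callee_score) >= MIN_DEVICE_SCORE
    mode = ("pattern" if high_end and bandwidth_kbps >= MIN_BANDWIDTH_KBPS
            else "grayscale")
    return site, mode

print(choose_watermark_site(80, 30, 512))   # capable caller, good bandwidth
print(choose_watermark_site(20, 25, 512))   # both low-end -> server side
```

A capable device with adequate bandwidth is assigned pattern watermarking; any shortfall in capability or bandwidth falls back to gray scale, mirroring the selection criteria recited above.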
[31] FIG. 3 illustrates watermarking performed in a mobile device during a video call. The video call is initiated from a first mobile device 305 to a second mobile device 310 via a service provider. The first mobile device 305 initiates watermarking in at least one of the first mobile device 305 and the second mobile device 310 based on the processing capability and network bandwidth.
[32] The block 315 illustrates the flow diagram of watermarking the captured video call at the first mobile device 305. The first mobile device 305 initiates a video call to the second mobile device 310 via a service provider. When a call is initiated, a camera module in the first mobile device 305 starts capturing the video call. Further, watermarking is performed on the captured video call in real time.
[33] At step 325, the skin colour segmentation of the captured video call is performed. Skin colour segmentation is a process of discrimination between skin and non-skin pixels. The first step of skin colour segmentation is to choose a suitable colour space. The RGB colour space and the HSV colour space are the most commonly used colour spaces for video tracking and surveillance. Further, different skin modelling techniques are performed to model the distribution of skin and non-skin colour pixels. Skin segmentation is performed to reduce the regions of watermarking in a video.
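By way of illustration, skin/non-skin discrimination in the RGB colour space can be sketched with the widely used explicit Peer-Kovac rule. The specific modelling technique is not prescribed by the method, so the rule and the sample frame below are assumptions for this example only.

```python
# Per-pixel skin classifier using the explicit Peer-Kovac RGB rule; this is an
# assumed example technique, not the one mandated by the specification.

def is_skin_rgb(r, g, b):
    """Classify one pixel as skin (True) or non-skin (False)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_mask(frame):
    """Build a binary mask marking the skin regions of an RGB frame."""
    return [[1 if is_skin_rgb(r, g, b) else 0 for (r, g, b) in row]
            for row in frame]

frame = [[(220, 170, 140), (30, 60, 200)],   # skin-toned, blue
         [(200, 150, 120), (90, 90, 90)]]    # skin-toned, gray
print(skin_mask(frame))
```

The resulting mask restricts watermark embedding to the skin regions, which is how segmentation reduces the regions of watermarking in a video.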
[34] At step 330, face detection is performed to determine the locations and sizes of human faces in the captured video. The step of face detection is performed by any one of the traditional methods, which include but are not limited to sequential image/frame analysis, diamond/ellipse based analysis, and the like. The auto focus feature inbuilt in the video camera can also be used for efficient face detection. Face detection is done to reduce the overheads of watermarking.
[35] At step 335, a watermark generator generates a watermark payload for embedding on the captured video. The watermark generator makes use of unique identification details of the mobile device to generate the watermark payload. The unique identification detail includes but is not limited to International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel 50 data further includes the regional information of the mobile device. The unique identification detail in the watermark payload makes the source identification easy in case of any illegal use of the video.
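A hypothetical payload construction from the identification details named above may be sketched as follows. The actual bit layout is not specified, so the hash-based packing, the field separator, and the sample identifiers are all assumptions for illustration.

```python
# Hypothetical watermark payload built from device identifiers (IMEI, phone
# number, regional/channel-50 data); the hash-based packing is assumed.
import hashlib

def generate_payload(imei, phone_number, region, n_bits=64):
    """Pack the unique identification details into an n-bit watermark payload."""
    ident = f"{imei}|{phone_number}|{region}".encode("utf-8")
    digest = hashlib.sha256(ident).digest()      # 256-bit digest
    bits = []
    for byte in digest:
        for shift in range(7, -1, -1):
            bits.append((byte >> shift) & 1)
    return bits[:n_bits]                         # truncate to payload size

payload = generate_payload("356938035643809", "+15551234567", "region-07")
print(len(payload), payload[:8])
```

Because the payload is a deterministic function of the identifiers, the same bits can be regenerated later to match a detected watermark against a suspected source device.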
[36] At step 340, the face in the captured video which is to be watermarked is identified. On capturing a video call there may be multiple faces in the background. The multiple faces in the captured video are identified by performing the steps of skin segmentation and face detection. Further, the faces in a video to be watermarked are selected based on the dominance and size of each face. The dominant face is determined among the multiple faces in the captured video. The size of the dominant face is compared with the other faces. If the difference in size of the dominant face compared to the other faces is greater than a threshold percentage, then only the dominant face is watermarked. If the difference in size of the dominant face compared to the other faces is less than the threshold percentage, then the remaining faces are watermarked along with the dominant face.
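The dominant-face selection described above can be sketched as follows. The threshold percentage is left open by the method, so the 30% value used here is purely an assumed example.

```python
# Sketch of the face-selection rule: watermark only the dominant face when it
# is clearly larger than the rest, otherwise watermark every detected face.
THRESHOLD_PCT = 30.0   # assumed size-difference threshold

def faces_to_watermark(face_areas):
    """Given detected face areas (pixels), return indices of faces to mark."""
    if not face_areas:
        return []
    dominant = max(range(len(face_areas)), key=lambda i: face_areas[i])
    others = [a for i, a in enumerate(face_areas) if i != dominant]
    if not others:
        return [dominant]
    # Size difference of the dominant face relative to the largest other face.
    diff_pct = 100.0 * (face_areas[dominant] - max(others)) / face_areas[dominant]
    if diff_pct > THRESHOLD_PCT:
        return [dominant]                 # dominant face clearly larger
    return list(range(len(face_areas)))  # comparable sizes: watermark all

print(faces_to_watermark([10000, 2000, 1500]))  # dominant face only
print(faces_to_watermark([10000, 9000]))        # comparable: all faces
```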
[37] At step 345, the captured video signal is watermarked by embedding watermark payload. The mode of watermarking to be applied is selected based on the device capability and network bandwidth. The different modes of watermark include but are not limited to pattern watermarking and gray scale watermarking. The pattern watermarking is applied if the first mobile device 305 is a high end processing device. The gray scale watermarking is applied if the first mobile device 305 is a low end processing device. The watermarking is done by embedding the watermark payload generated from the watermark generator on the captured video.
[38] At step 350, the generated watermarked video is transmitted to the second mobile device 310 via a service provider.
[39] The block 320 illustrates the flow diagram of watermarking the captured video call at the second mobile device 310. The first mobile device 305 initiates a video call to the second mobile device 310 via the service provider. When a video call is initiated the camera module in the second mobile device 310 starts capturing the video. Further, watermarking is performed on the captured video at the second mobile device 310.
[40] At step 355, the skin colour segmentation of the captured video call is performed. Skin colour segmentation is a process of discrimination between skin and non-skin pixels. The first step of skin colour segmentation is to choose a suitable colour space. The RGB colour space and the HSV colour space are the most commonly used colour spaces for video tracking and surveillance. Further, different skin modelling techniques are performed to model the distribution of skin and non-skin colour pixels. Skin segmentation is performed to reduce the regions of watermarking in a video.
[41] At step 360, face detection is performed to determine the locations and sizes of human faces in the captured video. The face detection is performed by any one of the traditional methods, which include but are not limited to sequential image/frame analysis, diamond/ellipse based analysis, and the like. An auto focus feature inbuilt in the video camera can also be used for efficient face detection. Face detection is done to reduce the overheads of watermarking.
[42] At step 365, a watermark generator generates a watermark payload to be embedded into the captured video. The watermark generator makes use of the unique identification details of the mobile device to generate the watermark payload. The unique identification detail includes but is not limited to International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel 50 data further includes the regional information of the mobile device. The unique identification detail for embedding the watermark makes the source identification easy in case of any illegal use of the video.
[43] At step 370, the face in the video which is to be watermarked is identified. On capturing a video call there may be multiple faces in the background. The multiple faces in the captured video are identified by performing the steps of skin segmentation and face detection. Further, the faces in a video to be watermarked are selected based on the dominance and size of each face. The dominant face is determined among the multiple faces in the captured video. The size of the dominant face is compared with the other faces. If the difference in size of the dominant face compared to the other faces is greater than a threshold percentage, then only the dominant face is watermarked. If the difference in size of the dominant face compared to the other faces is less than the threshold percentage, then the remaining faces are watermarked along with the dominant face.
[44] At step 375, the captured video signal is watermarked by embedding the watermark payload on the video content. The mode of watermarking to be applied is selected based on the device capability and network bandwidth. The different modes of watermark include but are not limited to pattern watermarking and gray scale watermarking. The pattern watermarking is applied if the second mobile device 310 is a high end processing device. The gray scale watermarking is applied if the second mobile device 310 is a low end processing device. The watermarking is done by embedding the watermark payload generated from the watermark generator on the captured video. The watermarking is applied to at least one of the identified faces, the selected components of the captured video, or the area decided by the decision module, based upon the watermark mode selected according to the device capability.
[45] At step 380, the generated watermarked video is transmitted to the first mobile device 305 via a service provider.
[46] In one embodiment of the invention, one of the first mobile device 305 and the second mobile device 310 is notified of any change in settings of the watermarking application. For example, the first mobile device 305 is notified if the watermarking is disabled at the second mobile device 310. Likewise, the second mobile device 310 is notified if the watermarking is disabled at the first mobile device 305. Further, a person using the mobile device can decide if the person wants to continue the video call. This gives high security and robustness for the method of watermarking.
[47] FIG. 4 illustrates a flow chart depicting watermarking at the server side of a service provider 415. The watermarking is performed at the service provider 415 if the processing capability of both a first mobile device 405 and a second mobile device 410 is below a predetermined value required for watermarking. Further, server side watermarking is done if the network bandwidth is low. The service provider 415 performs server side watermarking based on the following steps.
[48] At step 420, the signals from at least one of the first mobile device 405 and the second mobile device 410 are received by the service provider 415. The received signal includes at least one of the captured video, the unique details to be watermarked, and metadata of the captured video. The first mobile device 405 and the second mobile device 410, after identifying the processing capability of the mobile device, determine if the watermarking is to be performed at the mobile device or at the service provider 415. If the processing capability of the mobile device is insufficient to perform watermarking, the captured video call is transmitted to the service provider 415.
[49] At step 425, the capability of the service provider 415 is determined to check if a third party service provider 430 is required for watermarking. The service provider 415 through which the first mobile device 405 communicates with the second mobile device 410 may not support the feature of watermarking. In such a case, a third party service provider 430 is involved to perform watermarking of the video signal. The third party service provider 430 receives information which includes but is not limited to the captured video, the unique details to be watermarked on the captured video, and metadata of the captured video from the service provider 415. If the metadata of the captured video is not available, then the third party service provider 430 performs the steps of skin segmentation and face detection to identify the video content to be watermarked. Further, the third party service provider 430 embeds the watermark on the captured video. The third party service provider 430 can generate revenue by providing the service of watermarking. If the service provider 415 has watermarking capabilities, then steps 435 to 465 are carried out.
[50] At step 435, the received signal from the mobile device is analysed to find if the signal includes metadata of the captured video. The metadata of the captured video is generated by pre-processing the video. The metadata of the captured video can be utilized by the service provider 415 to apply the watermark on the video. The metadata of the captured video contains information related to the components of the video, which includes but is not limited to object details of the video, skin colour segmentation details, face detection details, and other colour information. On analysing the received signal, if the metadata is identified in the received signal, then step 440 is performed. If the metadata of the captured video is not identified in the received signal, then step 445 is performed.
[51] At step 440, the colour and skin segmented data is collected and parsed from the metadata. The components of the captured video to be watermarked are identified from the skin segmented data. The process of watermarking on the selected components and face reduces overhead of watermarking.
[52] At step 445, the video signal is analysed to predict the components of the video to be watermarked. In one embodiment of the invention, the analysis of the video is performed by extracting I-frames of the captured video. An I-frame represents an intra-coded frame among a group of pictures. The skin colour segmentation is performed on the I-frames to identify the skin regions of the captured video. The content of the captured video to be watermarked can be predicted from the skin segmentation and face detection details of the video.
[53] At step 450, the watermark payload is generated from unique identification details of the mobile devices by a watermark generator. The unique identification detail includes but is not limited to International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel 50 data further includes the regional information of the mobile device. The information to be watermarked is identified and a watermark payload is generated.
[54] At step 455, the watermark is embedded on the video. The generated watermark payload and the predicted components of the captured video are used to embed the watermark on the video. The watermark is embedded by applying at least one of gray scale watermarking and pattern watermarking. The gray scale watermarking is applied if the network bandwidth is low. The pattern watermarking is applied if the processing capability and the network bandwidth are high.
[55] At step 460, the watermarked video is regenerated. At step 465, the regenerated signal is transmitted to the first mobile device 405 and second mobile device 410.
[56] FIG. 5 illustrates a block diagram 500 for pattern watermarking a captured video. On initiating a video call, a camera module in the mobile device captures the video call. The captured video is the video input source 505 and is fed to a video encoder 515 through a buffer 510. The video input source 505 can be in one of different formats of multimedia data. Examples of formats of multimedia data include but are not limited to Real Media, MP3 files, MP4 files, and MPEG files. In one embodiment of the invention, the video input source 505 received by the video encoder 515 is an MPEG transport stream or any other format of video or multimedia data. The selection of an MPEG transport stream does not limit the scope of the invention; it merely illustrates the technical aspect of the present invention. The MPEG transport stream includes a group of pictures comprising I frames, B frames, and P frames. The I frames are extracted and fed to a skin colour segmentation module 520. The skin colour segmentation module 520 identifies the colour space suitable for the stream, which includes the RGB colour space. Further, different skin modelling techniques are performed to model the distribution of skin and non-skin colour pixels. The components of the video input source, or any multimedia content input source, to be watermarked are predicted by utilizing the skin colour segmentation details. Further, a scaling factor for making the watermark invisible is calculated from the skin colour segmentation details. The skin and face regions are segmented and fed to a watermark strength adjustment module 525. The watermark strength adjustment module 525 adjusts the intensity of the blue colour channel in each region of the segmented multimedia content. Among the different colour channels, the blue colour channel is selected since human eyes are less sensitive to changes in the blue colour channel. Moreover, the blue channel intensity is adjusted so that the watermark applied remains invisible in the video content. The watermark channel strength is increased or decreased in the watermark strength adjustment module 525 based on the segmented face and skin regions.
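The blue-channel adjustment performed by the watermark strength adjustment module 525 can be sketched as follows; the strength values are assumptions introduced for illustration only.

```python
# Sketch of blue-channel watermark strength adjustment: the watermark is
# carried only in the blue channel, boosted inside skin/face regions and
# attenuated elsewhere. Strength values are assumed, not from the spec.

SKIN_STRENGTH = 6      # assumed blue-channel delta inside skin/face regions
OTHER_STRENGTH = 2     # assumed delta elsewhere

def adjust_blue(frame_rgb, skin_mask, pattern):
    """Apply the +/- watermark pattern to the blue channel only."""
    out = []
    for y, row in enumerate(frame_rgb):
        new_row = []
        for x, (r, g, b) in enumerate(row):
            strength = SKIN_STRENGTH if skin_mask[y][x] else OTHER_STRENGTH
            nb = min(255, max(0, b + strength * pattern[y][x]))
            new_row.append((r, g, nb))   # red and green channels untouched
        out.append(new_row)
    return out

frame = [[(200, 150, 120), (30, 60, 200)]]
mask = [[1, 0]]          # first pixel lies in a skin region
pattern = [[1, -1]]      # +/- watermark pattern
print(adjust_blue(frame, mask, pattern))
```

Only the blue component changes, consistent with the observation above that human eyes are least sensitive to the blue channel.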
[57] The unique identification details of the mobile device are identified by an identification module 530. The unique identification details include but are not limited to the International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel-50 data further includes the regional information of the mobile device. The unique identification details from the identification module 530 are fed to a payload pattern generator 535. The payload pattern generator 535 identifies the details to be embedded and generates a pattern for embedding the watermark. The geometry pattern generator 540 generates a geometry for applying the watermark in the video signal. The payload pattern from the payload pattern generator 535 and the geometry pattern from the geometry pattern generator 540 are fed to a watermark generator 545. The watermark generator 545 generates a pattern watermark based on the payload pattern and the geometry pattern. The pattern watermark is applied on the predicted components of the video signal, which include the skin, face, and body parts in the video content. The watermarked video is further transmitted by a transmission module 550.
The equation for pattern watermarking is as follows:
p(x, y) = I(x, y) + a × w(x, y) (Equation 1)
where p(x, y) is the watermarked video, I(x, y) is the original video input, w(x, y) is the watermark pattern, and a is the scaling factor calculated from skin segmentation for invisibility of the watermark.
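Equation 1 can be illustrated per pixel on a small grayscale frame; the frame, pattern, and scaling factor below are toy values chosen only to make the arithmetic visible.

```python
# Equation 1 applied per pixel: p(x, y) = I(x, y) + a * w(x, y), with the
# result clamped to the valid 8-bit range. Values here are illustrative only;
# in the system the scaling factor a comes from skin segmentation.

def embed_pattern(frame, pattern, a):
    """Add the scaled watermark pattern to a grayscale frame, clamped 0-255."""
    return [[min(255, max(0, frame[y][x] + round(a * pattern[y][x])))
             for x in range(len(frame[0]))]
            for y in range(len(frame))]

frame = [[100, 120], [130, 140]]
pattern = [[1, -1], [-1, 1]]     # +/-1 spread pattern
marked = embed_pattern(frame, pattern, a=4)
print(marked)   # each pixel shifted by +/-4
```

A small scaling factor keeps the per-pixel shift below the visibility threshold, which is how the watermark remains invisible.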
[58] FIG. 6 illustrates the watermark applied on the skin, face and body parts in a video content. Watermarking the face and skin segments reduces the overhead in watermarking and protect the privacy and security of one's identity. Face detection can be efficiently performed on both low end and high end devices. The region 605 illustrates the watermarked face region of the individual. The face detection is performed by any one of the traditional methods which include but is not limited to sequential image/frame analysis, diamond/ellipse based analysis etc. An auto focus feature inbuilt in the video camera can also be used for efficient face detection. The region 610 depicts another method for applying watermark on the identified face in an image.
[59] FIG. 7 illustrates grayscale watermarking performed on the captured video call. Gray scale watermarking is applied for low-end processing mobile devices. Further, gray scale watermarking is performed if the network bandwidth is low. Gray scale watermarking utilises amplitude modulation for watermarking the captured video. A gray scale image in the captured video call is split into a number of bit plane layers. In FIG. 7 the gray scale image is split into 8 bit plane layers, from bit plane 0 to bit plane 7. Further, the mid bit plane among the layers is identified, and the watermark payload information is modulated on the mid bit plane.
The equation for embedding gray scale watermarking is as follows:
p(x,y) = I(x,y) + w(x,y) (Equation 2)
where p(x, y) is the watermarked video, I(x, y) is the original video and w(x, y) is the watermark payload.
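The bit-plane modulation described above can be sketched as follows. This is an illustrative sketch: the choice of plane 4 as "mid bit plane" of an 8-bit image and the toy payload are assumptions, not values specified by the description.

```python
import numpy as np

def embed_bitplane_watermark(gray, payload_bits, plane=4):
    """Embed a binary payload on one bit plane of an 8-bit grayscale frame.

    gray         : 2-D uint8 array (grayscale frame)
    payload_bits : 2-D array of 0/1 bits of the same shape
    plane        : index of the bit plane to modulate (mid plane assumed = 4)
    """
    mask = np.uint8(~(1 << plane) & 0xFF)
    cleared = gray & mask                            # zero out the chosen plane
    return cleared | (payload_bits.astype(np.uint8) << plane)

def extract_bitplane(gray, plane=4):
    """Recover the bits stored on the chosen plane."""
    return (gray >> plane) & 1

gray = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy grayscale frame
bits = (np.arange(64).reshape(8, 8) % 2).astype(np.uint8)
marked = embed_bitplane_watermark(gray, bits)
recovered = extract_bitplane(marked)
```

Modulating a mid plane rather than the most significant plane keeps the visual change small while surviving light processing better than the lowest planes.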
[60] FIG. 8 illustrates the flow diagram for efficiently detecting a pattern watermark embedded on a video content. In one example, watermark detection is performed when a person takes snapshots or records the video call without the permission of the initiating party of the video call. When a call is initiated from a first mobile device to a second mobile device, there exists a possibility that a person can record the video content from the second mobile device with a camera. If the video content is watermarked with an invisible watermark before transmission from the first mobile device, the offender can be traced. The tracing of the offender recording the video content from the second mobile device can be done by detecting the invisible watermark in any copy of the recorded video content.
[61] At step 805, an averaged data is received by a watermark detector. The averaged data includes the watermarked data.
[62] At step 810, a high frequency component is extracted from the received averaged data. The high frequency component contains the watermark payload embedded in the original video content. Once the high frequency component is extracted, the watermark payload can be estimated.
The estimated watermark payload is given by the following equation:
w(x,y) = p(x,y)-p'(x,y) (Equation 3)
where w(x, y) is the estimated watermark payload, p(x, y) is the received signal, and p'(x, y) is the estimated original signal.
[63] At step 815, the auto-correlation function (ACF) of the extracted watermark is calculated. The ACF of a signal can be expressed as the convolution of the signal and its geometric inverse form. Thus the Fast Fourier Transform (FFT) and the Inverse Fast Fourier Transform (IFFT) can be used to calculate the ACF to reduce computation time.
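Steps 810 and 815 can be sketched together as follows. The 3×3 mean filter used to estimate the original signal p'(x, y) in Equation 3 is an assumption; any low-pass estimate would serve, and the toy diagonal input is purely illustrative.

```python
import numpy as np

def estimate_watermark(received):
    """Equation 3: w(x,y) = p(x,y) - p'(x,y).

    The original signal p'(x,y) is estimated here with a 3x3 mean
    filter (an assumed low-pass estimate); the residual is the
    high frequency component carrying the watermark payload.
    """
    p = received.astype(np.float64)
    padded = np.pad(p, 1, mode='edge')
    h, w = p.shape
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return p - smooth

def acf_via_fft(signal):
    """Step 815: the ACF computed via FFT/IFFT, i.e.
    IFFT(FFT(s) * conj(FFT(s))), instead of direct correlation."""
    spectrum = np.fft.fft2(signal)
    return np.real(np.fft.ifft2(spectrum * np.conj(spectrum)))

residual = estimate_watermark(np.eye(8) * 50.0)  # toy received frame
acf = acf_via_fft(residual)
```

The zero-lag entry of the ACF is always the largest, which is why the subsequent peak search looks for secondary maxima revealing the repeated watermark geometry.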
[64] At step 820, a reference pattern of the watermark payload is generated.
The reference pattern is generated with the same unique identification details used in embedding procedure. The unique identification detail includes but is not limited to International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel 50 data further includes the regional information of the mobile device.
[65] At step 825, the peak value of the signal is estimated. The extracted watermark payload is restored to its original geometry using the peak pattern. The peak value is detected in two steps. First, the local maxima in the ACF are determined: a small window slides over the entire ACF, and the local maximum in each window is selected. This pre-processing removes high correlation values that are not genuine peaks. After the pre-processing, the original peak value is obtained by a peak detector from the remaining peaks.
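The two-step peak detection of step 825 can be sketched as follows. The 3×3 window size and the toy ACF values are assumptions for illustration.

```python
import numpy as np

def local_peaks(acf, win=3):
    """Pre-processing: slide a small window over the ACF and keep only
    values that are the maximum of their window, discarding high
    correlation values that are not genuine peaks."""
    h, w = acf.shape
    r = win // 2
    peaks = []
    for y in range(h):
        for x in range(w):
            window = acf[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            if acf[y, x] == window.max() and acf[y, x] > 0:
                peaks.append((y, x, float(acf[y, x])))
    return peaks

def detect_peak(acf, win=3):
    """Peak detector: the strongest of the surviving local maxima."""
    return max(local_peaks(acf, win), key=lambda t: t[2])

acf = np.zeros((5, 5))     # toy ACF with two local maxima
acf[1, 3] = 7.0
acf[3, 0] = 5.0
peak = detect_peak(acf)
```

A real detector would also exclude the trivial zero-lag peak before searching, since only the secondary peaks reveal the embedded geometry.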
[66] At step 830, the geometry pattern is extracted to compensate the geometric distortions, which include but are not limited to translation, rotation, misalignment, scaling, cropping and tilt. The extracted geometry pattern is utilised for compensating the geometric distortions in the estimated watermark signal. The extracted payload pattern is compared against the stored payload patterns to identify the content of the payload in the video.
[67] FIG. 9 illustrates the flow diagram for detecting the watermark in gray scale watermarking. In gray scale watermarking the watermark payload is embedded directly on the mid bit plane of the video signal.
[68] At step 905, the mid bit plane which is the watermarked plane in the video is extracted. The watermarked plane is known to the receiver.
[69] At step 910, the high frequency signal is extracted from the mid bit plane or the watermarked plane identified at step 905. The high frequency component of a video contains the watermarked signal embedded in the original video.
[70] At step 915, the watermark payload is extracted from the high frequency component of the video signal. The unique details embedded in the watermark can be identified by comparing the watermark payload with the stored pattern.
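Steps 905 to 915 can be sketched as follows. Matching by fraction of agreeing bits and the "deviceA"/"deviceB" pattern identifiers are illustrative assumptions; plane 4 is assumed to be the watermarked mid bit plane known to the receiver.

```python
import numpy as np

def detect_grayscale_watermark(frame, stored_patterns, plane=4):
    """Extract the watermarked (mid) bit plane and compare it against
    stored payload patterns; returns the identifier of the best match
    and the fraction of matching bits."""
    extracted = (frame >> plane) & 1          # step 905: watermarked plane
    best_id, best_score = None, -1.0
    for pid, pattern in stored_patterns.items():
        score = float(np.mean(extracted == pattern))  # step 915: compare
        if score > best_score:
            best_id, best_score = pid, score
    return best_id, best_score

pattern = np.array([[1, 0], [0, 1]], dtype=np.uint8)
frame = (pattern << 4).astype(np.uint8)       # toy frame carrying the payload
match, score = detect_grayscale_watermark(
    frame, {"deviceA": pattern, "deviceB": 1 - pattern})
```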
[71] FIG. 10 is a flow diagram illustrating the steps of embedding watermark in a video call.
[72] At step 1005, a video signal to be watermarked is received from a camera module. In the present invention, watermarking is done in real time when a video is captured by the camera module.
[73] At step 1010, the analysis of the received video signal is performed. In one embodiment of the invention the received video signal is transport stream data which includes I frames, B frames and P frames. The I frame, or intra-coded frame, is extracted from the transport stream data.
[74] At step 1015, the analysed video signal is organized for selective skin segmentation.
[75] At step 1020, skin colour segmentation is performed on the organized signal. Skin colour segmentation is the process of discriminating between skin and non-skin pixels. The first step of skin colour segmentation is to choose a suitable colour space; the RGB and HSV colour spaces are the most commonly used colour spaces for video tracking and surveillance. Further, different skin modelling techniques are applied to model the distribution of skin and non-skin colour pixels. Skin segmentation is performed to reduce the regions of watermarking in a video while maintaining the privacy of the video. Further, face detection is also performed on the organised data.
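A minimal HSV-based skin classifier can be sketched as follows. The threshold values are illustrative assumptions only; as noted above, real skin models are built from trained skin/non-skin distributions rather than fixed cut-offs.

```python
import colorsys

def is_skin_pixel(r, g, b):
    """Classify one RGB pixel as skin or non-skin using simple HSV
    thresholds (threshold values are assumptions for illustration).

    Skin tones tend to cluster at low hue with moderate saturation
    and sufficient brightness.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h < 0.14 or h > 0.95) and 0.15 < s < 0.8 and v > 0.35

skin = is_skin_pixel(224, 172, 105)   # a typical skin tone
sky = is_skin_pixel(0, 0, 255)        # saturated blue, not skin
```

HSV is convenient here because separating hue from brightness makes the skin cluster more compact than in RGB.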
[76] At step 1025, the component of the received signal which is to be watermarked is predicted based on skin colour segmentation and face detection. If there are multiple faces in a video signal, the faces to be watermarked are also identified. The faces in a video to be watermarked are selected based on the dominance and size of each face. The dominant face is determined among multiple faces in the captured video. The size of the dominant face is compared with the other faces. If the difference in size of dominant face is greater than a threshold percentage then the dominant face is watermarked. If the difference in size of dominant face compared to other faces is less than a threshold percentage, then all the other faces are watermarked along with the dominant face.
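The face-selection rule of step 1025 can be sketched as follows. The 30% threshold and the use of bounding-box areas as the "size" of a face are assumptions; the description leaves the threshold percentage unspecified.

```python
def faces_to_watermark(face_sizes, threshold_pct=30.0):
    """Select which detected faces to watermark (step 1025).

    face_sizes    : list of face region sizes, e.g. bounding-box areas
    threshold_pct : size-difference threshold (assumed value)

    If the dominant face exceeds every other face by more than the
    threshold percentage, only the dominant face is watermarked;
    otherwise all faces are watermarked along with the dominant face.
    """
    if not face_sizes:
        return []
    ordered = sorted(face_sizes, reverse=True)
    dominant, rest = ordered[0], ordered[1:]
    if all((dominant - s) / dominant * 100.0 > threshold_pct for s in rest):
        return [dominant]       # only the dominant face
    return face_sizes           # all faces, dominant included

only_dominant = faces_to_watermark([100, 50, 40])
everyone = faces_to_watermark([100, 90])
```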
[77] At step 1030, the unique identification details of the mobile device and the network are identified. The unique identification details include but are not limited to the International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data. The channel-50 data further includes the regional information of the mobile device.
[78] At step 1035, the watermark payload is generated using the unique identification details. The details to be embedded on the video signal are selected and fed to a payload pattern generator. The geometry in which the watermark is applied is generated by a geometry pattern generator. Using the payload pattern and the geometry pattern, a watermark is generated.
[79] At step 1040, the watermark payload is embedded on the received signal by at least one of geometry pattern watermarking and payload pattern watermarking based on the processing capability and network bandwidth.
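The selection between the two watermarking modes at step 1040 (mirrored later in claim 16) can be sketched as follows. The capability and bandwidth scores and their thresholds are hypothetical; the description does not define how "low end" or "low bandwidth" is measured.

```python
def choose_watermark_method(device_capability, bandwidth,
                            cap_threshold=1.0, bw_threshold=1.0):
    """Pick the watermarking mode for step 1040.

    device_capability, bandwidth : assumed numeric scores
    cap_threshold, bw_threshold  : assumed cut-offs for "low end" /
                                   "low bandwidth"

    Gray scale watermarking is the lighter method, used for low-end
    devices or low bandwidth; pattern watermarking otherwise.
    """
    if device_capability < cap_threshold or bandwidth < bw_threshold:
        return "gray_scale"
    return "pattern"

low_end = choose_watermark_method(0.5, 2.0)
high_end = choose_watermark_method(2.0, 2.0)
```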
[80] Advantageously, the embodiments specified in the present disclosure provide effective and robust security for video content in a video call. The system provides a method to analyse and adapt to the current capability of the environment. The intelligence of the system enables it to identify the device processing capability among the parties involved in communication and the network bandwidth, in order to automatically switch between the watermarking method and the place of watermarking, which includes but is not limited to the source, the destination or the service provider. The system provides efficient utilization of bandwidth. Further, the method involves selective skin segmentation and face detection, which reduces the overheads in watermarking.
[81] Moreover, the system provides an efficient method to trace an offender who takes snapshots or records the video call without the permission of the initiating party of the video call.
[82] In the preceding specification, the present disclosure and its advantages have been described with reference to specific embodiments. However, it will be apparent to a person of ordinary skill in the art that various modifications and changes can be made, without departing from the scope of the present disclosure, as set forth in the claims below. Accordingly, the specification and figures are to be regarded as illustrative examples of the present disclosure, rather than in a restrictive sense. All such possible modifications are intended to be included within the scope of the present disclosure.
I/We claim:
1. A method of authenticating a video content during a video call, the method comprising:
initiating the video call from a first mobile device to a second mobile device;
capturing the video call, wherein capturing is performed real time;
generating a watermark payload from unique identification details of at least one of the first mobile device and the second mobile device;
applying the watermark payload on the video content; and
transmitting watermarked video content from the first mobile device to the second mobile device.
2. The method as claimed in claim 1, wherein applying the watermark payload comprises applying the watermark payload on at least one of skin, face, and body parts in the video content.
3. The method as claimed in claim 1 further comprising detecting an invisible watermark to trace an offender recording the video content from the second mobile device.
4. The method as claimed in claim 3, wherein detecting the invisible watermark comprises a step of subtracting an estimated original video content from the watermarked video content.
5. The method as claimed in claim 1, wherein applying the watermark payload on the video content comprises:
analysing the video call to predict components of the video content to be watermarked;
determining at least one of processing capability and network bandwidth to decide if watermarking is to be performed at one of the first mobile device, the second mobile device, and a service provider; and
applying at least one of gray scale watermark and pattern watermark to the video content based on the processing capability and the network bandwidth.
6. The method as claimed in claim 5, wherein analysing the video call comprises:
applying auto focus feature for face detection;
extracting I-frames from the video call;
applying skin segmentation on colour regions of the I-frames; and
generating a vector from skin segmentation and face detection to identify the components of the video content to be watermarked.
7. The method as claimed in claim 1, wherein the unique identification details comprise one or more of an International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data, wherein the channel-50 data is regional information of a mobile device.
8. The method as claimed in claim 1 further comprising:
notifying the first mobile device if watermarking is disabled at the second mobile device; and
notifying the second mobile device if watermarking is disabled at the first mobile device.
9. A method of watermarking a video content during a video call, the method comprising:
initiating the video call from a first mobile device to a second mobile device, wherein the first mobile device initiates the video call via a service provider;
capturing the video call by at least one of the first mobile device and the second mobile device, wherein capturing is performed real time;
analysing captured video call to predict components of the video content to be watermarked;
generating a watermark payload based on unique identification details of at least one of the first mobile device and the second mobile device;
determining at least one of processing capability and network bandwidth to decide if watermarking is to be performed at one of the first mobile device, the second mobile device, and the service provider; and
applying at least one of gray scale watermark and pattern watermark to the video content based on the processing capability and the network bandwidth.
10. The method as claimed in claim 9, wherein applying at least one of the gray scale watermark and the pattern watermark comprises applying the gray scale watermark and the pattern watermark on at least one of skin, face, and body parts in the video content.
11. The method as claimed in claim 9 further comprising detecting an invisible watermark to trace an offender recording the video content from the second mobile device.
12. The method as claimed in claim 11 wherein detecting the invisible watermark comprises a step of subtracting an estimated original video content from a watermarked video content.
13. The method as claimed in claim 9, wherein analysing the captured video call comprises:
applying auto focus feature for face detection;
extracting I-frames from the captured video call;
applying skin segmentation on colour regions of the I-frames; and
generating a vector from skin segmentation and face detection to identify the components of the video content to be watermarked.
14. The method as claimed in claim 9, wherein the unique identification details comprise one or more of an International Mobile Station Equipment Identity [IMEI] number, phone number, and channel-50 data, wherein the channel-50 data is regional information of a mobile device.
15. The method as claimed in claim 9, wherein watermarking is performed at the service provider, if the processing capability of the first mobile device and the second mobile device is below a predetermined value required for watermarking.
16. The method as claimed in claim 9, wherein applying at least one of the gray scale watermark and the pattern watermark comprises:
applying the gray scale watermark if a mobile device is a low end processing device;
applying the gray scale watermark if the network bandwidth is low; and
applying the pattern watermark if the mobile device is a high end processing device.
17. The method as claimed in claim 9, wherein applying the gray scale watermark comprises a step of amplitude modulation, wherein the step of amplitude modulation comprises modulating the captured video call with the watermark payload.
18. The method as claimed in claim 9, wherein the watermark payload is generated from the unique identification details by at least one of geometry pattern generator and payload pattern generator.
19. The method as claimed in claim 9 further comprising:
notifying the first mobile device if watermarking is disabled at the second mobile device; and
notifying the second mobile device if watermarking is disabled at the first mobile device.
20. A mobile device for watermarking a video content during a video call, the mobile device comprising:
a camera module to capture the video call, wherein capturing of the video call is performed real time;
an intelligent processing module coupled to the camera module to receive a captured video call, wherein the intelligent processing module comprises:
an analysing module to analyse the captured video call and to predict components of the video content to be watermarked;
an identification module to determine at least one of processing capability and network bandwidth and for selecting a mode of watermark to be applied on the video content; and
a decision module to decide if watermarking is to be performed at one of a first mobile device, a second mobile device, and a service provider based on at least one of the processing capability and the network bandwidth;
a watermark generator to generate a watermark payload from unique identification details of at least one of the first mobile device and the second mobile device; and
an encoder to embed the watermark payload on the video content.
21. The mobile device as claimed in claim 20, wherein the mode of watermark comprises at least one of gray scale watermark and pattern watermark.