
Method And System Of Standardizing Media Content For Channel Agnostic Detection Of Television Advertisements

Abstract: The present disclosure provides a method and system for standardizing media content for channel agnostic detection of television advertisements in real time. The computer-implemented method includes a derivation of one or more characteristics corresponding to one or more features. The one or more features are associated with a media content for each channel of the plurality of channels. Further, the computer-implemented method includes a trimming of a pre-defined percentage of area in each frame of the media content. The trimming of the pre-defined percentage of area is performed based on the one or more characteristics corresponding to the one or more features associated with the media content. Further, the computer-implemented method includes a detection of the one or more advertisements broadcasted across the plurality of channels in the real time.


Patent Information

Application #
Filing Date
09 March 2016
Publication Number
37/2017
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
nishantk@ediplis.com
Parent Application

Applicants

Silveredge Technologies Pvt. Ltd.
Plot No. 131, 2nd Floor, Sector 44, Gurgaon

Inventors

1. Debasish Mitra
Plot No. 131, 2nd Floor, Sector 44, Gurgaon 122002, Haryana
2. Hitesh Chawla
G1701, Bestech Park View Spa, Sector 47, Gurgaon – 122002, Haryana

Specification

TECHNICAL FIELD
 The present invention relates to the field of digital fingerprinting of media content and, in particular, relates to the standardization of media content for channel agnostic detection of television advertisements.
BACKGROUND
 Over the last few years, many new television channels have been launched. These television channels broadcast media content for the viewers. Each channel is differentiated or recognized by its own unique logo, which makes it easy for the viewers to recognize the channel. In addition, some channels also contain tickers displayed dynamically in real time during a television broadcast of the media content. The broadcasted media content is identified by its digital fingerprints. The digital fingerprints extracted for an advertisement on one channel differ from the digital fingerprints extracted for the same advertisement broadcasted on another channel. This difference in the digital fingerprints of the same advertisement broadcasted on different channels is due to the presence or absence of different logos associated with each channel, the presence or absence of tickers in the channels and the like. The mismatch in digital fingerprints of the same advertisement across multiple channels results in troll detection of the same advertisement across multiple channels. The media content is standardized for negating troll detection of same advertisements broadcasted across multiple channels.
 These advertisements can be primarily detected through an unsupervised machine learning based approach and a supervised machine learning based approach. The unsupervised machine learning based approach focuses on detection of advertisements by extracting and analyzing digital fingerprints of each advertisement. Similarly, the supervised machine learning based approach focuses on mapping and matching digital fingerprints of each advertisement with a known set of digital fingerprints of the corresponding advertisement.
 Several systems and methods are currently available which perform detection of the advertisements broadcasted across the channels. In US patent application US 11/067,003, a method and a system for specifying regions of interest for video event detection are presented. The method includes receiving a video stream and identifying a region of interest in the video stream. The region of interest is a portion of at least one image of the video stream. The region of interest in the video stream is analyzed to detect a video event in the region of interest.
 In another US patent application, US 11/067,606, a method and a system for detecting a known video entity within a video stream are presented. The method includes receiving a video stream and continually creating statistical parameterized representations for windows of the video stream. The statistical parameterized representation windows are continually compared to windows of a plurality of fingerprints. Each of the plurality of fingerprints includes associated statistical parameterized representations of a known video entity. A known video entity in the video stream is detected when a particular fingerprint of the plurality of fingerprints has at least a threshold level of similarity with the video stream.
 In yet another US patent application, US 13/832,083, methods and systems for providing broadcast ad identification are presented. The methods include the steps of: providing fingerprint signatures of each frame in a broadcast video; and designating at least two repeat fingerprint signatures upon detecting at least one fingerprint-signature match from the signatures. Preferably, the methods further include, prior to the designating, determining whether the fingerprint signatures correspond to a known ad based upon detecting at least one fingerprint-signature match of the fingerprint signatures with pre-indexed fingerprint signatures of pre-indexed ads. Preferably, the methods further include: creating segments of the fingerprint signatures, ordered according to a timeline temporal proximity of the fingerprint signatures, by grouping at least two fingerprint signatures based on a
repeat temporal proximity of at least two repeat fingerprint signatures respective of the at least two fingerprint signatures. Preferably, the methods further include detecting at least one ad candidate based on an occurrence of at least one repeat segment.
 The present systems and methods have several disadvantages. In the prior arts, the focus is on supervised detection of repeated advertisements. These prior arts extract digital fingerprints without taking into consideration the standardization of the media content broadcasted across the channels. In addition, these prior arts extract the digital fingerprints over the entire area of the frame, including the media content broadcasted and the logos or tickers displayed on the channel. Furthermore, these prior arts do not take into account the accuracy of matching the digital fingerprints of the advertisement broadcasted on the different channels. In contrast, the present disclosure accurately extracts the digital fingerprints associated with the advertisement broadcasted across the channels. These methods and systems are either not able to detect advertisements or imperfectly determine any new advertisements. In addition, these prior arts lack the precision and accuracy to differentiate programs from advertisements. These prior arts lack any approach and technique for unsupervised detection of any new advertisements.
 In light of the above stated discussion, there is a need for a method and system which overcomes the above stated disadvantages.
SUMMARY
 In an aspect, the present disclosure provides a computer-implemented method for standardizing media content for channel agnostic detection of television advertisements in real time. The computer-implemented method includes a derivation of one or more characteristics corresponding to one or more features. The one or more features are associated with a media content for each channel of a plurality of channels. Further, the computer-implemented method
includes a trimming of a pre-defined percentage of area in each frame of the media content. The trimming of the pre-defined percentage of area is performed based on the one or more characteristics corresponding to the one or more features associated with the media content. Further, the computer-implemented method includes a detection of the one or more advertisements broadcasted across the plurality of channels in the real time.
 In an embodiment of the present disclosure, the one or more features associated with the channel include a logo associated with the channel and a ticker associated with the channel.
 In an embodiment of the present disclosure, the one or more characteristics include a first set of characteristics associated with a logo of the channel and a second set of characteristics associated with a ticker associated with the channel. The first set of characteristics includes a pre-defined height of the logo, a pre-defined width of the logo and a pre-defined position of the logo. In addition, the second set of characteristics includes a pre-defined height of the ticker, a pre-defined width of the ticker and a pre-defined position of the ticker.
 In an embodiment of the present disclosure, the pre-defined percentage of area in each frame is trimmed to a pre-defined scale, wherein the pre-defined scale of each frame is 640 x 480.
 In an embodiment of the present disclosure, the pre-defined percentage of area is 30%.
 In an embodiment of the present disclosure, the computer-implemented method further includes a normalization of each frame of a video corresponding to a broadcasted media content on the channel. The normalization of each frame is done based on a histogram normalization and a histogram
equalization. Moreover, the normalization of each frame is done by adjusting luminous intensity value of each pixel to a desired luminous intensity value.
 In an embodiment of the present disclosure, the computer-implemented method further includes an extraction of a first set of audio fingerprints and a first set of video fingerprints. The first set of audio fingerprints and the first set of video fingerprints correspond to a media content broadcasted on the channel. The first set of audio fingerprints and the first set of video fingerprints are extracted sequentially in the real time. Moreover, the extraction of the first set of video fingerprints is done by sequentially extracting one or more prominent fingerprints. The one or more prominent fingerprints correspond to the one or more prominent frames of a pre-defined number of frames present in the media content for a pre-defined interval of broadcast.
 In an embodiment of the present disclosure, the computer-implemented method further includes a generation of a set of digital signature values. The digital signature values correspond to an extracted set of video fingerprints. The generation of each digital signature value of the set of digital signature values is done by dividing each prominent frame of the one or more prominent frames into a pre-defined number of blocks. Further, each block of each prominent frame of the one or more prominent frames is gray scaled. Furthermore, the generation of each digital signature value of the set of digital signature values is done by calculating a first bit value and a second bit value for each block of the prominent frame. In addition, the generation of each digital signature value of the set of digital signature values is done by obtaining a 32 bit digital signature value corresponding to each prominent frame. Each block of the pre-defined number of blocks has a pre-defined number of pixels. The first bit value and the second bit value are calculated from a comparison of a mean and a variance for the pre-defined number of pixels in each block of the prominent frame with a corresponding mean and variance for a master frame. The corresponding mean and variance for the master frame are present in the master
database. The 32 bit digital signature value is obtained by sequentially arranging the first bit value and the second bit value for each block of the pre-defined number of blocks of the prominent frame.
 In an embodiment of the present disclosure, the first bit value and the second bit value are assigned a binary 0 when the mean and the variance for each block of the prominent frame are less than the corresponding mean and variance of each master frame.
 In another embodiment of the present disclosure, the first bit value and the second bit value are assigned a binary 1 when the mean and the variance for each block of the prominent frame are greater than the corresponding mean and variance of each master frame.
 In an embodiment of the present disclosure, the computer-implemented method further includes a detection of the one or more advertisements broadcasted on the channel. The detection of the one or more advertisements includes a supervised detection and an unsupervised detection.
 In an embodiment of the present disclosure, the unsupervised detection of the one or more advertisements is done through one or more steps. The one or more steps include a step of probabilistically matching a first pre-defined number of digital signature values of a real time broadcasted media content with a stored set of digital signature values present in the first database and the second database. The first pre-defined number of digital signature values corresponds to a pre-defined number of prominent frames. Further, the one or more steps include a step of comparison of one or more prominent frequencies and one or more prominent amplitudes of an extracted first set of audio fingerprints. The one or more steps further include a step of determination of a positive probabilistic match of the pre-defined number of prominent frames based
on a pre-defined condition. Furthermore, the one or more steps include a step of fetching of a video and an audio clip corresponding to probabilistically matched digital signature values. The one or more steps further include a step of checking for presence of the audio and the video clip manually in the master database. In addition, the one or more steps include a step of reporting positively matched digital signature values corresponding to an advertisement of the one or more advertisements in a reporting database present in the first database. The probabilistic match is performed for the set of digital signature values by utilizing a temporal recurrence algorithm.
 In an embodiment of the present disclosure, the pre-defined condition includes a pre-defined range of positive matches corresponding to probabilistically matched digital signature values and a pre-defined duration of media content corresponding to the positive match. In addition, the pre-defined condition includes a sequence and an order of the positive matches and a degree of positive match of a pre-defined range of number of bits of the first pre-defined number of signature values.
 In an embodiment of the present disclosure, the computer-implemented method further includes storage of the one or more characteristics, the first set of audio fingerprints, the first set of video fingerprints and the set of digital signature values. In addition, the storage is done in a first database and a second database.
 In an embodiment of the present disclosure, the computer-implemented method further includes an updating of the one or more characteristics, the first set of audio fingerprints, the first set of video fingerprints and the set of digital signature values. In addition, the one or more characteristics, the first set of audio fingerprints, the first set of video fingerprints and the set of digital signature values detected are updated manually in a master database.
 In an embodiment of the present disclosure, the supervised detection of the one or more advertisements is done through one or more steps. The one or more steps include a step of probabilistically matching a second pre-defined number of digital signature values corresponding to a pre-defined number of prominent frames of a real time broadcasted media content with a stored set of digital signature values. The stored set of digital signature values is present in the master database. Further, the one or more steps include a step of comparing the one or more prominent frequencies and the one or more prominent amplitudes corresponding to the extracted first set of audio fingerprints with a stored one or more prominent frequencies and a stored one or more prominent amplitudes. Furthermore, the one or more steps include a determination of the positive match in the probabilistic matching of the second pre-defined number of digital signature values with the stored set of digital signature values in the master database. In addition, the one or more steps include a step of comparing the one or more prominent frequencies and the one or more prominent amplitudes corresponding to the extracted first set of audio fingerprints with the stored one or more prominent frequencies and the stored one or more prominent amplitudes.
 In another aspect, the present disclosure provides a computer program product. The computer program product includes a non-transitory computer readable medium storing a computer readable program. The computer readable program when executed on a computer causes the computer to perform one or more steps. The one or more steps include a step of deriving one or more characteristics corresponding to one or more features. The one or more features are associated with media content for each channel of the plurality of channels. Further, the one or more steps include a step of trimming a pre-defined percentage of area in each frame of the media content. The trimming of the pre-defined percentage of area is performed based on the one or more characteristics corresponding to the one or more features associated with the media content.
Further, the one or more steps include a step of detecting the one or more advertisements broadcasted across the plurality of channels in the real time.
 In an embodiment of the present disclosure, the one or more features associated with the channel include the logo associated with the channel and the ticker associated with the channel. The one or more characteristics include the first set of characteristics associated with the logo of the channel and the second set of characteristics associated with the ticker associated with the channel. The first set of characteristics includes the pre-defined height of the logo, the pre-defined width of the logo and the pre-defined position of the logo. In addition, the second set of characteristics includes the pre-defined height of the ticker, the pre-defined width of the ticker and the pre-defined position of the ticker.
 In yet another aspect, the present disclosure provides an advertisement detection system for standardizing media content for channel agnostic detection of television advertisements. The advertisement detection system includes a derivation module in a processor. The derivation module derives the one or more characteristics corresponding to one or more features. The one or more features are associated with the media content for each channel of the plurality of channels. Further, the advertisement detection system includes a trimming module in the processor. The trimming module trims the pre-defined percentage of area in each frame of the media content. The trimming module trims based on the one or more characteristics corresponding to the one or more features associated with the media content. Further, the advertisement detection system includes a detection module in the processor. The detection module detects the one or more advertisements broadcasted across the plurality of channels in the real time.
 In an embodiment of the present disclosure, the advertisement detection system further includes a normalization module in the processor. The
normalization module normalizes each frame of the video corresponding to the broadcasted media content on the channel. The normalization module normalizes each frame based on the histogram normalization and the histogram equalization. In addition, the normalization module normalizes each frame by adjusting luminous intensity value of each pixel to the desired luminous intensity value.
BRIEF DESCRIPTION OF THE FIGURES
 Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
 FIG. 1A illustrates a system for standardizing media content for channel agnostic detection of television advertisements in real time, in accordance with various embodiments of the present disclosure;
 FIG. 1B illustrates a system for an unsupervised detection of the one or more advertisements broadcasted across the channels, in accordance with an embodiment of the present disclosure;
 FIG. 1C illustrates a system for a supervised detection of the one or more advertisements broadcasted across the channels, in accordance with another embodiment of the present disclosure;
 FIG. 2 illustrates a block diagram of an advertisement detection system, in accordance with various embodiments of the present disclosure;
 FIG. 3 illustrates a flow chart for channel feature agnostic detection of the one or more advertisements across channels, in accordance with various embodiments of the present disclosure; and
 FIG. 4 illustrates a block diagram of the portable communication device, in accordance with various embodiments of the present disclosure.
 It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION
 Reference will now be made in detail to selected embodiments of the present disclosure in conjunction with accompanying figures. The embodiments described herein are not intended to limit the scope of the disclosure, and the present disclosure should not be construed as limited to the embodiments described. This disclosure may be embodied in different forms without departing from the scope and spirit of the disclosure. It should be understood that the accompanying figures are intended and provided to illustrate embodiments of the disclosure described below and are not necessarily drawn to scale. In the drawings, like numbers refer to like elements throughout, and thicknesses and dimensions of some components may be exaggerated for providing better clarity and ease of understanding.
 It should be noted that the terms "first", "second", and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Further, the terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
 FIG. 1A illustrates a system 100 for standardizing media content for channel agnostic detection of television advertisements across a plurality of channels, in accordance with various embodiments of the present disclosure. The system 100 performs a supervised and an unsupervised detection of the one or more advertisements broadcasted across the channels in real time. In addition, the system 100 performs the detection of the one or more advertisements across the channels based on one or more characteristics of one or more features associated with the channel (described below in the patent application). Moreover, the system 100 is configured to provide a setup for the detection of the one or more advertisements.
 The system 100 includes a broadcast reception device 102, an advertisement detection system 106 and a master database 114. The above stated elements of the system 100 operate coherently and synchronously to detect the one or more advertisements present in the media content broadcasted on the channel, based on the one or more properties of the channel. The broadcast reception device 102 is a channel feed receiving and processing device. In an embodiment of the present disclosure, the broadcast reception device 102 receives media content corresponding to the broadcasted content having audio in the pre-defined regional language or the standard language. The media content corresponds to another channel. The broadcast reception device 102 is attached directly or indirectly to a receiving antenna or dish. The receiving antenna receives a broadcasted signal carrying one or more channel feeds. In an embodiment of the present disclosure, the broadcast reception device 102 receives media content corresponding to the broadcasted content having audio in the pre-defined regional language or the standard language. The media content corresponds to the channel of the one or more channels 104. In an embodiment of the present disclosure, the receiving antenna receives the broadcast signal carrying a live feed associated with each of one or more channels. The one or more channel feeds are encoded in a pre-defined format. In addition, the one or more channel feeds have a set of characteristics. The set of characteristics includes a frame rate, an audio sample rate, one or more frequencies and the like.
 The broadcasted signal carrying the one or more channel feeds is initially transmitted from a transmission device. In an embodiment of the present disclosure, the broadcasted signal carrying the one or more channel feeds is a multiplexed MPEG-2 encoded signal having a constant bit rate. In another embodiment of the present disclosure, the broadcasted signal carrying the one or more channel feeds is a multiplexed MPEG-2 encoded signal having a variable bit rate. In yet another embodiment of the present disclosure, the broadcasted signal
carrying the one or more channel feeds is any digital standard encoded signal. The bit rate is based on the complexity of each frame in each of the one or more channel feeds. The quality of the multiplexed MPEG-2 encoded signal will be reduced when the broadcasted signal is too complex to be coded at a constant bit-rate. The bit rate of the variable bit-rate MPEG-2 streams is adjusted dynamically, as less bandwidth is needed to encode the images with a given picture quality. In addition, the broadcasted signal is encrypted for a conditional access to a particular subscriber. The encrypted broadcast signal is uniquely decoded by the broadcast reception device 102.
 In an example, a digital TV signal is received on the broadcast reception device 102 as a stream of MPEG-2 data. The MPEG-2 data has a transport stream. The transport stream has a data rate of 40 megabits/second for a cable or satellite network. Each transport stream consists of a set of sub-streams. The set of sub-streams is defined as elementary streams. Each elementary stream includes an MPEG-2 encoded audio, an MPEG-2 encoded video and data encapsulated in an MPEG-2 stream. In addition, each elementary stream includes a packet identifier (hereinafter "PID") that acts as a unique identifier for the corresponding elementary stream within the transport stream. The elementary streams are split into packets in order to obtain a packetized elementary stream (hereinafter "PES").
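 For illustration only, the elementary stream structure described above can be sketched in a few lines of Python. The snippet below is not part of the disclosed system; it assumes a raw capture made of plain 188 byte transport stream packets (the file name is hypothetical) and extracts the 13 bit PID carried in the header of each packet.

```python
# Illustrative sketch: reading packet identifiers (PIDs) from an MPEG-2
# transport stream capture. Assumes plain 188-byte packets; real streams may
# use 192/204-byte packets or need resynchronization. "capture.ts" is a
# hypothetical file name.
from collections import Counter

TS_PACKET_SIZE = 188   # standard MPEG-2 transport stream packet length
SYNC_BYTE = 0x47       # every packet starts with this sync byte

def iter_pids(path):
    """Yield the 13-bit PID of each transport stream packet in the file."""
    with open(path, "rb") as stream:
        while True:
            packet = stream.read(TS_PACKET_SIZE)
            if len(packet) < TS_PACKET_SIZE:
                break
            if packet[0] != SYNC_BYTE:
                continue  # skip packets that are out of sync
            # PID = low 5 bits of byte 1 followed by all 8 bits of byte 2.
            yield ((packet[1] & 0x1F) << 8) | packet[2]

if __name__ == "__main__":
    print(Counter(iter_pids("capture.ts")).most_common(5))
```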
 In an embodiment of the present disclosure, the broadcast reception device 102 is a digital set top box. In another embodiment of the present disclosure, the broadcast reception device 102 is a hybrid set top box. In yet another embodiment of the present disclosure, the broadcast reception device 102 is an internet protocol television (hereinafter IPTV) set top box. In yet another embodiment of the present disclosure, the broadcast reception device 102 is any standard broadcast signal processing device. Moreover, the broadcast reception device 102 may receive the broadcast signal from any broadcast signal medium.
 In an embodiment of the present disclosure, the broadcast signal medium is an ethernet cable. In another embodiment of the present disclosure, the broadcast signal medium is a satellite dish. In yet another embodiment of the present disclosure, the broadcast signal medium is a coaxial cable. In yet another embodiment of the present disclosure, the broadcast signal medium is a telephone line having a DSL connection. In yet another embodiment of the present disclosure, the broadcast signal medium is a broadband over power line (hereinafter "BPL"). In yet another embodiment of the present disclosure, the broadcast signal medium is an ordinary VHF or UHF antenna.
 The broadcast reception device 102 primarily includes a signal input port, an audio output port, a video output port, a de-multiplexer, a video decoder, an audio decoder and a graphics engine. The broadcast signal carrying the one or more channel feeds is received at the signal input port. The broadcast signal carrying the one or more channel feeds is de-multiplexed by the de-multiplexer. The video decoder decodes the encoded video and the audio decoder decodes the encoded audio. The video and audio correspond to a channel selected in the broadcast reception device 102. In general, the broadcast reception device 102 carries the one or more channel feeds multiplexed to form a single transport stream. The broadcast reception device 102 can decode only one channel in real time.
 Further, the decoded audio and the decoded video are received at the audio output port and the video output port. Further, the decoded video has a first set of features. The first set of features includes a frame height, a frame width, a frame rate, a video resolution, an aspect ratio, a bit rate and the like. Moreover, the decoded audio has a second set of features. The second set of features includes a sample rate, a bit rate, a bin size, one or more data points, one or more prominent frequencies and one or more prominent amplitudes. Further, the decoded video may be of any standard quality. In an embodiment of the present disclosure, the decoded video signal is a 144p signal. In another embodiment of
the present disclosure, the decoded video signal is a 240p signal. In yet another embodiment of the present disclosure, the decoded video signal is a 360p signal. In yet another embodiment of the present disclosure, the decoded video signal is a 480p signal. In yet another embodiment of the present disclosure, the decoded video signal is a 720p video signal. In yet another embodiment of the present disclosure, the decoded video signal is a 1080p video signal. In yet another embodiment of the present disclosure, the decoded video signal is a 1080i video signal. In yet another embodiment of the present disclosure, the decoded video signal is a 1440p video signal. In yet another embodiment of the present disclosure, the decoded video signal is a 2160p video signal. Here, p and i denote progressive scan and interlace scan techniques, respectively.
 Further, the decoded video and the decoded audio (hereinafter "media content") are transferred to the advertisement detection system 106 through a transfer medium. The transfer medium can be a wireless medium or a wired medium. Moreover, the media content includes one or more television programs, the one or more advertisements, one or more channel related data, subscription related data, operator messages and the like. In an embodiment of the present disclosure, the media content broadcasted on the channel of the one or more channels 104 uses a pre-defined regional language in the audio. In another embodiment of the present disclosure, the media content broadcasted on the channel of the one or more channels 104 uses a standard language accepted nationally. The media content has a pre-defined frame rate, a pre-defined number of frames and a pre-defined bit rate for a pre-defined interval of broadcast.
 Further, the broadcast reception device 102 broadcasts the one or more channels 104 on a user end device. The user end device is connected to the broadcast reception device 102. In addition, the connection is done through one or more cables. The one or more cables connect corresponding one or more ports on the user end device with corresponding one or more ports on the broadcast reception device 102. The end user device is any device capable of allowing one
or more users to access the one or more channels for watching media content in real time. In an embodiment of the present disclosure, the end user device includes a CRT television, an LED television, an LCD television, a plasma television and the like. In another embodiment of the present disclosure, the end user device is an internet connected television.
 Furthermore, each of the one or more channels may be any type of channel of various types of channels. The various types of channels include sports channels, movie channels, news channels, regional channels, music channels and various other types of channels. The broadcast reception device 102 is associated with a media content broadcast enabler. The media content broadcast enabler provides the broadcast reception device 102 to the one or more users. In an embodiment of the present disclosure, the media content broadcast enabler provides the broadcast reception device 102 for allowing the one or more users to access and view the media content on the corresponding user end device. In an embodiment of the present disclosure, the media content broadcast enabler is associated with a company or an organization employed in construction and distribution of a plurality of broadcast reception devices.
 In an embodiment of the present disclosure, the media content broadcast enabler acts as a third party interface for distributing the broadcast reception device 102 to the corresponding one or more users. Moreover, the media content broadcast enabler includes but may not be limited to a DTH (Direct to Home) provider, an STB (set top box) provider, a cable TV provider and the like. In an embodiment of the present disclosure, the media content broadcast enabler is located in a vicinity of the one or more users. In an embodiment of the present disclosure, the media content broadcast enabler is enabled to provide one or more media broadcasting services to the one or more users. In an embodiment of the present disclosure, the media content broadcast enabler is allotted a pre-defined range or area for providing the one or more media broadcasting services to the one or more users located or living in the pre-defined range or area.
 Moreover, the media content broadcast enabler provides the one or more media content broadcasting services based on a subscription plan bought by the one or more users. The subscription plan corresponds to a plan from a pre-defined set of plans set by the media content broadcasting enabler and chosen by the one or more users. In an embodiment of the present disclosure, the subscription plan includes a pre-defined list of channels and a pre-determined amount of money for availing the subscription plan.
 In an embodiment of the present disclosure, the one or more users pay the pre-determined amount of money on a regular basis to the media content broadcasting enabler for availing the subscription plan. In an embodiment of the present disclosure, the one or more users avail the one or more media broadcasting services of the same media service provider (the media content broadcasting enabler). In an embodiment of the present disclosure, the media content broadcasting enabler stores information of the one or more users in a server. In an embodiment of the present disclosure, the media content broadcast enabler maintains the server.
 Further, each of the one or more channels 104 is associated with one or more features. The one or more features are associated with the media content of the channel of the one or more channels 104. The one or more features include a logo associated with the channel and a ticker associated with the channel. Each channel of the one or more channels 104 has a unique logo. In general, the logo of a channel represents an identity of the channel. In addition, the logo represents a unique name of the channel. The unique name is written in a graphical format. In an embodiment of the present disclosure, the logo is a unique identification for the channel. In an embodiment of the present disclosure, the logo of each of the one or more channels 104 appears on each video frame of the media content broadcasted on the one or more channels 104. In another embodiment of the
present disclosure, the logo appears on some video frames during the broadcasting of the media content on the one or more channels 104.
 Furthermore, the ticker is a primarily horizontal text-based feature displayed on the screen of the channel of the one or more channels 104. In an embodiment of the present disclosure, the ticker is displayed in the graphical format residing in a unique region of the screen of the channel of the one or more channels 104. In another embodiment of the present disclosure, the ticker is displayed as a network of a long and thin scoreboard-style display presenting headlines, minor pieces of public information and the like.
 In an embodiment of the present disclosure, the tickers are displayed as a plurality of scrolling text running from right to left across the screen of the channel. In another embodiment of the present disclosure, the tickers are displayed as the plurality of scrolling text running from left to right across the screen of the channel. In another embodiment of the present disclosure, the tickers are displayed in a static manner utilizing a flipping effect. The flipping effect allows each individual headline of one or more headlines to be displayed on the screen of the channel for a pre-defined time duration before transitioning to the next headline. In an example of news channel X, a headline Y is displayed on the screen for 5 seconds before a headline Z is displayed on the screen of the news channel X.
 Going further, the advertisement detection system 106 includes a first processing unit 108 and a second processing unit 110. The advertisement detection system 106 has a built-in media splitter configured to copy and transmit the media content synchronously to the first processing unit 108 and the second processing unit 110 in the real time. The first processing unit 108 includes a first central processing unit and associated peripherals for unsupervised detection of
the one or more advertisements (also shown in FIG. 1B). The first processing unit 108 is connected to a first database 108a.
 The first processing unit 108 is programmed to perform normalization of each frame of a video corresponding to the media content broadcasted across the channels. The first processing unit 108 normalizes each frame of the video based on histogram normalization. In addition, the first processing unit 108 normalizes each frame of the video based on histogram equalization. Moreover, the first processing unit 108 normalizes each frame by adjusting the luminous intensity value of each pixel to a desired luminous intensity value. For example, if an original luminous intensity range of any frame E of the video is 30-200 and the desired luminous intensity range is 0-255, the first processing unit 108 automatically adjusts the original luminous intensity range by subtracting 30 from the luminous intensity value associated with the original luminous intensity range of each pixel. An intermediate luminous intensity range obtained by the histogram normalization is 0-170. In addition, the first processing unit 108 multiplies the luminous intensity value of each pixel associated with the intermediate luminous intensity range by 255/170 to obtain the desired luminous intensity range of 0-255.
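 The normalization described in the example above may be sketched as follows, assuming each frame is available as an 8 bit grayscale NumPy array; the helper name and the use of NumPy are illustrative and not part of the disclosure.

```python
# Minimal sketch of histogram normalization: linearly stretch the luminous
# intensity range of a frame to a desired range (here 0-255).
import numpy as np

def normalize_frame(frame, desired_min=0, desired_max=255):
    """Histogram-normalize a frame by stretching its intensity range."""
    frame = frame.astype(np.float32)
    orig_min, orig_max = frame.min(), frame.max()
    if orig_max == orig_min:
        return np.full_like(frame, desired_min, dtype=np.uint8)
    # Shift so the lowest intensity becomes 0 (e.g. 30-200 -> 0-170) ...
    shifted = frame - orig_min
    # ... then scale to the desired span (e.g. multiply by 255/170 -> 0-255).
    scaled = shifted * (desired_max - desired_min) / (orig_max - orig_min)
    return (scaled + desired_min).astype(np.uint8)

# Example mirroring the text: a frame whose intensities span 30-200.
frame = np.random.randint(30, 201, size=(480, 640)).astype(np.uint8)
normalized = normalize_frame(frame)
print(normalized.min(), normalized.max())  # 0 255
```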
 Further, the first processing unit 108 derives the one or more characteristics. The one or more characteristics correspond to the one or more features associated with the channel of the one or more channels 104. Moreover, the one or more characteristics include a first set of characteristics and a second set of characteristics. The first set of characteristics is associated with the logo of the channel. In addition, the second set of characteristics is associated with the ticker displayed on the channel. Moreover, the first set of characteristics includes a pre-defined height of the logo, a pre-defined width of the logo, a pre-defined position of the logo and the like. In addition, the second set of characteristics includes a pre-defined height of the ticker, a pre-defined width of the ticker, a pre-defined position of the ticker and the like.
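 For illustration only, the derived characteristics may be held in a simple data structure such as the Python sketch below; the field names, the pixel units and the example values are assumptions, since the disclosure only specifies that heights, widths and positions of the logo and the ticker are derived.

```python
# Hypothetical representation of the one or more characteristics derived for
# a channel. Field names, pixel units and example values are illustrative.
from dataclasses import dataclass

@dataclass
class RegionCharacteristics:
    x: int          # left edge of the region within the frame (pixels)
    y: int          # top edge of the region within the frame (pixels)
    width: int      # pre-defined width of the region
    height: int     # pre-defined height of the region

@dataclass
class ChannelCharacteristics:
    channel_name: str
    logo: RegionCharacteristics     # first set of characteristics
    ticker: RegionCharacteristics   # second set of characteristics

example = ChannelCharacteristics(
    channel_name="Channel A",
    logo=RegionCharacteristics(x=580, y=20, width=50, height=40),
    ticker=RegionCharacteristics(x=0, y=440, width=640, height=40),
)
print(example.logo, example.ticker)
```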
 Further, the first processing unit 108 is programmed to trim a pre-defined percentage of area in each frame of the media content broadcasted on the channel of the one or more channels. In an embodiment of the present disclosure, the pre-defined percentage of area is 30% of a frame area. In another embodiment of the present disclosure, the pre-defined percentage of area is any suitable area in each frame of the media content. The pre-defined percentage of area in each frame is trimmed based on the one or more characteristics of the one or more features associated with the media content. In an embodiment of the present disclosure, the pre-defined percentage of area is trimmed based on the pre-defined height of the logo and the pre-defined width of the logo derived by the first processing unit 108. In another embodiment of the present disclosure, the pre-defined percentage of area is trimmed based on the pre-defined height of the ticker and the pre-defined width of the ticker derived by the first processing unit 108. In yet another embodiment of the present disclosure, the pre-defined percentage of area is trimmed based on a combination of the pre-defined height of the logo and the pre-defined height of the ticker. In yet another embodiment of the present disclosure, the pre-defined percentage of area is trimmed based on the combination of the pre-defined width of the logo and the pre-defined width of the ticker. In yet another embodiment of the present disclosure, the pre-defined percentage of area is trimmed based on the combination of the pre-defined height of the logo and the pre-defined width of the ticker. In yet another embodiment of the present disclosure, the pre-defined percentage of area is trimmed based on the combination of the pre-defined width of the logo and the pre-defined height of the ticker.
 Furthermore, the pre-defined percentage of area includes a first pre-defined region and a second pre-defined region. In an embodiment of the present disclosure, the first pre-defined region is associated with the logo of the channel. In another embodiment of the present disclosure, the first pre-defined region is associated with the ticker of the channel. In an embodiment of the present
disclosure, the second pre-defined region is associated with the logo of the channel. In another embodiment of the present disclosure, the second pre-defined region is associated with the ticker of the channel. In an embodiment of the present disclosure, the first processing unit 108 trims the first pre-defined region associated with the logo of the channel. In another embodiment of the present disclosure, the first processing unit 108 trims the second pre-defined region associated with the ticker of the channel. In yet another embodiment of the present disclosure, the first processing unit 108 trims both the first pre-defined region associated with the logo and the second pre-defined region associated with the ticker.
 The first processing unit 108 trims the pre-defined percentage of area to a pre-defined scale. In an embodiment of the present disclosure, the pre-defined scale is 640 by 480. In another embodiment of the present disclosure, the pre-defined scale is 1024 by 768. In yet another embodiment of the present disclosure, the pre-defined scale is 1124 by 768. In yet another embodiment of the present disclosure, the pre-defined scale is 1920 by 1080. In yet another embodiment of the present disclosure, the pre-defined scale is 1366 by 768. In yet another embodiment of the present disclosure, the pre-defined scale is any suitable scale.
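 A minimal sketch of the trimming and scaling steps is given below, assuming frames are NumPy arrays and OpenCV is available for resizing; treating the ticker as a bottom band that is cropped away and the logo as a top-right region that is blanked is one possible interpretation offered for illustration only, not the definitive implementation.

```python
# Minimal sketch: trim the ticker and logo regions of a frame and rescale the
# remaining area to the pre-defined scale. Region placement is an assumption.
import numpy as np
import cv2

PRE_DEFINED_SCALE = (640, 480)   # width x height

def trim_and_scale(frame, ticker_height, logo_width, logo_height):
    """Drop the ticker band at the bottom, blank the logo corner at the top
    right, then rescale the cropped area to the pre-defined scale."""
    h, w = frame.shape[:2]
    # Remove the horizontal ticker region along the bottom of the frame.
    cropped = frame[: h - ticker_height, :].copy()
    # Blank out the logo region so it does not contribute to the fingerprints.
    cropped[:logo_height, w - logo_width:] = 0
    return cv2.resize(cropped, PRE_DEFINED_SCALE)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(trim_and_scale(frame, ticker_height=80, logo_width=120,
                     logo_height=90).shape)   # (480, 640, 3)
```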
 Going further, the first processing unit 108 is programmed to perform extraction of a first set of audio fingerprints and a first set of video fingerprints corresponding to the media content broadcasted on the channel. The first set of audio fingerprints and the first set of video fingerprints are associated with a specific cropped area. The specific cropped area is obtained after normalizing, scaling and trimming of each frame. The first processing unit 108 trims the pre-defined percentage of area to obtain the specific cropped area. The first set of video fingerprints and the first set of audio fingerprints are extracted sequentially from the specific cropped area in the real time. The extraction of the first set of video fingerprints is done by sequentially extracting one or more prominent
fingerprints corresponding to one or more prominent frames associated with the media content. Each of the one or more prominent frames has the specific cropped area. Moreover, the one or more prominent frames correspond to the pre-defined interval of broadcast.
 For example, let the media content be related to a channel say, A. The channel A broadcasts a 1 hour news show between 9 PM and 10 PM. Suppose the media content is broadcasted on the channel A with a frame rate of 25 frames per second (hereinafter "fps"). Again, let us assume that the channel A administrator has placed 10 advertisements in between the 1 hour broadcast of the news show. The first processing unit 108 separates audio and video from the media content corresponding to the news show in the real time. Further, the first processing unit 108 sets a pre-defined range of time to approximate the duration of play of every advertisement. Let us suppose the pre-defined range of time is between 15 seconds and 35 seconds. The first processing unit 108 processes each frame of the pre-defined number of frames of the 1 hour long news show. The first processing unit 108 filters and selects prominent frames having dissimilar scenes. The first processing unit 108 extracts relevant characteristics corresponding to each prominent frame. The relevant characteristics constitute a digital video fingerprint. Similarly, the first processing unit 108 extracts the first set of audio fingerprints corresponding to the media content.
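 The selection of prominent frames having dissimilar scenes may be sketched as follows; the mean absolute difference measure and its threshold are assumptions, since the disclosure only requires that adjacent prominent frames have contrasting content. In practice the selection rate could additionally be capped, for example at 5 prominent frames per second out of the 25 fps mentioned above.

```python
# Illustrative sketch: keep a frame as "prominent" only when it differs
# sufficiently from the previously kept frame (grayscale frames assumed).
import numpy as np

def select_prominent_frames(frames, diff_threshold=15.0):
    """Return indices of prominent frames from a sequence of grayscale frames."""
    prominent, last_kept = [], None
    for i, frame in enumerate(frames):
        if last_kept is None or np.abs(
                frame.astype(np.int16) - last_kept.astype(np.int16)
        ).mean() > diff_threshold:
            prominent.append(i)
            last_kept = frame
    return prominent

frames = [np.full((480, 640), v, dtype=np.uint8) for v in (10, 12, 60, 61, 200)]
print(select_prominent_frames(frames))  # [0, 2, 4]
```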
 Furthermore, each of the one or more prominent fingerprints corresponds to a prominent frame having sufficient contrasting properties compared to an adjacent prominent frame. For example, let us suppose that the first processing unit 108 selects 5 prominent frames per second from 25 frames per second. Each pair of adjacent frames of the 5 prominent frames will have evident contrasting properties. The first processing unit 108 generates a set of digital signature values corresponding to an extracted set of video fingerprints. The first processing unit 108 generates each digital signature value of the set of digital signature values by dividing each prominent frame of the one or more prominent
frames into a pre-defined number of blocks. In an embodiment of the present disclosure, the pre-defined number of blocks is 16 (4 x 4). In another embodiment of the present disclosure, the pre-defined number of blocks is any suitable number. Each block of the pre-defined number of blocks has a pre-defined number of pixels. Each pixel is fundamentally a combination of red (hereinafter "R"), green (hereinafter "G") and blue (hereinafter "B") colors. The colors are collectively referred to as RGB. Each color of a pixel (RGB) has a pre-defined value in a pre-defined range of values. The pre-defined range of values is 0-255.
 In an example, the RGB for the pixel has a value of 000000. The color of the pixel is black. In another example, the RGB for the pixel has a value of FFFFFF (255, 255, 255). The color of the pixel is white. Here, FF is the hexadecimal equivalent of the decimal 255. In yet another example, the RGB for the pixel has a value of FF0000 (255, 0, 0). The color of the pixel is red. In yet another example, the RGB for the pixel has a value of 0000FF (0, 0, 255). The color of the pixel is blue. In yet another example, the RGB for the pixel has a value of 008000 (0, 128, 0). The color of the pixel is green.
 The first processing unit 108 gray-scales each block of each prominent frame of the one or more prominent frames. In general, the gray-scaling of each block is a conversion of RGB to monochromatic shades of gray color. Here, 0 represents black and 255 represents white. Further, the first processing unit 108 calculates a first bit value and a second bit value for each block of the prominent frame. The first bit value and the second bit value are calculated from comparing a mean and a variance for the pre-defined number of pixels in each block of the prominent frame with a corresponding mean and variance for a master frame in the master database 114. The first processing unit 108 assigns the first bit value and the second bit value a binary 0 when the mean and the variance for each block of the prominent frame are less than the corresponding mean and variance of each master frame. The first processing unit 108 assigns the first bit value and the second bit value a binary 1 when the mean and the
variance for each block are greater than the corresponding mean and variance of each master frame.
 Furthermore, the first processing unit 108 obtains a 32 bit digital signature value corresponding to each prominent frame having the specific cropped area. The 32 bit digital signature value is obtained by sequentially arranging the first bit value and the second bit value for each block of the pre-defined number of blocks of the prominent frame. The first processing unit 108 stores each digital signature value corresponding to each prominent frame of the one or more prominent frames in the first database 108a. The digital signature value corresponds to the one or more programs and the one or more advertisements. The first processing unit 108 utilizes a temporal recurrence algorithm to detect the one or more advertisements. In the temporal recurrence algorithm, the first processing unit 108 probabilistically matches a first pre-defined number of digital signature values with a stored set of digital signature values present in the first database 108a.
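 A minimal sketch of the 32 bit digital signature generation described above is given below: the cropped, scaled and gray-scaled prominent frame is split into a 4 x 4 grid of blocks, and each block contributes two bits from the comparison of its mean and variance with those of a master frame. The master statistics used in the demonstration are placeholders, not values from the disclosure.

```python
# Minimal sketch of the 32-bit digital signature: 16 blocks x 2 bits each.
import numpy as np

GRID = 4  # 4 x 4 = 16 blocks, two bits per block -> 32-bit signature

def block_stats(gray):
    """Return per-block (mean, variance) for a grayscale frame."""
    h, w = gray.shape
    bh, bw = h // GRID, w // GRID
    stats = []
    for r in range(GRID):
        for c in range(GRID):
            block = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            stats.append((block.mean(), block.var()))
    return stats

def signature(gray, master_stats):
    """Build the 32-bit value by comparing block statistics with the master
    frame statistics (bit is 1 when greater than the master, else 0)."""
    value = 0
    for (mean, var), (m_mean, m_var) in zip(block_stats(gray), master_stats):
        value = (value << 1) | (1 if mean > m_mean else 0)  # first bit value
        value = (value << 1) | (1 if var > m_var else 0)    # second bit value
    return value

gray = np.random.randint(0, 256, size=(480, 640)).astype(np.float32)
master = [(128.0, 2000.0)] * (GRID * GRID)  # placeholder master statistics
print(f"{signature(gray, master):032b}")
```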
 In an example, let us suppose that the first processing unit 108 generates 100 digital signature values corresponding to 100 prominent frames, each having the specific cropped area, in the first database 108a. The first processing unit 108 probabilistically matches 20 digital signature values corresponding to the 101st to 121st prominent frames with every 20 digital signature values corresponding to the 100 previously stored prominent frames.
 The probabilistic match of the first pre-defined number of digital signature values sequentially for each of the prominent frames is performed by utilizing a sliding window algorithm. In an embodiment of the present disclosure, the first pre-defined number of digital signature values of the set of digital signature values for the unsupervised detection of the one or more advertisements is 20. The first processing unit 108 determines a positive probabilistic match of the pre-defined number of prominent frames based on a pre-defined condition.
The pre-defined condition includes a pre-defined range of positive matches corresponding to probabilistically matched digital signature values and a pre-defined duration of media content corresponding to the positive match. In addition, the pre-defined condition includes a sequence and an order of the positive matches and a degree of match of a pre-defined range of number of bits of the first pre-defined number of signature values. In an embodiment of the present disclosure, the pre-defined range of probabilistic matches corresponding to the positive match lies in a range of 40 matches to 300 matches. In another embodiment of the present disclosure, the pre-defined range of probabilistic matches corresponding to the positive match lies in a suitable duration of each advertisement running time. In an embodiment of the present disclosure, the first processing unit 108 discards the probabilistic matches corresponding to less than 40 positive matches.
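 A sketch of the sliding window comparison and of the check against the pre-defined range of positive matches is given below, for illustration only; the per-window decision rule is passed in as a predicate because its exact form is the degree-of-match test discussed further below.

```python
# Illustrative sketch of the sliding window matching over stored signatures.
WINDOW = 20  # first pre-defined number of digital signature values

def count_positive_matches(new_signatures, stored_signatures, window_match):
    """Slide the latest 20-value window over the stored signatures and count
    how many stored windows the given window_match predicate accepts."""
    window = new_signatures[-WINDOW:]
    positives = 0
    for offset in range(len(stored_signatures) - WINDOW + 1):
        if window_match(window, stored_signatures[offset:offset + WINDOW]):
            positives += 1
    return positives

def within_predefined_range(positives, low=40, high=300):
    """Pre-defined range of positive matches required for a detection."""
    return low <= positives <= high

if __name__ == "__main__":
    stored = list(range(1000))          # stand-in for stored signature values
    new = list(range(500, 620))         # stand-in for freshly generated values
    exact = lambda a, b: a == b         # simplest possible per-window rule
    matches = count_positive_matches(new, stored, exact)
    print(matches, within_predefined_range(matches))  # 1 False (below range)
```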
 Further, the pre-defined duration of media content corresponding to the positive match has a first limiting duration bounded by a second limiting duration. In an embodiment of the present disclosure, the first limiting duration is 10 seconds and the second limiting duration is 25 seconds. In another embodiment of the present disclosure, the first limiting duration is 10 seconds and the second limiting duration is 35 seconds. In yet another embodiment of the present disclosure, the first limiting duration is 10 seconds and the second limiting duration is 60 seconds. In yet another embodiment of the present disclosure, the first limiting duration is 10 seconds and the second limiting duration is 90 seconds. In yet another embodiment of the present disclosure, the first limiting duration and the second limiting duration may have any suitable limiting durations.
 In an example, suppose 100 digital signature values from the 1100th prominent frame to the 1200th prominent frame give a positive match with a stored 100th frame to 200th frame in the first database 108a. The first processing unit 108 checks whether the number of positive matches is in the pre-defined range of positive matches. In addition, the first processing unit 108 checks whether the
positive matches correspond to media content within the first limiting duration and the second limiting duration. Moreover, the first processing unit 108 checks whether the positive matches of 100 digital signature values for unsupervised detection of the one or more advertisements are in a required sequence and order.
 The first processing unit 108 checks for the degree of match of the pre-defined range of number of bits of the first pre-defined number of signature values. In an example, the degree of match of 640 bits (32 bits x 20 digital signature values) of the generated set of digital signature values with stored 640 digital signature values is 620 bits. In such a case, the first processing unit 108 flags the probabilistic match as the positive match. In another example, the degree of match of 640 bits of the generated set of digital signature values with stored 640 digital signature values is 599 bits. In such a case, the first processing unit 108 flags the probabilistic match as the negative match. In an embodiment of the present disclosure, the pre-defined range of number of bits is 0-40.
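 The degree-of-match rule from the example above can be expressed as a short check; the allowed mismatch of up to 40 bits follows from the stated pre-defined range of 0-40 bits.

```python
# Worked check of the degree-of-match rule: 20 signatures x 32 bits = 640
# bits in total, with at most 40 mismatched bits allowed for a positive match.
TOTAL_BITS = 32 * 20            # 640
MAX_MISMATCHED_BITS = 40        # pre-defined range of number of bits: 0-40

def is_positive(matched_bits):
    return TOTAL_BITS - matched_bits <= MAX_MISMATCHED_BITS

print(is_positive(620))  # True  -> flagged as a positive match
print(is_positive(599))  # False -> flagged as a negative match
```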
 Furthermore, the first processing unit 108 performs a range based matching of the digital signature values across the channels of the plurality of channels 104. In an example, a first channel S displays an ad in one or more slots. A second channel T displays the same ad in one or more slots. Here, the one or more slots for the first channel S may differ from the one or more slots in the second channel T. The first channel S displays the ad with a corresponding channel logo overlaid and the second channel T displays the same ad with the corresponding channel logo and a dynamically changing ticker in a relatively smaller area of the frame positioned specifically. The first processing unit 108 trims the pre-defined percentage of area in each frame corresponding to the one or more ads broadcasted on the first channel S and the second channel T. In addition, the first processing unit 108 probabilistically matches each prominent frame having the specific cropped area for the ad broadcasted on the first channel S with each prominent frame having the specific cropped area for the ad broadcasted on the second channel T. Moreover, the first processing unit 108
Page 28 of 52
treats the one or more ad broadcasted across the channel of the one or more channels 104 as a single ad based on positive matching results.
 Further, the first processing unit 108 generates one or more prominent frequencies and one or more prominent amplitudes from the extracted first set of audio fingerprints. The first processing unit 108 fetches a sample rate of the first set of audio fingerprints. The sample rate is divided by a pre-defined bin size set for the audio. The division of the sample rate by the pre-defined bin size provides the data points. Further, the first processing unit 108 performs a fast Fourier transform (hereinafter “FFT”) on each bin of the audio to obtain the one or more prominent frequencies and the one or more prominent amplitudes. The first processing unit 108 compares the one or more prominent frequencies and the one or more prominent amplitudes with a stored one or more prominent frequencies and a stored one or more prominent amplitudes.
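 An illustrative sketch of deriving one prominent frequency and amplitude per audio bin with an FFT is shown below. The bin size of 4096 samples and the peak-picking rule are assumptions made for illustration only.
```python
import numpy as np

def prominent_frequencies_and_amplitudes(samples, sample_rate, bin_size=4096):
    """samples: 1-D array of audio samples. Returns one (frequency, amplitude) per bin."""
    prominent = []
    for start in range(0, len(samples) - bin_size + 1, bin_size):
        chunk = samples[start:start + bin_size]
        spectrum = np.abs(np.fft.rfft(chunk))            # FFT magnitudes for this bin
        freqs = np.fft.rfftfreq(bin_size, d=1.0 / sample_rate)
        peak = int(np.argmax(spectrum[1:])) + 1           # skip the DC component
        prominent.append((freqs[peak], spectrum[peak]))   # (prominent frequency, amplitude)
    return prominent
```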
 Going further, the first processing unit 108 fetches the corresponding video and audio clip associated with the probabilistically matched digital signature values. The first database 108a and the first processing unit 108 are associated with an administrator 112. The administrator 112 is associated with a display device and a control and input interface. In addition, the display device is configured to display a graphical user interface (hereinafter “GUI”) of an installed operating system. The administrator 112 checks for the presence of the audio and the video clip manually in the master database 114. The administrator 112 decides whether the audio clip and the video clip correspond to a new advertisement. The administrator 112 tags each audio clip and the video clip with a tag. The tag corresponds to a brand name associated with a detected advertisement. Moreover, the administrator 112 stores the metadata of the probabilistically matched digital fingerprint values in the master database 114.
 In an embodiment of the present disclosure, the first processing unit 108 extracts the first set of audio fingerprints and the first set of video fingerprints corresponding to another channel. The first processing unit 108 extracts the pre-defined number of prominent frames and generates the pre-defined number of digital signature values. The first processing unit 108 performs the temporal recurrence algorithm to detect a new advertisement. In an embodiment of the present disclosure, the first processing unit 108 generates prominent frequencies and prominent amplitudes of the audio. In another embodiment of the present disclosure, the first processing unit 108 discards the audio from the media content. In an embodiment of the present disclosure, the first processing unit 108 probabilistically matches the one or more prominent frequencies and the one or more prominent amplitudes with stored prominent frequencies and stored prominent amplitudes in the first database. The stored prominent frequencies and the stored prominent amplitudes correspond to a regional channel having audio in the pre-defined regional language or standard language. In an embodiment of the present disclosure, the standard language is English. In another embodiment of the present disclosure, the first processing unit 108 gives precedence to results of the probabilistic match of video fingerprints over the audio fingerprints. In an embodiment of the present disclosure, the administrator 112 manually tags the detected advertisement broadcasted in the pre-defined regional language or the standard language. In another embodiment of the present disclosure, the advertisement detection system 106 automatically tags the detected advertisement broadcasted in the pre-defined regional language or the standard language.
 In addition, the first processing unit 108 reports the positively matched digital signature values corresponding to each detected advertisement in a reporting database present in the first database 108a. The first processing unit 108 discards any detected advertisement already reported in the reporting database.
 The second processing unit 110 includes a second central processing unit and associated peripherals for supervised detection of the one or more advertisements (also shown in FIG. 1C). The second processing unit 110 performs normalization, scaling and trimming of each frame of the media content for removal of channel logos and tickers. The second processing unit 110 is connected to a second database 110a. The second processing unit 110 is programmed to perform the extraction of the first set of audio fingerprints and the first set of video fingerprints corresponding to a normalized and scaled media content broadcasted on the channel. The first set of video fingerprints and the first set of audio fingerprints are extracted sequentially in the real time. The extraction of the first set of video fingerprints is done by sequentially extracting the one or more prominent fingerprints corresponding to the one or more prominent frames for the pre-defined interval of broadcast.
 Furthermore, each of the one or more prominent fingerprints corresponds to the prominent frame having sufficient contrasting features compared to the adjacent prominent frame. For example, let us suppose that the second processing unit 110 selects 6 prominent frames per second from 25 frames per second. Each pair of adjacent frames of the 6 prominent frames will have evident contrasting features. The second processing unit 110 generates the set of digital signature values corresponding to the extracted set of video fingerprints. The second processing unit 110 generates each digital signature value of the set of digital signature values by dividing each prominent frame of the one or more prominent frames into the pre-defined number of blocks. In an embodiment of the present disclosure, the pre-defined number of blocks is 16 (4X4). In another embodiment of the present disclosure, the pre-defined number of blocks is any suitable number. Each block of the pre-defined number of blocks has the pre-defined number of pixels. Each pixel is fundamentally the combination of R, G and B colors. The colors are collectively referred to as RGB. Each color of the pixel (RGB) has the pre-defined value in the pre-defined range of values. The pre-defined range of values is 0-255.
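 A hedged sketch of selecting prominent frames is given below: a frame is kept only when it differs sufficiently from the last kept frame. The disclosure does not state the exact contrast criterion, so the mean-absolute-difference threshold used here is an assumption for illustration.
```python
import numpy as np

def select_prominent_frames(frames, diff_threshold=12.0):
    """frames: iterable of grayscale frames as 2-D numpy arrays.
    Keeps a frame only if it contrasts sufficiently with the last kept frame,
    which in the example above yields roughly 6 prominent frames out of 25."""
    prominent, last_kept = [], None
    for index, frame in enumerate(frames):
        if last_kept is None or np.mean(np.abs(frame.astype(float) - last_kept.astype(float))) > diff_threshold:
            prominent.append((index, frame))
            last_kept = frame
    return prominent
```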
 The second processing unit 110 gray-scales each block of each prominent frame of the one or more prominent frames. The second processing unit 110 calculates the first bit value and the second bit value for each block of the prominent frame. The first bit value and the second bit value are calculated from comparison of the mean and the variance for the pre-defined number of pixels with the corresponding mean and variance for the master frame. The master frame is present in the master database 114. The second processing unit 110 assigns the first bit value and the second bit value with the binary 0 when the mean and the variance for each block are less than the corresponding mean and variance of each master frame. The second processing unit 110 assigns the first bit value and the second bit value with the binary 1 when the mean and the variance for each block are greater than the corresponding mean and variance of each master frame.
 The second processing unit 110 obtains the 32 bit digital signature value corresponding to each prominent frame. The 32 bit digital signature value is obtained by sequentially arranging the first bit value and the second bit value for each block of the pre-defined number of blocks of the prominent frame. The second processing unit 110 stores each digital signature value corresponding to each prominent frame of the one or more prominent frames in the second database 110a. The digital signature value corresponds to the one or more programs and the one or more advertisements.
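 The 32 bit signature described above can be sketched as follows: 16 (4X4) grayscale blocks, a mean bit and a variance bit per block compared against the master frame, concatenated into one 32 bit value. The helper name and the tie-breaking choice (equality mapping to 1) are assumptions.
```python
import numpy as np

def frame_signature(gray_frame, master_frame, grid=4):
    """gray_frame, master_frame: 2-D numpy arrays of the same shape."""
    h, w = gray_frame.shape
    bh, bw = h // grid, w // grid
    signature = 0
    for row in range(grid):
        for col in range(grid):
            block = gray_frame[row * bh:(row + 1) * bh, col * bw:(col + 1) * bw]
            master = master_frame[row * bh:(row + 1) * bh, col * bw:(col + 1) * bw]
            mean_bit = 1 if block.mean() >= master.mean() else 0       # first bit value
            var_bit = 1 if block.var() >= master.var() else 0          # second bit value
            signature = (signature << 2) | (mean_bit << 1) | var_bit   # arrange sequentially
    return signature  # 32 bit digital signature value (16 blocks x 2 bits)
```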
 The second processing unit 110 performs the supervised detection of the one or more advertisements. The second processing unit 110 probabilistically matches a second pre-defined number of digital signature values with the stored set of digital signature values present in the master database 114. The second pre-defined number of digital signature values corresponds to the second pre-defined number of prominent frames of the real time broadcasted media content. The probabilistic match is performed for the set of digital signature values by utilizing a sliding window algorithm. The second processing unit 110 determines the positive match in the probabilistic matching of the second pre-defined number of digital signature values with the stored set of digital signature values. The stored set of digital signature values is present in the master database 114. In an embodiment of the present disclosure, the second pre-defined number of digital signature values of the set of digital signature values for the supervised detection of the one or more advertisements is 6. In another embodiment of the present disclosure, the second pre-defined number of digital signature values is selected based on optimal processing capacity and performance of the second processing unit 110.
 In an example, let us suppose that the second processing unit 110 stores 300 digital signature values corresponding to 300 prominent frames in the second database 110a for 10 seconds of the media content. The second processing unit 110 probabilistically matches 6 digital signature values corresponding to the 101st to 106th prominent frames with each set of 6 digital signature values corresponding to the 300 previously stored prominent frames. The 300 previously stored prominent frames are present in the master database 114.
 In another example, suppose 300 digital signature values from the 600th prominent frame to the 900th prominent frame give a positive match with the stored 150th frame to 450th frame in the master database 114. The second processing unit 110 checks whether the number of positive matches is in the pre-defined range of positive matches and whether the positive matches correspond to media content within the first limiting duration and the second limiting duration. In addition, the second processing unit 110 checks whether the positive matches of the 300 digital signature values for supervised detection of the one or more advertisements are in the required sequence and order.
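 A hedged sketch of the sliding window match used for supervised detection is given below: a window of 6 live signature values is slid over the stored sequence from the master database, and each alignment is scored with the bit tolerance described in the next paragraph. Function and parameter names are illustrative.
```python
def sliding_window_match(live_window, stored_signatures, max_mismatched_bits=12):
    """live_window: list of 6 32-bit ints; stored_signatures: list of 32-bit ints.
    Returns the index of the first matching alignment, or None."""
    n = len(live_window)
    for offset in range(len(stored_signatures) - n + 1):
        mismatched = sum(
            bin(live ^ stored).count("1")
            for live, stored in zip(live_window, stored_signatures[offset:offset + n])
        )
        if mismatched <= max_mismatched_bits:  # e.g. 185/192 matching bits passes
            return offset
    return None
```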
 The second processing unit 110 checks for the degree of match of the pre-defined range of number of bits of the second pre-defined number of signature values. In an example, the degree of match of 192 bits (32 bits X 6 digital signature values) of the generated set of digital signature values with the stored 192 digital signature values is 185 bits. In such a case, the second processing unit 110 flags the probabilistic match as the positive match. In another example, the degree of match of 192 bits of the generated set of digital signature values with the stored 192 digital signature values is 179 bits. In such a case, the second processing unit 110 flags the probabilistic match as the negative match. In an embodiment of the present disclosure, the pre-defined range of number of bits is 0-12.
 The second processing unit 110 compares the one or more prominent frequencies and the one or more prominent amplitudes with the stored one or more prominent frequencies and the stored one or more prominent amplitudes. The one or more prominent frequencies and the one or more prominent amplitudes correspond to the extracted first set of audio fingerprints. In an embodiment of the present disclosure, the administrator 112 manually checks whether each advertisement detected through supervised detection is an advertisement or a program.
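 The comparison of extracted and stored prominent (frequency, amplitude) pairs could look like the sketch below. The disclosure does not specify the scoring rule, so the per-bin tolerances and the majority criterion are assumptions.
```python
def audio_fingerprints_match(extracted, stored, freq_tol_hz=50.0,
                             rel_amp_tol=0.25, min_fraction=0.8):
    """extracted, stored: equal-length lists of (prominent_frequency, prominent_amplitude)."""
    agreeing = 0
    for (f1, a1), (f2, a2) in zip(extracted, stored):
        freq_ok = abs(f1 - f2) <= freq_tol_hz                         # frequencies agree within tolerance
        amp_ok = abs(a1 - a2) <= rel_amp_tol * max(a1, a2, 1e-9)      # amplitudes agree within relative tolerance
        if freq_ok and amp_ok:
            agreeing += 1
    return agreeing >= min_fraction * min(len(extracted), len(stored))
```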
 In an embodiment of the present disclosure, the advertisement detection system 106 reports a frequency of each advertisement broadcasted for the first time and a frequency of each advertisement broadcasted repetitively. In another embodiment of the present disclosure, the administrator 112 reports the frequency of each advertisement broadcasted for the first time and the frequency of each advertisement broadcasted repetitively.
 In an embodiment of the present disclosure, the second processing unit 110 extracts the first set of audio fingerprints and the first set of video fingerprints corresponding to another channel. The second processing unit 110 extracts the pre-defined number of prominent frames and generates the pre-defined number of digital signature values. The second processing unit 110 performs probabilistic matching of digital signature values corresponding to the video with the stored digital signature values in the master database 114 to detect a repeated advertisement. In an embodiment of the present disclosure, the second processing unit 110 generates the one or more prominent frequencies and the one or more prominent amplitudes of the audio. In another embodiment of the present disclosure, the second processing unit 110 discards the audio from the media content. In an embodiment of the present disclosure, the master database 114 includes the one or more advertisements corresponding to a same advertisement in every regional language. In another embodiment of the present disclosure, the master database 114 includes the advertisement in a specific national language. In an embodiment of the present disclosure, the second processing unit 110 probabilistically matches the one or more prominent frequencies and the one or more prominent amplitudes with stored prominent frequencies and stored prominent amplitudes. The stored prominent frequencies and the stored prominent amplitudes correspond to a regional channel having audio in the pre-defined regional language or standard language in the master database 114. In an embodiment of the present disclosure, the standard language is English. In another embodiment of the present disclosure, the second processing unit 110 gives precedence to results of the probabilistic match of video fingerprints over the audio fingerprints.
 Further, the master database 114 is present in a master server. The master database 114 includes a plurality of digital video and audio fingerprint records and every signature value corresponding to each previously detected and newly detected advertisement. The master database 114 is connected to the advertisement detection system 106. In an embodiment of the present disclosure, the master server is present in a remote location. In another embodiment of the present disclosure, the master server is present locally with the advertisement detection system 106.
 Further, the advertisement detection system 106 stores the generated set of digital signature values, the first set of audio fingerprints and the first set of video fingerprints in the first database 108a and the second database 110a. Furthermore, the advertisement detection system 106 updates the first metadata manually in the master database 114 for the unsupervised detection of the one or more advertisements. The first metadata includes the set of digital signature values and the first set of video fingerprints.
 It may be noted that in FIG. 1A, FIG. 1B and FIG. 1C, the system 100 includes the broadcast reception device 102 for decoding one channel; however, those skilled in the art would appreciate that the system 100 includes a greater number of broadcast reception devices for decoding a greater number of channels. It may be noted that in FIG. 1A, FIG. 1B and FIG. 1C, the system 100 includes the advertisement detection system 106 for the supervised and the unsupervised detection of the one or more advertisements corresponding to one channel; however, those skilled in the art would appreciate that the advertisement detection system 106 detects the one or more advertisements corresponding to a greater number of channels. It may be noted that in FIG. 1A, FIG. 1B and FIG. 1C, the administrator 112 manually checks each newly detected advertisement in the master database 114; however, those skilled in the art would appreciate that the advertisement detection system 106 automatically checks for each advertisement in the master database 114.
 FIG. 2 illustrates a block diagram 200 of the advertisement detection system 106, in accordance with various embodiments of the present disclosure. It may be noted that to explain the system elements of the FIG. 2, references will be made to the system elements of the FIG. 1A, FIG. 1B and FIG. 1C. The block diagram 200 describes the advertisement detection system 106 configured for the unsupervised and the supervised detection of the one or more advertisements.
 The block diagram 200 of the advertisement detection system 106 includes a reception module 202, a normalization module 204, a derivation module 206, a trimming module 208, an extraction module 210 and a generation module 212. In addition, the block diagram 200 of the advertisement detection system 106 includes a storage module 214, a detection module 216 and an updating module 218. The reception module 202 receives the live feed associated with a media content broadcasted on the channel in the real time (as discussed above in the detailed description of FIG. 1A). The normalization module 204 normalizes each frame of the video corresponding to the media content broadcasted on the channel. The normalization module 204 normalizes each frame based on the histogram normalization and the histogram equalization (as described above in the detailed description of FIG. 1A).
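 A minimal sketch of this per-frame step, assuming OpenCV, is shown below: pixel intensities are stretched (histogram normalization) and the histogram is then equalized. The exact procedure used by the normalization module 204 may differ.
```python
import cv2

def normalize_frame(frame_bgr):
    """frame_bgr: 8-bit BGR frame as produced by a typical video decoder."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Histogram normalization: stretch pixel intensities to the full 0-255 range.
    stretched = cv2.normalize(gray, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX)
    # Histogram equalization: flatten the intensity distribution.
    return cv2.equalizeHist(stretched)
```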
 The derivation module 206 derives the one or more characteristics corresponding to the one or more features associated with the media content for each channel of the plurality of channels. The one or more characteristics includes the first set of characteristics associated with the logo of the channel and the second set of characteristics associated with the ticker displayed on the channel (as discussed above in the detailed description of FIG. 1A). The trimming module 208 trims the pre-defined percentage of area in each frame of the media content. The trimming module 208 trims based on the one or more characteristics corresponding to the one or more features associated with the media content (as stated above in the detailed description of FIG. 1A).
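 The trimming step can be sketched as below using the derived logo and ticker characteristics (height, width, position). The Region type, the assumption that the logo sits in a top band and the ticker in a bottom band, and the 30 percent cap are illustrative choices, not the patented rule.
```python
from dataclasses import dataclass

@dataclass
class Region:          # pre-defined position and size of a logo or ticker
    top: int
    left: int
    height: int
    width: int

def trim_frame(frame, logo: Region, ticker: Region, max_trim_fraction=0.30):
    """frame: H x W (x channels) numpy array; returns the cropped frame."""
    h, w = frame.shape[:2]
    top = max(logo.top + logo.height, 0)    # drop the band containing the logo
    bottom = min(ticker.top, h)             # drop the band containing the ticker
    trimmed_area = (h - (bottom - top)) * w
    # Keep the trim within the pre-defined percentage of area (e.g. 30 percent).
    if trimmed_area > max_trim_fraction * h * w:
        return frame
    return frame[top:bottom, :]
```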
 The extraction module 210 extracts the first set of audio fingerprints and the first set of video fingerprints corresponding to the media content broadcasted in the specific cropped area of the channel. The first set of audio fingerprints and the first set of video fingerprints are extracted sequentially in the real time (as shown in detailed description of FIG. 1A). Further, the generation module 212 generates the set of digital signature values corresponding to the extracted set of video fingerprints. The generation module 212 generates each digital signature value of the set of digital signature values by dividing and grayscaling each prominent frame into the pre-defined number of blocks. Further, the generation module 212 calculates and obtains each digital signature value corresponding to each block of the prominent frame (as shown in detailed description of FIG. 1A).
 Furthermore, the generation module 212 includes a division module 212a, a grayscaling module 212b, a calculation module 212c and an obtaining module 212d. The division module 212a divides each prominent frame of the one or more prominent frames into the pre-defined number of blocks (as shown in detailed description of FIG. 1A). The grayscaling module 212b grayscales each block of each prominent frame of the one or more prominent frames. The calculation module 212c calculates the first bit value and the second bit value for each block of the prominent frame (as described in the detailed description of FIG. 1A). The obtaining module 212d obtains the 32 bit digital signature value corresponding to each prominent frame (as described in detailed description of FIG. 1A).
 The storage module 214 stores the generated set of digital signature values, the first set of audio fingerprints and the first set of video fingerprints in the first database 108a and the second database 110a (as described above in detailed description of FIG. 1A). Further, the detection module 216 detects the one or more advertisements broadcasted on the channel. The detection module 216 includes an unsupervised detection module 216a and the supervised detection module 216b. The unsupervised detection module 216a detects the new advertisement through unsupervised machine learning (as discussed in the detailed description of FIG. 1A and FIG. 1B). The unsupervised detection module 216a probabilistically matches the first pre-defined number of digital signature values corresponding to the pre-defined number of prominent frames with the stored set of digital signature values (as described in detailed description of FIG. 1A).
 Furthermore, the unsupervised detection module 216a compares the one or more prominent frequencies and the one or more prominent amplitudes of the extracted first set of audio fingerprints (as described in detailed description of FIG. 1A). In addition, the unsupervised detection module 216a determines the positive probabilistic match of the pre-defined number of prominent frames based on the pre-defined condition (as described in the detailed description of FIG. 1A). Moreover, the unsupervised detection module 216a fetches the video and the audio clip corresponding to the probabilistically matched digital signature values (as described in the detailed description of FIG. 1A). In addition, the unsupervised detection module 216a checks presence of the audio and the video clip manually in the master database 114 (as described in detailed description of FIG. 1A). Furthermore, the unsupervised detection module 216a reports the positively matched digital signature values corresponding to the advertisement of the one or more advertisements in the reporting database present in the first database 108a (as described in the detailed description of FIG. 1A).
 The supervised detection module 216b detects the advertisements broadcasted previously during the broadcasting of the media content (as described above in the detailed description of FIG. 1A and FIG. 1C). The supervised detection module 216b probabilistically matches the second pre-defined number of digital signature values with the stored set of digital signature values present in the master database 114 (as described above in the detailed description of FIG. 1A). Further, the supervised detection module 216b compares the one or more prominent frequencies and the one or more prominent amplitudes with the stored one or more prominent frequencies and the stored one or more prominent amplitudes (as described in the detailed description of FIG. 1A). The supervised detection module 216b determines the positive match in the probabilistic matching of the second pre-defined number of digital signature values with the stored set of digital signature values in the master database 114. In addition, the supervised detection module 216b determines the positive match from the comparison of the one or more prominent frequencies with the stored one or more prominent frequencies (as described in the detailed description of FIG. 1A).
 Going further, the updating module 218 updates the first metadata manually in the master database 114 for the unsupervised detection of the one or more advertisements. The first metadata includes the set of digital signature values and the first set of video fingerprints corresponding to the detected advertisement (as described in the detailed description of FIG. 1A).
 FIG. 3 illustrates a flow chart 300 for channel feature agnostic detection of the one or more advertisements across channels, in accordance with various embodiments of the present disclosure. It may be noted that to explain the process steps of the flowchart 300, references will be made to the system elements of the FIG. 1A, FIG. 1B, FIG. 1C and FIG. 2.
 The flowchart 300 initiates at step 302. At step 304, the derivation module 206 derives the one or more characteristics corresponding to the one or more features associated with the media content for each channel of the plurality of channels. Further, at step 306, the trimming module 208 trims the pre-defined percentage of area in each frame of the media content. The pre-defined percentage of area is trimmed based on the one or more characteristics corresponding to the one or more features associated with the media content. Further, at step 308, the detection module 216 detects the one or more advertisements broadcasted across the plurality of channels in the real time. The flow chart 300 terminates at step 310.
 It may be noted that the flowchart 300 is explained to have the above stated process steps; however, those skilled in the art would appreciate that the flowchart 300 may have a greater or lesser number of process steps which may enable all the above stated embodiments of the present disclosure.
 FIG. 4 illustrates a block diagram of a communication device 400, in accordance with various embodiments of the present disclosure. The communication device 400 enables the host process of the advertisement detection system 106. The communication device 400 includes a control circuitry module 402, a storage module 404, an input/output circuitry module 406, and a communication circuitry module 408. The communication device 400 includes any suitable type of portable electronic device. The communication device 400 includes but may not be limited to a personal e-mail device (e.g., a Blackberry™ made available by Research in Motion of Waterloo, Ontario), a personal data assistant ("PDA") and a cellular telephone. In addition, the communication device 400 includes a smartphone, a laptop, a computer and a tablet. In another embodiment of the present disclosure, the communication device 400 can be a desktop computer.
 From the perspective of this disclosure, the control circuitry module 402 includes any processing circuitry or processor operative to control the operations and performance of the communication device 400. For example, the control circuitry module 402 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, or any other application.
 In an embodiment of the present disclosure, the control circuitry module 402 drives a display and processes inputs received from the user interface. From the perspective of this disclosure, the storage module 404 includes one or more storage mediums. The one or more storage mediums include a hard-drive, solid state drive, flash memory, permanent memory such as ROM, any other suitable type of storage component, or any combination thereof. The storage module 404 may store, for example, media data (e.g., music and video files) and application data (e.g., for implementing functions on the communication device 400).
 From the perspective of this disclosure, the I/O circuitry module 406 may be operative to convert (and encode/decode, if necessary) analog signals and other signals into digital data. In an embodiment of the present disclosure, the I/O circuitry module 406 may convert the digital data into any other type of signal and vice-versa. For example, the I/O circuitry module 406 may receive and convert physical contact inputs (e.g., from a multi-touch screen), physical movements (e.g., from a mouse or sensor), analog audio signals (e.g., from a microphone), or any other input. The digital data may be provided to and received from the control circuitry module 402, the storage module 404, or any other component of the communication device 400.
 It may be noted that the I/O circuitry module 406 is illustrated in FIG. 4 as a single component of the communication device 400; however, those skilled in the art would appreciate that several instances of the I/O circuitry module 406 may be included in the communication device 400.
 The communication device 400 may include any suitable interface or component for allowing the user to provide inputs to the I/O circuitry module 406. The communication device 400 may include any suitable input mechanism. Examples of the input mechanism include but may not be limited to a button, keypad, dial, a click wheel, and a touch screen. In an embodiment, the communication device 400 may include a capacitive sensing mechanism, or a multi-touch capacitive sensing mechanism.
 In an embodiment of the present disclosure, the communication device 400 may include specialized output circuitry associated with output devices such as, for example, one or more audio outputs. The audio output may include one or more speakers built into the communication device 400, or an audio component that may be remotely coupled to the communication device 400.
 The one or more speakers can be mono speakers, stereo speakers, or a combination of both. The audio component can be a headset, headphones or ear buds that may be coupled to the communication device 400 with a wire or wirelessly.
 In an embodiment, the I/O circuitry module 406 may include display circuitry for providing a display visible to a user. For example, the display circuitry may include a screen (e.g., an LCD screen) that is incorporated in the communication device 400.
 The display circuitry may include a movable display or a projecting system for providing a display of content on a surface remote from the communication device 400 (e.g., a video projector). In an embodiment of the present disclosure, the display circuitry may include a coder/decoder to convert digital media data into the analog signals. For example, the display circuitry may include video Codecs, audio Codecs, or any other suitable type of Codec.
 The display circuitry may include display driver circuitry, circuitry for driving display drivers or both. The display circuitry may be operative to display content. The display content can include media playback information, application screens for applications implemented on the electronic device, information regarding ongoing communications operations, information regarding incoming communications requests, or device operation screens under the direction of the control circuitry module 402. Alternatively, the display circuitry may be operative to provide instructions to a remote display.
 In addition, the communication device 400 includes the communication circuitry module 408. The communication circuitry module 408 may include any suitable communication circuitry operative to connect to a communication network. In addition, the communication circuitry module 408 may include any suitable communication circuitry to transmit communications (e.g., voice or data) from the communication device 400 to other devices. The other devices exist within the communications network. The communications circuitry 408 may be operative to interface with the communication network through any suitable communication protocol. Examples of the communication protocol include but may not be limited to Wi-Fi, Bluetooth®, radio frequency systems, infrared, LTE, GSM, GSM plus EDGE, CDMA, and quadband.
 In an embodiment, the communications circuitry module 408 may be operative to create a communications network using any suitable communications protocol. For example, the communication circuitry module 408 may create a short-range communication network using a short-range communications protocol to connect to other devices. For example, the communication circuitry module 408 may be operative to create a local communication network using the Bluetooth® protocol to couple the communication device 400 with a Bluetooth® headset.
 It may be noted that the computing device is shown to have only one communication operation; however, those skilled in the art would appreciate that the communication device 400 may include one or more instances of the communication circuitry module 408 for simultaneously performing several communication operations using different communication networks. For example, the communication device 400 may include a first instance of the communication circuitry module 408 for communicating over a cellular network, and a second instance of the communication circuitry module 408 for communicating over Wi-Fi or using Bluetooth®.
 In an embodiment of the present disclosure, the same instance of the communications circuitry module 408 may be operative to provide for communications over several communication networks. In another embodiment of the present disclosure, the communication device 400 may be coupled to a host device for data transfers and sync of the communication device 400. In addition, the communication device 400 may be coupled to the host device for software or firmware updates, to provide performance information to a remote source (e.g., providing riding characteristics to a remote server) or to perform any other suitable operation that may require the communication device 400 to be coupled to the host device. Several computing devices may be coupled to a single host device using the host device as a server. Alternatively or additionally, the communication device 400 may be coupled to the several host devices (e.g., for each of the plurality of the host devices to serve as a backup for data stored in the communication device 400).
 The present disclosure has numerous advantages over the prior art. The present disclosure provides a novel method to detect any new advertisement running for the first time on any television channel. The advertisements are detected robustly, and dedicated supervised and unsupervised central processing units (hereinafter “CPU”) are installed. Further, the present disclosure provides a method and system that is economic and provides a high return on investment. The detection of each repeated advertisement on the supervised CPU and each new advertisement on the unsupervised CPU significantly saves processing power and saves significant time. The disclosure provides a cost efficient solution to a scaled mapping and database for advertisement broadcast.
 The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present technology to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstance may suggest or render expedient, but such are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present technology.
 While several possible embodiments of the invention have been described above and illustrated in some cases, it should be understood that they have been presented only by way of illustration and example, and not by way of limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.
CLAIMS
We claim:
1. A computer-implemented method for standardizing media content for channel agnostic detection of television advertisements, the computer-implemented method comprising:
deriving, with a processor, one or more characteristics corresponding to one or more features associated with the media content broadcasted on each channel of a plurality of channels;
trimming, with the processor, a pre-defined percentage of area in each frame of the media content based on the one or more characteristics corresponding to the one or more features associated with the media content; and
detecting, with the processor, the one or more advertisements broadcasted across the plurality of channels in the real time.
2. The computer-implemented method as recited in claim 1, wherein the one or more features associated with each channel comprises a logo associated with each channel and a ticker associated with each channel.
3. The computer-implemented method as recited in claim 1, wherein the one or more characteristics comprises a first set of characteristics associated with the logo of each channel and a second set of characteristics associated with the ticker associated with each channel, wherein the first set of characteristics comprises a pre-defined height of the logo, a pre-defined width of the logo and a pre-defined position of the logo and wherein the second set of characteristics comprises a pre-defined height of the ticker, a pre-defined width of the ticker and a pre-defined position of the ticker.
4. The computer-implemented method as recited in claim 1, wherein the pre-defined percentage of area in each frame being trimmed to a pre-defined scale and wherein the pre-defined scale of each frame being 640 x 480.
5. The computer-implemented method as recited in claim 1, wherein the pre-defined percentage of area being 30 percent.
6. The computer-implemented method as recited in claim 1, further comprising normalizing, with the processor, each frame of a video corresponding to the broadcasted media content on each channel, wherein the normalization of each frame being done based on a histogram normalization and a histogram equalization and wherein the normalization of each frame being done by adjusting luminous intensity value of each pixel to a desired luminous intensity value.
7. The computer-implemented method as recited in claim 1, further comprising extracting, with the processor, a first set of audio fingerprints and a first set of video fingerprints corresponding to the media content broadcasted on each channel, wherein the first set of audio fingerprints and the first set of video fingerprints being extracted sequentially in the real time, wherein the extraction of the first set of video fingerprints being done by sequentially extracting one or more prominent fingerprints corresponding to one or more prominent frames of a pre-defined number of frames present in the media content for a pre-defined interval of broadcast.
8. The computer-implemented method as recited in claim 1, further comprising generating, with the processor, a set of digital signature values corresponding to the extracted set of video fingerprints, wherein the generation of each digital signature value of the set of digital signature values being done by:
dividing each prominent frame of the one or more prominent frames into a pre-defined number of blocks, wherein each block of the pre-defined number of blocks having a pre-defined number of pixels;
grayscaling each block of each prominent frame of the one or more prominent frames;
calculating a first bit value and a second bit value for each block of the prominent frame, wherein the first bit value and the second bit value being calculated from comparing a mean and a variance for the pre-defined number of pixels in each block of the prominent frame with a corresponding mean and variance for a master frame in a master database; and
obtaining a 32 bit digital signature value corresponding to each prominent frame, wherein the 32 bit digital signature value being obtained by sequentially arranging the first bit value and the second bit value for each block of the pre-defined number of blocks of the prominent frame.
9. The computer-implemented method as recited in claim 8, wherein the first bit value and the second bit value being assigned a binary 0 when the mean and the variance for each block of the prominent frame being less than the corresponding mean and variance of each master frame.
10. The computer-implemented method as recited in claim 8, wherein the first bit value and the second bit value being assigned a binary 1 when the mean and the variance for each block of the prominent frame being greater than the corresponding mean and variance of each master frame.
11. The computer-implemented method as recited in claim 1, wherein the detection of the one or more advertisements being a supervised advertisement detection and an unsupervised advertisement detection.
12. The computer-implemented method as recited in claim 11, wherein the unsupervised detection of the one or more advertisements being done by:
probabilistically matching a first pre-defined number of digital signature values corresponding to a pre-defined number of prominent frames of a real time broadcasted media content with a stored set of digital signature values present in a first database, wherein the probabilistic matching being performed for the set of digital signature values by utilizing a sliding window algorithm;
comparing one or more prominent frequencies and one or more prominent amplitudes of the extracted first set of audio fingerprints;
determining a positive probabilistic match of the pre-defined number of prominent frames based on a pre-defined condition;
fetching a video and an audio corresponding to probabilistically matched digital signature values;
checking presence of the audio and the video manually in the master database; and
reporting positively matched digital signature values corresponding to an advertisement of the one or more advertisements in a reporting database present in the first database.
13. The computer-implemented method as recited in claim 12, wherein the pre-defined condition comprises a pre-defined range of positive matches corresponding to probabilistically matched digital signature values, a pre-defined duration of media content corresponding to the positive match, a sequence and an order of the positive matches and a degree of match of a pre-defined range of number of bits of the first pre-defined number of signature values.
14. The computer-implemented method as recited in claim 1, further comprising storing, with the processor, the derived one or more characteristics associated with the one or more features associated with the channel, the first set of audio fingerprints, the first set of video fingerprints and the set of digital signature values corresponding to the extracted first set of video fingerprints and wherein the storing being done in the first database and a second database.
15. The computer-implemented method as recited in claim 1, further comprising updating, with the processor, the derived one or more characteristics of the one or more features associated with each channel, the first set of audio fingerprints, the first set of video fingerprints and the set of digital signature values for the detected one or more advertisements in the master database.
16. The computer-implemented method as recited in claim 10, wherein the supervised detection of the one or more advertisements being done by:
probabilistically matching a second pre-defined number of digital signature values corresponding to a pre-defined number of prominent frames of the real time broadcasted media content with a stored set of digital signature values present in the master database, wherein the probabilistic matching being performed for the set of digital signature values by utilizing the sliding window algorithm;
comparing the one or more prominent frequencies and the one or more prominent amplitudes corresponding to the extracted first set of audio fingerprints with a stored one or more prominent frequencies and a stored one or more prominent amplitudes; and
determining a positive match in the probabilistically matching of the second pre-defined number of digital signature values with the stored set of digital signature values in the master database and comparing of the one or more prominent frequencies and the one or more prominent amplitudes corresponding to the extracted first set of audio fingerprints with the stored one or more prominent frequencies and the stored one or more prominent amplitudes.

Documents

Application Documents

# Name Date
1 Form 5 [09-03-2016(online)].pdf 2016-03-09
2 Form 3 [09-03-2016(online)].pdf 2016-03-09
3 Drawing [09-03-2016(online)].pdf 2016-03-09
4 Description(Complete) [09-03-2016(online)].pdf 2016-03-09
5 abstract.jpg 2016-07-14
6 201611008281-GPA-(19-07-2016).pdf 2016-07-19
7 201611008281-Form-1-(19-07-2016).pdf 2016-07-19
8 201611008281-Correspondence Others-(19-07-2016).pdf 2016-07-19
9 Form 26 [11-04-2017(online)].pdf 2017-04-11
10 201611008281-Power of Attorney-130417.pdf 2017-04-16
11 201611008281-Correspondence-130417.pdf 2017-04-16
12 201611008281-Proof of Right (MANDATORY) [15-11-2017(online)].pdf 2017-11-15
13 201611008281-OTHERS-161117.pdf 2017-11-24
14 201611008281-Correspondence-161117.pdf 2017-11-24