
Method And System For Perceptual Measurement Of Blockiness In Digital Content

Abstract: A method for managing quality of digital content is presented. Horizontal and vertical line map images are generated from an image retrieved from the digital content. Blockiness corners are detected in the image based on the horizontal and vertical line map images. A first set of blockiness corners determined to be generated due to one or more false positive causal attributes is identified from the detected blockiness corners. A second set of blockiness corners is identified as blockiness artifacts from the detected blockiness corners, where the second set is different from the first set and a total number of the second set is greater than a designated threshold. Each corner in the second set is classified into one or more types of blockiness artifacts. Information corresponding to the image, the second set of corners, and/or corresponding types of blockiness artifacts is transmitted to a quality management system (206).


Patent Information

Application #
Filing Date
08 September 2016
Publication Number
10/2018
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application
Patent Number
Legal Status
Grant Date
2023-10-16
Renewal Date

Applicants

TATA ELXSI LIMITED
ITPB Road, Whitefield, Bangalore

Inventors

1. MOHAMMED JASIR CHEEROTH
Tata Elxsi Limited ITPB Road, Whitefield, Bangalore – 560048 India
2. RAKHIL KALARITHOTTATHIL
Tata Elxsi Limited ITPB Road, Whitefield, Bangalore – 560048 India

Specification

FIELD OF THE INVENTION

[0001] The present disclosure relates generally to processing of digital content, and more particularly to a method and system for detection, quantification, and classification of blockiness in digital video content.

DESCRIPTION OF RELATED ART

[0002] Consumption of digital content has increased manifold in recent times owing to seamless connectivity to the internet. Consumers use various computing devices such as computers, mobile phones, smart televisions, and set-top boxes, among other devices, to access digital content. Examples of digital content include digital images, digital video and audio, and related media. The digital content may be transmitted either over digital video broadcast (DVB) networks or via streaming. For cost-effective and efficient transmission of the digital content over DVB networks, various encoding and compression techniques are used. Compression techniques reduce the size of the digital content prior to transmission, so that the digital content is transmitted using reduced bandwidth and consumes less storage space. Moving Picture Experts Group (MPEG)-2, H.264/MPEG-4, High Efficiency Video Coding (HEVC), Theora, RealVideo, and AOMedia Video 1 (AV1) are examples of video compression techniques that may be used to process digital content for transmission. Upon reception, the digital content is decoded and decompressed to generate the perceivable digital content at the aforementioned computing devices. However, compression techniques may introduce distortions in the digital content, thus reducing its quality. By way of example, popular video compression techniques such as MPEG-2 and H.264/MPEG-4 Advanced Video Coding (AVC) employ block motion compensation (BMC) to reduce the size of digital video files for cost-effective transmission and efficient utilization of the available bandwidth. However, if the digital video file is over-compressed, the BMC technique may introduce visual distortions, referred to as blockiness or blocking, in the frame. Blocking artifacts accentuate macroblock boundaries, making the macroblock edges visible to the naked eye; the visible edges appear as step-like discontinuities in the frame. Thus, blocking artifacts in any frame of the digital video file reduce its quality. The viewer's perception is significantly hindered when the number of blockiness artifacts in the frame is very high. Thus, it is essential to detect the presence of blockiness artifacts in digital video files.
[0003] Generally, blockiness artifacts are classified into two types: pixelation and macroblocking. When the pixels in the frame of the digital video file become visible to the naked eye, the blockiness in the frame is attributed to pixelation. When a few macroblocks of the digital video file are absent or have been replaced by other macroblocks, the blockiness in the frame is attributed to macroblocking.
[0004] A common method of eliminating blockiness artifacts is the use of de-blocking algorithms at the decoder level. However, many video files include fast-moving sequences involving frames that change at a very high rate. In such cases, the decoder might fail to apply the de-blocking algorithms accurately, leaving residual blockiness. Therefore, alternative methods have been proposed to detect and eliminate blockiness in video files.
[0005] One such method is disclosed in US patent 8542751 B2. The patent describes a method that includes identifying all the edges in a frame of a video file and detecting blockiness artifacts based on the identified edges. Certain objects containing straight lines, such as bricks, meshes, and the like, may contribute to the identified edges in the frame. Such detected edges, which are not caused by blockiness artifacts, are false positives and are undesirable. However, the patent does not disclose a mechanism for identifying and eliminating such false positives. Further, in a quality-monitoring scenario, it may be desirable to identify the type of blockiness artifact; however, US patent 8542751 B2 does not disclose any method of classifying the blockiness artifacts into pixelation and macroblocking. Moreover, the method described in the patent does not provide a metric to quantify the blockiness in the video file.
[0006] Quantification and classification of blockiness artifacts in the video file are important for monitoring the quality of the digital content transmitted over DVB networks. Quantification provides a way of measuring the intensity of blockiness in the video file. Identifying the type and intensity of the blockiness is imperative for implementing corrective measures that eliminate the blockiness artifacts from the video file. Further, false positives inflate the measured blockiness and thereby distort any quality assessment. Hence, it is desirable to develop a method and system that detect, quantify, and classify blockiness artifacts, overcoming the shortcomings of the aforementioned methods and improving the quality of the viewer's experience while viewing the digital content.

SUMMARY

[0007] An object of the current disclosure is to provide a method for managing quality of digital content. The method includes generating horizontal and vertical line map images from an image retrieved from the digital content. The method further includes detecting one or more blockiness corners in the image based on the horizontal and vertical line map images. The method includes identifying a first set of blockiness corners from the detected blockiness corners. The first set of blockiness corners is determined to be generated due to one or more false positive causal attributes. The one or more false positive causal attributes include objects such as ninety-degree corners, text, meshes, ladders, and bricks. The method includes identifying a second set of blockiness corners from the detected blockiness corners as blockiness artifacts. The second set of blockiness corners is different from the first set of blockiness corners, and a total count of the second set of blockiness corners is greater than a designated threshold. The method includes classifying the blockiness artifacts into one or more types of blockiness artifacts. The one or more types of blockiness artifacts include pixelation and macroblocking. The method further includes transmitting information corresponding to at least one of the image, the second set of blockiness corners, and the corresponding types of blockiness artifacts to a quality management system.
[0008] Another object of the present disclosure is to provide a quality management system for digital content. The quality management system includes a processor and a memory. The memory stores the digital content. The processor receives the digital content, and detects a plurality of blockiness artifacts from an image retrieved from the digital content. The processor generates horizontal and vertical line map images from the image. The processor further detects one or more blockiness corners in the image based on the vertical and horizontal line map images. The processor identifies a first set of blockiness corners from the detected blockiness corners. The first set of blockiness corners is determined to be generated due to one or more false positive causal attributes. The one or more false positive causal attributes include objects in the image such as text, meshes, bricks, and ladders. The processor identifies a second set of blockiness corners from the detected blockiness corners as blockiness artifacts. The second set of blockiness corners is different from the first set of blockiness corners. A total number of blockiness corners of the second set of blockiness corners is greater than a designated threshold. The processor classifies the blockiness artifacts into one or more types of blockiness artifacts. The one or more types of blockiness artifacts include macroblocking and pixelation. The processor transmits information related to at least one of the image, the second set of blockiness corners, and the corresponding types of blockiness artifacts to an output device.

BRIEF DESCRIPTION OF DRAWINGS

[0009] The various features, aspects, and advantages of the present system and method are set forth with particularity in the appended claims. Embodiments of the present system and method will hereinafter be described in conjunction with the appended drawings provided to illustrate, and not to limit, the scope of the claims, wherein like designations denote like elements, and in which:
[0010] FIG. 1 illustrates a schematic block diagram of a digital video broadcasting network, in accordance with an embodiment of the present disclosure;
[0011] FIG. 2 illustrates a schematic block diagram of an exemplary quality management system, in accordance with an embodiment of the present disclosure;
[0012] FIG. 3 is a flowchart illustrating an exemplary method for detection and classification of blockiness artifacts in a video file, in accordance with an embodiment of the present disclosure;
[0013] FIG. 4 is a flowchart illustrating an exemplary method for detection and removal of letterboxes in a frame of the video file, in accordance with an embodiment of the present disclosure;
[0014] FIG. 5 is a flowchart illustrating an exemplary method for detecting blockiness corners in an image extracted from the video file, in accordance with an embodiment of the present disclosure;
[0015] FIGS. 6A-6B are flowcharts illustrating an exemplary method for detecting blockiness corners in the image that are generated due to text, in accordance with an embodiment of the present disclosure;
[0016] FIGS. 7A-7B are flowcharts illustrating an exemplary method for detecting mesh-like patterns in the image that correspond to false positive causal attributes, in accordance with an embodiment of the present disclosure; and
[0017] FIG. 8 is a flowchart illustrating an exemplary method for classifying blockiness artifacts of the image into pixelation or macroblocking, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0018] The following description presents exemplary embodiments of a method and a quality management system for detection, classification, and quantification of blockiness artifacts in digital content. The digital content includes, but is not limited to, digital videos, digital audio, Internet Protocol videos, digital images, and/or other digital data for use in various applications such as broadcast, over-the-top (OTT) transmission, Internet Protocol Television (IPTV), connected vehicle navigation, telehealth, and/or telemedicine. Particularly, the embodiments described herein disclose exemplary methods and systems that detect both pixelation and macroblocking anomalies, while excluding features that resemble blockiness artifacts, such as nets, bricks, and other related features, to prevent false-positive identification of anomalies. Additionally, the exemplary methods and systems described herein provide a single consolidated quality measure of digital content by considering and measuring the blockiness present in the digital content.
[0019] It may be noted that the components and the method steps have been described in the present specification to show specific details that are pertinent for an understanding of the embodiments described herein. Furthermore, the various components and the method steps have been represented so as not to obscure the disclosure with details that will be readily apparent to those with ordinary skill in the art having the benefit of the description herein.
[0020] It may further be noted that the singular forms “a,” “an,” and “the,” as used in the specification and claims, include plural references unless the context clearly dictates otherwise. For example, the term “an article” may include a plurality of articles unless the context clearly dictates otherwise.
[0021] Additionally, those with ordinary skill in the art will appreciate that the elements in the figures are illustrated for simplicity and clarity, and are not necessarily drawn to scale. For example, the dimensions of some of the elements in the FIGS.1 and 2 may be exaggerated, relative to other elements, in order to improve the understanding of the embodiments described herein.
[0022] Further, there may be additional components described in the following application that are not depicted in one of the drawings. In the event of such component being described, but not depicted in a drawing, the absence of such a drawing should not be considered as an omission of the component from the specification.
[0023] It may be noted that the embodiments described herein are merely exemplary and can be embodied in various forms in alternative implementations. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the embodiments described herein in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the embodiments of the quality management system and the present method with reference to FIGS. 1-8.
[0024] FIG. 1 illustrates a schematic block diagram of an exemplary communications network such as a digital video broadcasting (DVB) network (100) that may advantageously employ an embodiment of the present quality management system. In one embodiment, the DVB network (100) includes a content management system (102), a satellite communication system (104), Internet Protocol (IP) distribution centers (106), and multiple user devices (108). The content management system (102) includes a content provider (110), a studio (112), an encoder (114), a storage system (116), a content curation system (118), and a content verification system (120).
[0025] The content provider (110) generates and transmits digital content such as sports, news, music videos, television series, movies, and the like to the studio (112). The studio (112) employs the encoder (114) to encode the digital content to generate encoded digital content. The studio (112) subsequently transmits the encoded digital content to the storage system (116). The storage system (116) may include data centers, cloud storage servers, data servers and similar systems. In an example, when the DVB network (100) broadcasts a football match, the digital content is the video file including video frames corresponding to the football match. The content provider (110), in this particular case, uses video cameras to record the match in a video format, and the video-recorded match is transmitted to the studio (112) for post processing. After post processing by the studio (112), the video recording of the match is transmitted to the encoder (114), which encodes the video recording of the match into one of MPEG2, H.264/MPEG4 AVC and/or other video formats to generate an encoded video. The encoded video is stored in the storage system (116), and may further be transmitted to the satellite communication system (104), and eventually to various viewers using the user devices (108).
[0026] The content curation system (118) and the content verification system (120) curate and verify the digital content, respectively, that is stored in the storage system (116). The content curation system (118) discovers, gathers, and sorts the digital content into different content categories based on various parameters such as location, social rating, and the like. Further, the content curation system (118) may funnel the digital content belonging to a particular category based on its relevancy to the user devices (108).
[0027] The content verification system (120) ensures that the quality of the digital content meets a desired quality. Typically, the content verification system (120) supports a wide range of codecs for ensuring the quality of the digital content. Further, the content verification system (120) verifies whether accurate digital content is being transmitted to the user devices (108). The satellite communication system (104) receives the encoded digital content, and in turn, transmits the encoded digital content to the IP distribution centers (106), which further distribute the encoded digital content to the various user devices (108).
[0028] There is a possibility that the digital content is affected by blockiness at many points in the DVB network (100). Thus, to ensure that the quality of the digital content is not impeded by blockiness, a best practice is to monitor the quality of the digital content at strategic points in the DVB network (100). Therefore, multiple quality monitoring points may be defined within the DVB network (100) at which the quality of the digital content is to be monitored to detect, classify, and quantify blockiness in the digital content, and consequently provide a better viewing experience to viewers. In one implementation, the storage system (116), the content curation system (118), the content verification system (120), the satellite communication system (104), the IP distribution centers (106), and the user devices (108) may be selected as the quality monitoring points for managing the quality of the digital content in the DVB network (100) as the digital content moves along a defined content delivery path. One such content delivery path including a quality monitoring point is described in greater detail in conjunction with FIG. 2 below.
[0029] FIG. 2 illustrates a schematic block diagram of an exemplary content delivery path (200) as the digital content moves through a communications network, such as the DVB network (100) of FIG. 1. The content delivery path (200) includes a video transmission unit (202), a decoder (204), a quality management system (206), a video rectifier (208) (referred to hereinafter as a rectification system), and a report generator (210). The quality management system (206) further includes a processor (212) and a memory (214) to monitor the quality of the digital content.
[0030] In one embodiment, the digital content generated or received at a studio such as the studio (112) of FIG. 1 is forwarded to the video transmission unit (202) that employs the encoder (114) of FIG. 1 to generate the encoded digital content for further transmission. In one example, the encoded digital content includes an encoded video file “ENC_VID”. The video transmission unit (202) may transmit the encoded video file “ENC_VID” to the decoder (204). The decoder (204) and the quality management system (206) may be implemented in any of the aforementioned quality monitoring points, such as the storage system (116), the content curation system (118), the content verification system (120), the satellite communication system (104), the IP distribution centers (106), and the user devices (108). Since the encoded video file “ENC_VID” is encoded using techniques such as block motion compensation, the encoded video file “ENC_VID” may include blockiness artifacts. Accordingly, the decoder (204) typically includes a de-blocking filter for the removal of the blockiness artifacts from the encoded video file “ENC_VID”. However, the de-blocking filter may fail to accurately apply de-blocking algorithms to the encoded video file “ENC_VID,” for example, when the encoded video file “ENC_VID” includes fast-moving sequences involving frames that change at a very high rate. Thus, the decoder (204) outputs a video file “VID_DAT” including certain blockiness artifacts to the quality management system (206). The processor (212) receives the video file “VID_DAT”, detects the blockiness artifacts in the video file “VID_DAT”, classifies the detected blockiness artifacts into macroblocking and pixelation, and generates blockiness data “BLOC_DAT”. The report generator (210) generates a blockiness report “REP” based on the blockiness data “BLOC_DAT” and outputs the blockiness report “REP” back to the quality monitoring points of the DVB network (100).
[0031] Additionally, the blockiness data “BLOC_DAT” is stored in the memory (214) and may be further used for rectification, report generation, and other suitable applications. In one implementation, the video rectifier (208) receives the blockiness data “BLOC_DAT”, and implements corrective measures to improve the quality of the video file “VID_DAT” by rectifying and eliminating the blockiness artifacts. Thus, the video rectifier (208) generates a rectified video file “RECT_VID”. The video rectifier (208) subsequently transmits the rectified video file “RECT_VID” to the video transmission unit (202). In some embodiments, the blockiness data “BLOC_DAT” may be transmitted to the report generator (210) that generates a blockiness report “REP” for use at one or more of the quality monitoring points in the DVB network (100).
[0032] In one example, the quality management system (206) may be implemented within the storage system (116). Accordingly, the video transmission unit (202) may be configured to transmit the encoded video file “ENC_VID” generated by the encoder (114) to the storage system (116). The storage system (116) may be associated with the decoder (204) that decodes the encoded video file “ENC_VID” to generate the video file “VID_DAT.” Additionally, the storage system (116) may also be operatively coupled to the processor (212) and the memory (214) of FIG. 2 to implement other aspects of the quality management system (206). In one embodiment, the processor (212) receives and processes the video file “VID_DAT” for detecting, classifying, and quantifying the blockiness artifacts in the video file “VID_DAT.” The processor (212) further generates information corresponding to the blockiness data “BLOC_DAT” in the video file “VID_DAT” and transmits the blockiness data “BLOC_DAT” information, for example, to at least one of the video rectifier (208) for correction and/or the report generator (210) of FIG. 2 for reporting the erroneous information.
[0033] Similar monitoring approaches may be implemented by a processing device such as the processor (212) to monitor content quality in a laboratory environment, on the go via mobile apps, over a web interface, and/or via cloud computing. Such a quality-of-experience (QoE) monitoring system generates monitoring data that is collected from various points in a content or video delivery path and may be stored in a cloud server and/or an associated database. The monitoring data is analyzed to generate analytical insights that may be provided to a QoE monitoring dashboard in near real-time. The QoE monitoring dashboard displays the analytical insights and/or the monitoring data to allow a user to select desired reporting and/or corrective action. Additionally, data from the QoE monitoring dashboard may also be communicated to other associated systems or applications to enable desired actions such as remote monitoring/testing, generating alerts on detection of blockiness artifacts, report fetching from the report generator (210), and so on. In cases where an analysis of the monitoring data indicates the need for a programmatic action, such as generating alerts or selecting corrective algorithms, the processor (212) executes the desired programmatic actions.
[0034] Furthermore, in certain embodiments, the blockiness data “BLOC_DAT” generated by the processor (212) may be fed to a machine learning system that may use the monitoring data to predict future occurrences of blockiness artifacts and corresponding resolutions. For example, in one embodiment, the processor (212) communicates the blockiness data “BLOC_DAT” to an artificial intelligence system (216). The artificial intelligence system (216) may use the blockiness data “BLOC_DAT” for pattern recognition, learning, and the like. The pattern recognition or learning information may help in identifying the underlying causes of the blockiness artifacts. The causal information may then be used in various applications such as error correction, selection of appropriate encoding/decoding schemes, or routing to improve digital content delivery, thereby enhancing user experience.
[0035] Thus, implementing embodiments of the quality management system (206) at various points in a digital content delivery path facilitates assessment of perceived video quality, mitigating the effects of blocking artifacts and minimizing degradation of the quality of user experience when viewing the digital content. For example, the method may be implemented at a content provider’s end, at points of storage or verification, at distribution centers, during direct-to-home transmission, and/or at end-point devices. Certain exemplary methods employed by the quality management system (206) for detecting, classifying, and quantifying the blockiness artifacts in digital content are described in greater detail with reference to FIGS. 3-8.
[0036] The exemplary method is illustrated as a collection of blocks in a logical flow chart, which represents operations that may be implemented in hardware, software, or combinations thereof. The various operations are depicted in the blocks to illustrate the functions that are performed during various phases of the exemplary method. In the context of software, the blocks represent computer instructions that, when executed by one or more processors, perform the recited operations. The order in which the exemplary method is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein.
[0037] FIG. 3 illustrates a flowchart (300) depicting an exemplary method for detection and classification of blockiness artifacts in digital content, for example, transmitted as a video file that includes a collection of time-separated images. In one embodiment, the method may be executed by the quality management system (206) of FIG. 2 to identify and classify blockiness artifacts in the video file for use in improving the content delivery mechanism to enhance end user experience.
[0038] Accordingly, at step (302), the video file is processed to retrieve a desired image from the video file. At step (304), a grayscale image is generated from the image. Further, a first binary image is obtained from the grayscale image, for example, using adaptive thresholding. At step (306), blockiness corners present in the first binary image are detected. An exemplary methodology for detecting the blockiness corners is described in detail with reference to FIG. 5.
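Steps (302)-(306) may be illustrated with a minimal pure-Python sketch. The luma weights and the neighbourhood size used below are illustrative assumptions; the specification names adaptive thresholding but does not fix a particular conversion or window:

```python
def to_grayscale(rgb):
    """Convert an RGB image (rows of (r, g, b) tuples) to grayscale.
    The ITU-R BT.601 luma weights are an assumption; the specification
    does not prescribe a particular conversion."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def adaptive_threshold(gray, block=1, c=0):
    """Binarize by comparing each pixel against the mean of its local
    (2*block+1) x (2*block+1) neighbourhood, a simple form of the
    adaptive thresholding mentioned at step (304)."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - block), min(h, y + block + 1))
            xs = range(max(0, x - block), min(w, x + block + 1))
            vals = [gray[j][i] for j in ys for i in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = 255 if gray[y][x] > mean - c else 0
    return out
```

In practice a library such as OpenCV would perform these conversions far more efficiently; the sketch only mirrors the order of the operations described above.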
[0039] At step (308), a check is performed to determine whether a total number of blockiness corners is greater than a first threshold. In one example, a value of the first threshold may be selected to be 250 when processing an image with dimensions of 1366x768 pixels. At step (308), if it is determined that the total number of blockiness corners is less than the first threshold, step (310) is executed. At step (310), the retrieved image is identified as a frame not affected by blockiness artifacts. Alternatively, at step (308), if it is determined that the total number of blockiness corners is greater than the first threshold, step (312) is executed. At step (312), a check is performed to determine whether the blockiness corners in the first binary image are generated due to the presence of text in the originally retrieved image. An exemplary method for determining whether the blockiness corners are generated due to text is described in greater detail with reference to FIGS. 6A and 6B. If it is determined that the blockiness corners in the first binary image are generated due to the presence of text, step (310) is executed and the retrieved image is identified as a frame not affected by blockiness artifacts.
[0040] Alternatively, at step (312), if it is determined that the blockiness corners in the first binary image are not generated due to the presence of text, step (314) is executed. At step (314), the grayscale image undergoes intensity equalization, wherein the intensity distribution of the grayscale image is equalized, and a second binary image is generated from the equalized grayscale image. At step (316), a check is performed to determine whether the blockiness corners in the second binary image are generated due to a mesh-like pattern. An exemplary method for determining whether the blockiness corners in the originally retrieved image are generated due to mesh-like patterns is described in greater detail with reference to FIGS. 7A and 7B. If it is determined at step (316) that the blockiness corners are generated due to a mesh-like pattern, step (310) is executed and the retrieved image is identified as a frame not affected by blockiness artifacts. The blockiness corners that are determined to be generated due to the presence of text or a mesh-like pattern in the retrieved image are referred to herein as a first set of blockiness corners.
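One common reading of the intensity equalization at step (314) is classical histogram equalization; the specification gives no formula, so the CDF-based mapping below is an assumption:

```python
def equalize(gray, levels=256):
    """Histogram-equalize a grayscale image (rows of ints in [0, levels)).
    Each intensity level is remapped through the cumulative distribution
    of the image histogram, spreading intensities over the full range."""
    h, w = len(gray), len(gray[0])
    hist = [0] * levels
    for row in gray:
        for v in row:
            hist[v] += 1
    total = h * w
    cdf, run = [0] * levels, 0
    for i in range(levels):
        run += hist[i]
        cdf[i] = run
    # Map each level to round((levels - 1) * cdf / total).
    lut = [round((levels - 1) * cdf[i] / total) for i in range(levels)]
    return [[lut[v] for v in row] for row in gray]
```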
[0041] However, at step (316), if it is determined that the blockiness corners are not generated due to a mesh-like pattern, step (318) is executed. At step (318), the remaining blockiness corners are identified as blockiness artifacts, and are classified as macroblocking artifacts and/or pixelation artifacts. These remaining blockiness corners are referred to herein as a second set of blockiness corners. An exemplary method for classification of the remaining blockiness corners in the image as macroblocking or pixelation artifacts is described in greater detail with reference to FIG. 8. Classification of the blockiness corners provides more specific information that may be used by an associated system to accurately identify causal factors and/or to select corrective actions that are best suited to remedy the identified causal factors. Quick and accurate identification and implementation of the corrective actions improve the digital content quality, in turn enhancing the user experience. Certain aspects of the overall method for detection and classification of blockiness artifacts in digital content presented in FIG. 3 for use in various applications for enhancing user experience are described in greater detail with reference to FIGS. 4-8.
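The decision flow of steps (308)-(318) can be summarised in a short sketch. The callables below stand in for the methods of FIGS. 6A-6B, 7A-7B, and 8, which are described separately; the threshold of 250 is the example value given above for a 1366x768 image:

```python
FIRST_THRESHOLD = 250  # example value from the description, for 1366x768 images

def analyse_frame(corners, caused_by_text, caused_by_mesh, classify):
    """Mirror of steps (308)-(318). `corners` is the list of detected
    blockiness corners; the three callables are placeholders for the
    text check, the mesh check, and the artifact classifier."""
    if len(corners) <= FIRST_THRESHOLD:
        return "no blockiness"                  # step (310)
    if caused_by_text(corners):                 # step (312): false positives
        return "no blockiness"
    if caused_by_mesh(corners):                 # step (316): false positives
        return "no blockiness"
    # Step (318): remaining corners are artifacts; classify each one.
    return [classify(c) for c in corners]
```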
[0042] Generally, digital content, such as movies shot in a widescreen aspect ratio, is pre-processed before transmission to suit varied user device configurations. For example, in one embodiment, the video file undergoes letterboxing to convert the video file to standard width video formats that preserve the original aspect ratio of the video file on user displays of different sizes. However, the letterboxing may cause false edge detection at the border between the matte and the original video frame. Therefore, in order to prepare the video file for detecting and classifying the blockiness corners, the letterboxing is removed. An exemplary method for removing the letterboxes from the frame is described in greater detail with reference to FIG. 4.
[0043] FIG. 4 illustrates a flowchart (400) depicting an exemplary method for detection and removal of the horizontal and vertical letterboxes from an image retrieved from a video file. As previously noted, letterboxing is the practice of transferring a frame of the video file in a widescreen aspect ratio to standard width video formats while preserving the video's original aspect ratio. Accordingly, at step (402), the desired image is retrieved from the video file. At step (404), intensities of all pixels in predefined areas, for example, top, bottom, left and right areas of the image are determined. Additionally, a predefined width of the predefined top, bottom, left and right areas including the horizontal letterboxes is calculated. Similarly, a predefined width of any vertical letterboxes may also be calculated. The predefined width, for example, may be calculated using equation (1):

L = Ceiling((W_c * (1/AR_c - 1/AR_T)) / 2) (1)

wherein L corresponds to the predefined width, AR_c corresponds to the aspect ratio of the screen of the viewer, AR_T corresponds to a standard aspect ratio used in anamorphic theatrical showings (2.39:1), and W_c corresponds to the width of the screen of the viewer.
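By way of illustration, equation (1) may be computed as follows. This is a minimal sketch, not part of the specification; the 1366-pixel-wide 16:9 screen in the example is an assumed value chosen only for demonstration.

```python
import math

def letterbox_width(screen_width, screen_aspect, content_aspect=2.39):
    """Predefined letterbox bar width L per equation (1):
    L = Ceiling((W_c * (1/AR_c - 1/AR_T)) / 2)."""
    return math.ceil(screen_width * (1.0 / screen_aspect - 1.0 / content_aspect) / 2.0)

# Example: a 1366-pixel-wide 16:9 screen showing 2.39:1 theatrical content.
bar = letterbox_width(1366, 16 / 9)  # each matte bar is 99 rows tall
```

Because 1/AR_c exceeds 1/AR_T whenever the viewer's screen is narrower than the theatrical format, L is positive and gives the height of each matte bar.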

[0044] At step (406), mean values of the intensities of all the pixels in the predefined top, bottom, left and right areas are determined. Typically, letterboxes correspond to regions that are black in color, and thus, have zero pixel intensity. Accordingly, at step (408), a check is performed to determine if the mean pixel intensities in the predefined top, bottom, left and right areas are equal to zero. If it is determined that the mean pixel intensities in the predefined top, bottom, left and right areas are equal to zero, steps (410) and (412) are executed. At step (410), it is determined that the image includes a vertical or a horizontal letterbox in the predefined area having the mean intensity equal to zero. Subsequently, at step (412), one or more of the identified letterboxes are eliminated from the frame to retrieve the image. Execution subsequently moves to step (414).
[0045] Alternatively, at step (408), if it is determined that the mean pixel intensities in the predefined top, bottom, left and right areas are not equal to zero, the processor (212) determines that the image does not include a letterbox in the predefined top, bottom, left and/or right areas, and step (414) is directly executed. At step (414), the image is converted into the grayscale image. At step (416), the grayscale image is converted to the first binary image, for example, by using adaptive thresholding. Once the letterboxes are removed, the resulting image undergoes further processing to detect the blockiness corners.
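A minimal sketch of the mean-intensity letterbox check of steps (404) through (412) follows, assuming a grayscale frame stored as a list of pixel rows. Only the horizontal (top and bottom) bars are handled here; vertical bars would be treated symmetrically on columns.

```python
def strip_letterboxes(image, bar_width):
    """Remove top/bottom row bands whose mean pixel intensity is zero,
    i.e. black horizontal letterbox bars of the given predefined width.
    `image` is a list of rows of grayscale intensities."""
    def mean(rows):
        pixels = [p for row in rows for p in row]
        return sum(pixels) / len(pixels)

    top, bottom = image[:bar_width], image[-bar_width:]
    start = bar_width if mean(top) == 0 else 0          # step (408) on top band
    end = len(image) - bar_width if mean(bottom) == 0 else len(image)
    return image[start:end]                             # step (412)

# A 6-row frame with 2-row black bars above and below the content.
frame = [[0, 0, 0]] * 2 + [[120, 130, 125]] * 2 + [[0, 0, 0]] * 2
cropped = strip_letterboxes(frame, 2)
```

After the call, only the two content rows remain for grayscale conversion and thresholding.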
[0046] FIG. 5 illustrates a flowchart (500) depicting an exemplary method for generating horizontal and vertical line maps for detecting the blockiness corners in the first binary image generated at step (416) of FIG. 4. Blockiness artifacts in the first binary image are often introduced during encoding of the video file using macroblocks. The macroblocks are typically rectangular, hence detection of the blockiness corners in the first binary image proves useful in detecting the blockiness artifacts in the image. Generally, corners or edges of the macroblocks are step edges. Calculating the gradient of the first binary image, therefore, can extract these edges efficiently.
[0047] Accordingly, at step (502), one or more horizontal and vertical gradient images are generated from the first binary image by calculating difference of each pixel in the first binary image with adjacent vertical and horizontal pixels in the first binary image. The horizontal and vertical gradient images are indicative of the location of the horizontal and vertical lines in the first binary image. At step (504), horizontal and vertical line map images are generated from horizontal and vertical gradient images. Specifically, vertical lines present in the first binary image are determined using the horizontal gradient image. Similarly, horizontal lines present in the first binary image are determined using the vertical gradient image. Determination of the horizontal and vertical lines is achieved, for example, by using the Hough transform. At step (506), a pixel-by-pixel multiplication of the horizontal and vertical line map images is performed to generate a corner map image. The corner map image indicates the locations of the intersections of the horizontal and vertical lines, which are indicative of location of potential blockiness corners in the image. At step (508), a total number of blockiness corners (n) in the corner map image, and coordinates of each of the blockiness corners are determined and stored in an associated memory device such as memory (214).
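The steps above can be sketched as follows. This is an illustrative simplification: instead of a Hough transform, a column (row) is marked as a vertical (horizontal) line whenever any horizontal (vertical) gradient is nonzero there, and the line maps are then multiplied to locate candidate corners.

```python
def corner_map(binary):
    """Sketch of steps (502)-(508): gradient images, line maps, and their
    pixel-by-pixel product as a corner map over a small binary image."""
    h, w = len(binary), len(binary[0])
    # Horizontal gradient: difference of each pixel with its right neighbour.
    gx = [[abs(binary[y][x + 1] - binary[y][x]) for x in range(w - 1)] for y in range(h)]
    # Vertical gradient: difference of each pixel with the pixel below.
    gy = [[abs(binary[y + 1][x] - binary[y][x]) for x in range(w)] for y in range(h - 1)]

    v_cols = {x for y in range(h) for x in range(w - 1) if gx[y][x]}  # vertical lines
    h_rows = {y for y in range(h - 1) for x in range(w) if gy[y][x]}  # horizontal lines

    # Corners lie at intersections of detected vertical and horizontal lines.
    return [(y, x) for y in sorted(h_rows) for x in sorted(v_cols)]

# A single bright 2x2 block in a dark frame yields step edges on all sides.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
corners = corner_map(img)  # four block corners detected
```

The length of `corners` corresponds to the total number of blockiness corners (n) stored at step (508).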
[0048] Further, at step (510), a check is performed to determine if the total number of blockiness corners (n) in the corner map image is greater than the first threshold. If it is determined, at step (510), that the total number of blockiness corners (n) is less than the first threshold, step (512) is executed. At step (512), the frame is determined to be not affected by blockiness artifacts. If it is determined, at step (510), that the total number of blockiness corners (n) is greater than the first threshold, it is determined whether the detected blockiness corners are actually blockiness artifacts, or if the blockiness corners are generated due to false positive causal attributes such as an object comprising a ninety degree corner, text, mesh, ladder, or brick. An exemplary method for detecting whether the blockiness corners are generated due to text is described in greater detail with reference to FIG. 6A.
[0049] FIG. 6A illustrates a flowchart (600) depicting an exemplary method for detection of text, a false positive causal attribute, in the image. Detection of the text in the image is necessary to avoid false positives when detecting the blockiness artifacts. Text is one of the prominent areas in the image where false corners are detected because of the sharp corners created, for example, by letters of the English alphabet. Accordingly, the present method employs a technique based on morphological transformation in conjunction with contour detection of connected components to detect the presence of text in the image.
[0050] To that end, at step (602), the retrieved image is converted into the grayscale image. At step (604), a morphological gradient image is generated from the grayscale image. The morphological gradient image is calculated to enhance the edges in the grayscale image. In one exemplary implementation, a 3x3 ellipse kernel is selected for the morphological gradient image calculation in order to reduce the performance overhead. Further, at step (606), the morphological gradient image is converted to the second binary image. At step (608), a morphological closing operation is performed on the second binary image to calculate connected components for the text in the second binary image. Specifically, the morphological closing operation connects adjacent horizontal components in the second binary image. Typically, most letters of the text are connected by the morphological closing operation. At step (610), the connected components are grouped together for detecting text contours. At step (612), minimum area rectangles bounding each of the detected text contours are calculated. The potential text contours bounded within the minimum area rectangles are further evaluated to confirm identification of the blockiness corners caused due to the text to prevent false positive scenarios.
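The morphological closing at step (608) can be illustrated in one dimension as dilation followed by erosion with a small window; this is a hedged sketch only, as an actual implementation operates in two dimensions with a structuring element such as the 3x3 ellipse kernel mentioned above.

```python
def closing_1d(bits, k=3):
    """One-dimensional sketch of the morphological closing at step (608):
    dilation followed by erosion with a k-wide window, which fills runs of
    background pixels narrower than the window and thereby connects
    adjacent letter components."""
    n, r = len(bits), k // 2
    # Dilation: a pixel is set if any pixel in its window is set.
    dilated = [1 if any(bits[max(0, i - r): i + r + 1]) else 0 for i in range(n)]
    # Erosion: a pixel stays set only if its whole window is set.
    return [1 if all(dilated[max(0, i - r): i + r + 1]) else 0 for i in range(n)]

# Two letter strokes separated by a one-pixel gap merge into one connected
# component; the wide three-pixel gap survives.
row = [1, 1, 0, 1, 1, 0, 0, 0, 1]
connected = closing_1d(row)
```

The merged runs in `connected` correspond to the connected components grouped at step (610) before contour detection.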
[0051] FIG. 6B illustrates a flowchart (613) depicting an exemplary method for evaluating each minimum area rectangle for confirming identification of the blockiness corners and discarding the blockiness corners caused due to text. To that end, at step (614), a series of verification steps (616) to (630) are executed for each of the minimum area rectangles bounding the identified text contours to determine if the minimum area rectangle actually bounds text or other features. In one embodiment, each of the minimum area rectangles is evaluated based on certain designated criteria to identify if it includes text.
[0052] In an exemplary implementation, the first criterion may be selected by considering the width of the popular fonts in the English language. A second threshold for a percentage of the text contributing to the minimum area rectangle can be selected to minimize false detection. In one implementation, a value of 50% is set as the second threshold based on an extensive analysis of texts from a set of test images, specifically based on a ratio of pixels corresponding to the text and the minimum area rectangle. Accordingly, at step (616), a check is performed to determine for each minimum area rectangle whether the ratio of area of a text contour to its corresponding minimum area rectangle is less than the second threshold. If it is determined, at step (616), that the ratio of area of the text contour to its corresponding minimum area rectangle is greater than the second threshold, step (618) is executed. Specifically, at step (618), such a minimum area rectangle is determined to be generated due to text, and therefore, is discarded and does not undergo any further processing. However, at step (616), if it is determined that the ratio of area of the text contour to its corresponding minimum area rectangle is less than the second threshold, step (620) is executed.
[0053] At step (620), a check is performed to determine if each of the remaining minimum area rectangles satisfies a second criterion, that is, whether height of the minimum area rectangle is greater than its width. If it is determined, at step (620), that the height of a minimum area rectangle bounding the text contour is less than the width thereof, step (618) is executed and the minimum area rectangle is discarded. However, at step (620), if it is determined that the height of the minimum area rectangle bounding the text contour is greater than the width thereof, step (622) is executed. At step (622), a check is performed to determine if the minimum area rectangle also satisfies a third criterion, that is, whether the area of the minimum area rectangle is less than a third threshold. For example, a minimum area rectangle greater than 8x8 = 64 pixel square may be assumed to be text, since text that uses a matrix with fewer than 8x8 pixels for a single letter may be difficult to read. In an exemplary implementation, the third threshold is designated to be 0.25, that is, the area of the minimum area rectangle is expected to be less than one-fourth of the area of an entire image, for example having dimensions 1366x768. At step (622), if it is determined that the area of a minimum area rectangle is greater than the third threshold, step (618) is executed and the minimum area rectangle is discarded. However, at step (622), if it is determined that the area of the minimum area rectangle is less than the third threshold, step (624) is executed.
[0054] At step (624), a difference N1 between the total number of blockiness corners (n) in the image and a number of blockiness corners within the minimum area rectangle being evaluated is determined. At step (626), a fourth criterion is verified for the minimum area rectangle. The fourth criterion involves a determination of whether the difference N1 is less than the first threshold. If it is determined, at step (626), that the difference N1 is less than the first threshold, step (628) is executed. At step (628), the blockiness corners in the minimum area rectangles are determined to be generated due to the text in the image. At step (630), these blockiness corners are, therefore, discarded as false positives. However, if it is determined, at step (626), that the difference N1 is greater than the first threshold, step (632) is executed. At step (632), the blockiness corners in the minimum area rectangles are determined to be generated due to macroblocking and/or pixelation.
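The text-rectangle checks of FIG. 6B can be sketched as a single predicate. The function name and signature are illustrative; the default thresholds follow the examples in the description (a 50% contour-to-rectangle ratio and one-fourth of a 1366x768 image), and real values may be tuned differently.

```python
def is_text_rectangle(contour_area, rect_w, rect_h, corners_inside,
                      total_corners, first_threshold,
                      ratio_threshold=0.5, area_threshold=0.25 * 1366 * 768):
    """Sketch of steps (616)-(626) of FIG. 6B: a minimum area rectangle is
    treated as text (and its corners discarded) when any designated
    criterion flags it."""
    rect_area = rect_w * rect_h
    if contour_area / rect_area > ratio_threshold:   # step (616): dense contour
        return True
    if rect_h < rect_w:                              # step (620): wider than tall
        return True
    if rect_area > area_threshold:                   # step (622): too large
        return True
    # Step (626): text only if removing these corners would drop the
    # remaining corner count N1 below the first threshold.
    return total_corners - corners_inside < first_threshold

# A dense contour (ratio 0.6) is flagged as text; a sparse one is not.
dense = is_text_rectangle(30, 5, 10, 3, 100, 20)
sparse = is_text_rectangle(10, 5, 10, 3, 100, 20)
```

Rectangles for which the predicate is false retain their corners for the macroblocking/pixelation classification of FIG. 8.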
[0055] Further, FIG. 7A illustrates a flowchart (700) depicting an exemplary method for detection of a mesh-like pattern, which corresponds to another false positive causal attribute, in the image. Since mesh-like patterns can cause horizontal and vertical lines in the image, these patterns may also generate corners in the image. Presence of a mesh-like pattern, thus, is a possible cause for false positive detection of the blockiness artifacts. Therefore, the present method determines if the detected blockiness corners are generated due to mesh-like pattern resulting from presence of objects such as bricks, ladders, and nets, or by actual blockiness artifacts.
[0056] At step (702), the retrieved image is converted into the grayscale image. At step (704), the grayscale image undergoes intensity equalization, and is subsequently converted into a third binary image, for example, using adaptive thresholding. At step (706), the third binary image undergoes a morphological closing operation to calculate connected components for detecting mesh-like patterns present in the third binary image. Step (706) ensures that most of the identified mesh-like patterns detected in the image are connected together. Subsequently, at step (708), potential mesh contours encompassing the connected components for the mesh-like pattern are generated. Further, at step (710), minimum area rectangles for each of the mesh contours are generated. The mesh contours bounded within the minimum area rectangles are further evaluated to confirm identification of the blockiness corners caused due to the mesh-like patterns to prevent false positive scenarios.
[0057] FIG. 7B illustrates a flowchart (711) depicting an exemplary method for further evaluation of minimum area rectangles bounding potential mesh-like patterns in the image. In particular, the mesh contours of FIG. 7A are further evaluated to identify actual and false positive blockiness artifacts in the image based on one or more criteria. At step (712), a check is performed to verify a fifth criterion, that is, to determine whether a total number of blockiness corners generated due to potential mesh-like patterns is greater than a fourth threshold. For example, in one implementation, the fourth threshold may be 600 blockiness corners. In one example, the fourth threshold may be determined via trial and error. If it is determined, at step (712), that the total number of blockiness corners is greater than the fourth threshold, step (714) is executed. At step (714), the blockiness corners are determined to be generated due to a mesh-like pattern. Therefore, at step (716), these blockiness corners are discarded from any further processing. However, at step (712), if it is determined that the total number of blockiness corners is less than the fourth threshold, steps (717) and (718) are executed.
[0058] At step (717), the binary image is divided into a desired number of regions. Further, step (718) entails verification of a sixth criterion that corresponds to determining concentration of meshes in the multiple regions. However, if the mesh-like pattern spreads across more than one region, it may lead to impaired calculation of mesh concentration, leading to inaccurate classification of the mesh as blockiness artifacts. To avoid the false calculation of density due to the spread of the mesh-like pattern across adjacent regions, the method verifies the density of two adjacent regions as well. Accordingly, at step (718), it is determined whether a count of mesh contours in a particular region and an adjacent region is greater than a fifth threshold, for example 50. If it is determined, at step (718), that a count of mesh contours in a particular region and an adjacent region is greater than the fifth threshold, step (714) is executed and the blockiness corners are determined to be generated due to a mesh-like pattern, and therefore, are discarded from any further processing at step (716). However, at step (718), if it is determined that a count of mesh contours in the particular region and/or an adjacent region is less than the fifth threshold, step (720) is executed.
[0059] At step (720), a check is performed to verify a seventh criterion, that is, to determine for each of the mesh contours, whether its area is between a sixth threshold and a seventh threshold. If it is determined, at step (720), that the area of a particular mesh contour lies between the sixth and seventh thresholds, step (714) is executed, that is, the blockiness corners are determined to be generated due to a mesh-like pattern, and therefore, are discarded from any further processing. However, at step (720), if it is determined that the area of a particular mesh contour lies outside the sixth and seventh thresholds, step (722) is executed. In one embodiment, if the area of the mesh contour lies between designated threshold values of 16 and 5000 pixel square, it is determined that the blockiness corners are due to mesh and are discarded. At step (722), a check is performed to verify an eighth criterion, that is, to determine whether areas of each of the minimum area rectangles bounding the potential mesh contours are greater than an eighth threshold. In one example, the eighth threshold may be selected as 1.9 times the contour area for a square mesh. If it is determined, at step (722), that the area of a particular minimum area rectangle is greater than the eighth threshold, step (714) is executed as the corners are determined to be generated due to a mesh-like pattern, and therefore, are discarded from any further processing. Thus, at step (716), these blockiness corners are discarded as false positives.
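The mesh checks of FIG. 7B can likewise be sketched as a predicate. The function name, signature, and the interpretation of `region_count` (mesh contours in a region and its adjacent region combined) are illustrative; the default thresholds follow the examples in the description (600 corners, a density of 50, contour areas between 16 and 5000 pixel square, and 1.9 times the contour area).

```python
def is_mesh(total_mesh_corners, region_count, contour_area, rect_area,
            corner_threshold=600, density_threshold=50,
            area_low=16, area_high=5000, rect_factor=1.9):
    """Sketch of steps (712)-(722) of FIG. 7B: blockiness corners are
    attributed to a mesh-like pattern when any criterion flags them."""
    if total_mesh_corners > corner_threshold:        # step (712): too many corners
        return True
    if region_count > density_threshold:             # step (718): dense region pair
        return True
    if area_low < contour_area < area_high:          # step (720): mesh-sized contour
        return True
    return rect_area > rect_factor * contour_area    # step (722): sparse bounding box

# 700 corners exceed the fourth threshold, so the pattern is treated as mesh.
flagged = is_mesh(700, 0, 0, 0)
```

Corners surviving all four checks proceed to the classification of FIG. 8.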
[0060] Further, FIG. 8 is a flowchart (800) depicting an exemplary method for classifying the remaining blockiness corners in the image that are determined as not being caused by any false positive causal attribute such as presence of text or mesh-like patterns. Macroblocks including the remaining blockiness corners may be affected by either macroblocking or pixelation. A macroblock is pixelated if all pixels in the macroblock have the same luminance and chrominance value, or all columns or rows of pixels have the same luminance or chrominance value. To detect the macroblocks that are pixelated, the present method iterates through all the remaining blockiness corners that have been generated due to the blockiness artifacts.
[0061] Each of the remaining blockiness corners is iteratively evaluated for classifying the corner as being caused by macroblocking or by pixelation. Accordingly, a count representative of each corner “i” that is currently being evaluated is initialized as “1”. Further, at step (802), a check is performed to determine whether the count “i” is less than a total number “p” of remaining blockiness corners to ensure that all remaining corners are classified one by one. If it is determined, at step (802), that the count “i” is less than the total number “p,” step (804) is executed. At step (804), a standard deviation of pixel intensities of all rows and columns corresponding to each macroblock (for example, about 8x8 size) containing the ith remaining blockiness corner is calculated. At step (806), a check is performed to determine whether the standard deviation of pixel intensities for each macroblock containing the ith remaining blockiness corner is less than a ninth threshold. If it is determined, at step (806), that the standard deviation is less than the ninth threshold, step (808) is executed. In one embodiment, the ninth threshold is 0.001. Since any pixelated macroblock has nearly same luminance and chrominance values for all its pixels, if the standard deviation is less than the ninth threshold, at step (808), the ith corner is determined to be generated due to pixelation. If it is determined, at step (806), that the standard deviation is not less than the ninth threshold, step (810) is executed. At step (810), the ith corner is determined to be caused by macroblocking. Subsequently, at step (812), the count “i” is incremented by one to evaluate the next remaining corner and the method continues with step (802) until all remaining corners have been classified as pixelation or macroblocking corners. In one embodiment, the total number of remaining blockiness corners “p” may be defined as the blockiness index of the video file. 
In another embodiment, however, the individual numbers of the macroblocking and pixelation artifacts may be representative of the blockiness index of the video file. The blockiness index is indicative of the quality of the video file, and may provide useful information for subsequent analysis and selection of corrective action.
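The per-corner classification of steps (804) through (810) can be sketched as follows; the function name is illustrative, and the default threshold follows the 0.001 value from the embodiment above.

```python
def classify_corner(macroblock, threshold=0.001):
    """Sketch of steps (804)-(810): classify one remaining blockiness
    corner by the standard deviation of pixel intensities in its (e.g.
    8x8) macroblock. Near-constant blocks are labelled pixelation."""
    pixels = [p for row in macroblock for p in row]
    mean = sum(pixels) / len(pixels)
    std = (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5
    return "pixelation" if std < threshold else "macroblocking"

flat = [[128] * 8 for _ in range(8)]                              # uniform block
varied = [[(x * 7 + y * 13) % 255 for x in range(8)] for y in range(8)]

labels = (classify_corner(flat), classify_corner(varied))
```

Counting the labels over all remaining corners yields the per-type totals used for the blockiness index.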
[0062] After iterating through all remaining blockiness corners, at step (802), when it is determined that the count of the remaining corners is not less than the total number of remaining corners, step (814) is executed. At step (814), a check is done to determine if a ratio of the number of blockiness corners that are affected by pixelation to the total number of remaining blockiness corners “p” in the image is greater than a tenth threshold. In one embodiment, the tenth threshold is 0.3. At step (814), if it is determined that the ratio of the number of blockiness corners that are affected by pixelation to the total number of remaining blockiness corners in the image is greater than the tenth threshold, step (816) is executed. At step (816), it is determined that the image has been affected by pixelation. If it is determined, at step (814), that the ratio of the number of blockiness corners that are affected by pixelation to the total number of remaining blockiness corners in the image is not greater than the tenth threshold, step (818) is executed. At step (818), it is determined that the image has been affected by macroblocking.
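The frame-level decision at steps (814) through (818) reduces to a single ratio test; this sketch uses the 0.3 tenth threshold cited in the embodiment above, and the function name is illustrative.

```python
def frame_artifact(pixelation_corners, total_corners, threshold=0.3):
    """Sketch of steps (814)-(818): the frame is labelled pixelation when
    the share of pixelation corners among all remaining blockiness
    corners exceeds the tenth threshold, and macroblocking otherwise."""
    return "pixelation" if pixelation_corners / total_corners > threshold else "macroblocking"

# 4 of 10 remaining corners pixelated (ratio 0.4 > 0.3) -> pixelation.
verdict = frame_artifact(4, 10)
```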
[0063] Embodiments of the present systems and methods, thus, provide an efficient process for identifying, measuring, and determining the type of blockiness artifacts in digital content without use of a reference image. Particularly, performance of embodiments of the methods described herein for detecting and classifying actual blockiness artifacts provides a high correlation with subjective quality of experience. The disclosed method provides near real-time performance and can be used for analysis of various online or offline digital media sources. In one embodiment, the various thresholds described in the present specification are derived, for example, by using a set of over a thousand test images with dimensions of 1366x768 pixels along with different models of human visual systems.
[0064] Although specific features of various embodiments of the present system and exemplary methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments.
[0065] While various embodiments of the present system and method have been illustrated and described, it will be clear that the present system and method is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the present system and method, as described in the claims.

,CLAIMS:1. A method for managing quality of digital content, the method comprising:
generating one or more horizontal and vertical line map images from an image retrieved from the digital content;
detecting one or more blockiness corners in the image based on the one or more horizontal and vertical line map images;
identifying a first set of blockiness corners from the detected blockiness corners, wherein the first set of blockiness corners is determined to be generated due to one or more false positive causal attributes;
identifying a second set of blockiness corners from the detected blockiness corners as blockiness artifacts, wherein the second set of blockiness corners is different from the first set of blockiness corners and a total number of the second set of blockiness corners is greater than a designated threshold;
classifying each corner in the second set of blockiness corners into one or more types of blockiness artifacts; and
transmitting information corresponding to one or more of the retrieved image, the second set of blockiness corners, and the corresponding types of blockiness artifacts to a quality management system.

2. The method as claimed in claim 1, wherein detecting the one or more blockiness corners comprises:
converting the retrieved image into a binary image;
generating one or more horizontal and vertical gradient images based on the binary image;
generating the one or more vertical and horizontal line map images from the one or more horizontal and vertical gradient images, respectively; and
detecting the one or more blockiness corners by performing a pixel-by-pixel multiplication of the one or more horizontal line map images with the one or more vertical line map images.

3. The method as claimed in claim 1, further comprising:
determining a mean pixel intensity corresponding to each of one or more designated sections of the retrieved image;
detecting at least one letterbox in the one or more designated sections if the corresponding mean pixel intensity is outside a defined threshold; and
eliminating the at least one detected letterbox from the retrieved image.

4. The method as claimed in claim 1, further comprising determining that the retrieved image is not affected by blockiness artifacts upon determining that a total number of the detected blockiness corners is less than a first threshold.

5. The method as claimed in claim 1, further comprising processing each of the one or more blockiness corners to determine presence of the one or more false positive causal attributes in the retrieved image when a total number of the detected blockiness corners is greater than a first threshold, wherein the one or more false positive causal attributes comprise one or more of an object comprising a ninety degree corner, text, mesh, ladder, and brick.

6. The method as claimed in claim 5, wherein identifying the first set of blockiness corners that are generated due to text comprises:
converting the retrieved image into a grayscale image;
generating a morphological gradient image based on the grayscale image;
converting the morphological gradient image into a binary image;
calculating one or more connected components in the binary image for the one or more false positive causal attributes present in the binary image;
generating one or more contours corresponding to the one or more connected components;
generating a set of minimum area rectangles such that each contour in the one or more contours is bounded by at least one minimum area rectangle from the set of minimum area rectangles;
determining if a ratio of area of the contour to the corresponding minimum area rectangle is less than a second threshold for each minimum area rectangle in the set of minimum area rectangles;
determining if a height of the minimum area rectangle is less than a width thereof when the determined ratio is less than the second threshold;
determining if an area of the minimum area rectangle is less than a third threshold when the height of the minimum area rectangle is greater than the width thereof;
discarding the minimum area rectangle when the determined ratio is greater than the second threshold, the height of the minimum area rectangle is less than the width thereof, the area of the minimum area rectangle is greater than the third threshold, or combinations thereof;
computing a difference between a total number of the detected blockiness corners and a number of blockiness corners in the minimum area rectangle for each undiscarded minimum area rectangle in the set of minimum area rectangles;
identifying the first set of blockiness corners as corners generated due to text when the computed difference is less than the first threshold; and
eliminating the first set of blockiness corners from the one or more blockiness corners to obtain the second set of blockiness corners.

7. The method as claimed in claim 5, wherein identifying the first set of blockiness corners that are generated due to mesh attributes comprises:
generating a grayscale image from the retrieved image;
equalizing pixel intensities corresponding to a plurality of pixels in the grayscale image;
generating a binary image based on the grayscale image having equalized pixel intensities;
calculating one or more connected components in the binary image;
generating one or more contours corresponding to the one or more connected components;
generating a set of minimum area rectangles such that at least one minimum area rectangle from the set of minimum area rectangles bounds each contour in the one or more contours, wherein the set of minimum area rectangles comprises the first set of blockiness corners;
dividing the binary image into a plurality of regions;
determining if a count of the one or more contours in a selected region from the plurality of regions, and a region adjacent to the selected region is greater than a fifth threshold;
determining, for each contour in the one or more contours, if a corresponding area is between a sixth and a seventh threshold when the determined count is less than the fifth threshold;
determining, for each contour in the one or more contours, if area of a minimum area rectangle bounding the contour is determined to be greater than an eighth threshold upon determining that the area of the contour is between the sixth and seventh threshold;
identifying the first set of blockiness corners as corners generated due to mesh attributes when the area of the minimum area rectangle bounding the contour is determined to be greater than the eighth threshold; and
eliminating the first set of blockiness corners from the one or more blockiness corners in the retrieved image to obtain the second set of blockiness corners.

8. The method as claimed in claim 1, wherein classifying each corner in the second set of blockiness corners into one or more types of blockiness artifacts, comprises:
dividing the retrieved image into one or more macroblocks;
iteratively selecting each corner in the second set of blockiness corners;
iteratively identifying a macroblock from the one or more macroblocks that comprises the selected corner;
iteratively calculating a standard deviation of all pixels in the identified macroblock;
determining that the selected corner is generated due to pixelation if the standard deviation is less than a ninth threshold; and
determining that the selected corner is generated due to macroblocking if the standard deviation is greater than the ninth threshold.

9. The method as claimed in claim 8, further comprising computing a blockiness index based on one or more of a total count of corners in the second set of blockiness corners, a total count of corners in the second set of blockiness corners that are generated due to pixelation, and a total count of corners in the second set of blockiness corners that are generated due to macroblocking.

10. The method of claim 8, further comprising:
computing a ratio of a total count of corners in the second set of blockiness corners that are generated due to pixelation and a total count of corners in the second set of blockiness corners that are generated due to macroblocking;
determining that the retrieved image is affected by pixelation if the computed ratio is greater than a tenth threshold; and
determining that the retrieved image is affected by macroblocking if the computed ratio is less than the tenth threshold.
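The index of claim 9 and the ratio test of claim 10 can be combined into one sketch. Claim 9 leaves the exact index formula open, so using the total corner count as the index here is an assumption, as are the function name and the tenth-threshold value `t10`:

```python
def blockiness_summary(n_pixelation, n_macroblocking, t10):
    """Summarise the second set of blockiness corners per claims 9-10.

    Returns (index, dominant artifact type). The total corner count is used
    as the blockiness index here, which is one possible choice; the claim
    does not fix the formula.
    """
    index = n_pixelation + n_macroblocking
    # Ratio of pixelation corners to macroblocking corners (claim 10).
    ratio = n_pixelation / n_macroblocking if n_macroblocking else float("inf")
    artifact = "pixelation" if ratio > t10 else "macroblocking"
    return index, artifact
```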

11. The method as claimed in claim 1, wherein the method for managing quality of the digital content is implemented in at least one of a content provider system (110), a network storage system (116), a content distribution system (106), a content curation system (118), a content verification system (120), and an end-point user device (108).

12. A quality management system (206) for digital content, comprising:
a memory for storing the digital content; and
a processor configured to perform the steps of:
generating one or more horizontal and vertical line map images from an image retrieved from the digital content;
detecting one or more blockiness corners in the image based on the one or more horizontal and vertical line map images;
identifying a first set of blockiness corners from the detected blockiness corners, wherein the first set of blockiness corners is determined to be generated due to one or more false positive causal attributes;
identifying a second set of blockiness corners from the detected blockiness corners as blockiness artifacts, wherein the second set of blockiness corners is different from the first set of blockiness corners and a total number of the second set of blockiness corners is greater than a designated threshold;
classifying each of the second set of blockiness corners into one or more types of the blockiness artifacts; and
transmitting information corresponding to one or more of the image, the second set of blockiness corners, and the corresponding types of blockiness artifacts to an output device.

13. The quality management system (206) as claimed in claim 12, wherein the output device further comprises one or more of:
an artificial intelligence system (216) configured to receive the information, and to identify a cause for the blockiness artifacts based on the information;
a rectification system (208) configured to receive the information, and eliminate one or more of the second set of blockiness artifacts from the image based on the information; and
a report generator (210) configured to receive the information, and generate a quality report based on the information.

14. The quality management system (206) of claim 12, wherein the quality management system (206) is implemented in at least one of a content provider system (110), a network storage system (116), a content distribution system (106), a content curation system (118), a content verification system (120), and an end-point user device (108).

Documents

Application Documents

# Name Date
1 201641030686-IntimationOfGrant16-10-2023.pdf 2023-10-16
2 201641030686-PatentCertificate16-10-2023.pdf 2023-10-16
3 201641030686-Written submissions and relevant documents [23-08-2023(online)].pdf 2023-08-23
4 201641030686-Correspondence to notify the Controller [31-07-2023(online)].pdf 2023-07-31
5 201641030686-FORM-26 [31-07-2023(online)].pdf 2023-07-31
6 201641030686-US(14)-HearingNotice-(HearingDate-10-08-2023).pdf 2023-07-27
7 201641030686-FER.pdf 2021-10-17
8 201641030686-CLAIMS [07-06-2021(online)].pdf 2021-06-07
9 201641030686-CORRESPONDENCE [07-06-2021(online)].pdf 2021-06-07
10 201641030686-DRAWING [07-06-2021(online)].pdf 2021-06-07
11 201641030686-FER_SER_REPLY [07-06-2021(online)].pdf 2021-06-07
12 201641030686-FORM 3 [07-06-2021(online)].pdf 2021-06-07
13 201641030686-PETITION UNDER RULE 137 [07-06-2021(online)].pdf 2021-06-07
14 Form2 Title Page_Complete_22-06-2018.pdf 2018-06-22
15 Correspondence by Agent_Form1_22-06-2018.pdf 2018-06-22
16 Correspondence by Agent_Power of Attorney and Declaration_22-06-2018.pdf 2018-06-22
17 Description(Complete) [18-05-2017(online)].pdf 2017-05-18
18 Description(Complete) [18-05-2017(online)].pdf_262.pdf 2017-05-18
19 Drawing [18-05-2017(online)].pdf 2017-05-18
20 Assignment [18-05-2017(online)].pdf 2017-05-18
21 Form 18 [18-05-2017(online)].pdf 2017-05-18
22 Form 3 [18-05-2017(online)].pdf 2017-05-18
23 abstract 201641030686.jpg 2016-10-27
24 Power of Attorney [08-09-2016(online)].pdf 2016-09-08
25 Form 5 [08-09-2016(online)].pdf 2016-09-08
26 Form 3 [08-09-2016(online)].pdf 2016-09-08
27 Description(Provisional) [08-09-2016(online)].pdf 2016-09-08

Search Strategy

1 search(21)E_30-11-2020.pdf

ERegister / Renewals