
Dynamic Thresholding Technique For Copy Move Forgery Detection In Low Quality Images

Abstract: Disclosed is a system for detecting copy-move forgeries in low-quality images, comprising modules for data collection, preprocessing, feature extraction, and feature matching. Using a combination of grayscale conversion, denoising, and contrast enhancement, the system prepares images for deep learning analysis. A Convolutional Neural Network extracts features, which are then compared using a Siamese Network to identify forgeries. Multi-scale segmentation, dynamic thresholding, and cluster analysis localize the forgery, while validation and boundary delineation confirm and define the altered areas. The system outputs the final image with the detected forgeries marked, thus maintaining digital image integrity.
Drawings: Fig. 1 / Fig. 2 / Fig. 3 / Fig. 4


Patent Information

Application #
Filing Date
26 April 2024
Publication Number
23/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

MARWADI UNIVERSITY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
MS. PARITA MER
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
MS. RESHMA SUNIL
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
DR. ANJALI DIWAN
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Inventors

1. MS. PARITA MER
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
2. MS. RESHMA SUNIL
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
3. DR. ANJALI DIWAN
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Specification

Description:

Field of the Invention

The present disclosure relates to digital image forensics, particularly a system for detecting copy-move forgeries in low-quality images using advanced deep learning techniques.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
In the domain of digital image forensics, the detection of copy-move forgery constitutes a significant challenge, particularly in images of substandard quality. Copy-move forgery involves the process where segments of an image are replicated and concealed within the same image, distorting the authenticity of the content. Said form of tampering is frequently employed to falsify information or to create misleading representations in digital media. The intricacies of said issue are exacerbated when the resolution and clarity of the images are compromised, as is often the case with low-quality images that are prone to high noise levels and reduced contrast.
Traditional methods for the detection of such forgeries often rely on precise feature matching and pattern recognition techniques. However, said methods encounter substantial obstacles when applied to low-quality images. The efficacy of said conventional approaches is considerably hindered under conditions where noise and compression artifacts are prevalent, which is indicative of the limitations inherent in said methods when addressing the nuances of poor-quality images.
Further, said conventional methods commonly employ exact thresholding techniques to determine the presence of forgery within an image. Said thresholding, while effective under optimal conditions, fails to account for the variability and the degraded nature of low-quality images. The static nature of said thresholds often results in a significant number of false positives or negatives, thereby undermining the reliability of the forgery detection process.
The shortcomings of prior art are particularly noticeable in the absence of adaptive mechanisms to cater to the dynamic nature of image quality and content. Said methods lack the ability necessary to dynamically adjust to the inconsistencies present within low-quality images. Moreover, the issue of scalability poses another challenge, as the volume of digital content proliferates, the requirement for automated and robust forgery detection systems becomes more pronounced. Existing methods, with the reliance on manual parameter tuning and static threshold values, do not scale well with the increasing volume and variety of digital images.
Additionally, methods based on simple feature extraction are often unable to detect forgeries where the copied segment has been skilfully blended into the target location. Said inadequacy arises from the inability of such methods to discern subtle manipulations that can be camouflaged by the low quality of the image per se. Therefore, there is a palpable need for an approach that incorporates more advanced techniques, such as deep learning, which can detect anomalies with a higher degree of reliability.
Prior art lacked reliable detection of copy-move forgeries, particularly in the challenging context of low-quality images. As the impact of digital imagery continues to expand across various sectors, the importance of establishing veracity and combating misinformation through enhanced forgery detection mechanisms becomes paramount. Prior art likewise lacked the technological progression needed to make a pivotal contribution to the field of digital image forensics and to maintain the integrity of digital information. Thus, there exists a persistent necessity for a system that employs dynamic thresholding and advanced feature extraction methods. In essence, the prior art within the field of digital image forensics demonstrates a clear requirement for improvement.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
The disclosure pertains to a system for detecting copy-move forgery in low-quality images. Said system comprises a data collection module configured to receive an input image suspected of forgery. A data preprocessing module is included to convert the input image to grayscale, apply a Non-Local Means Denoising filter to remove image noise, and enhance the input image using Contrast Limited Adaptive Histogram Equalization (CLAHE).
Furthermore, a deep feature extraction module comprising a pre-trained Convolutional Neural Network (CNN) is provided for extracting a set of deep features from the pre-processed image. Moreover, a feature matching module utilizes a Siamese Network architecture to compare features across regions of the image for identification of distinctions and similarities indicative of copy-move forgery.
An adaptive multi-scale segmentation module is configured to segment the input image into super pixels at multiple scales to facilitate analysis of macro and microstructures within the image. A dynamic thresholding and match filtering module adjusts thresholds for match acceptance based on confidence scores and geometric consistency of matched regions. Furthermore, a forgery localization with cluster analysis module is included for grouping refined matches using spectral clustering to localize forgeries within the image. Moreover, a validation and enhancement module comprises a CNN classifier to validate suspected forgery areas and edge-detection algorithms to delineate the exact boundaries of such areas. An output module is configured to display the input image with the highlighted suspected areas of the copy-move forgery.
The data preprocessing module is further configured to perform chromatic aberration correction on the input image to correct colour distortions before converting the input image to grayscale. The deep feature extraction module utilizes a CNN selected from the group consisting of VGG16, ResNet, and Inception-v3, each optimized for capturing distinct image features relevant to forgery detection.
The feature matching module further comprises a similarity assessment protocol configured for the identification of the similarities between features using a threshold derived from the training phase of the Siamese Network. The adaptive multi-scale segmentation module employs Simple Linear Iterative Clustering (SLIC) to generate super pixels, with the scale of segmentation adjustable based on the resolution and quality of the input image.
The dynamic thresholding and match filtering module includes a learning-based component to update the threshold criteria using feedback from past forgery detection results. The forgery localization with cluster analysis module utilizes a machine learning clustering algorithm selected from the group consisting of k-means, hierarchical clustering, and Gaussian mixture models to refine the grouping of matched features.
The validation and enhancement module further comprises a texture analysis unit to analyse the texture consistency within the suspected forgery areas to support the validation process. The output module is further configured to generate a report summarizing the characteristics of the detected forgery, including the location, size, and anomaly score of the suspected areas within the image.
The present disclosure provides a method for detecting copy-move forgery in low-quality images. Said method includes the step of receiving an input image suspected of forgery in a data collection module. Further steps involve converting the input image to grayscale in a data preprocessing module, applying a Non-Local Means Denoising filter in the data preprocessing module to remove image noise from the input image, and enhancing the input image using Contrast Limited Adaptive Histogram Equalization (CLAHE) in the data preprocessing module. Moreover, a set of deep features from the pre-processed image is extracted using a pre-trained Convolutional Neural Network (CNN) in a deep feature extraction module.
Furthermore, a feature matching module employs a Siamese Network architecture to compare features across regions of the image for identifying distinctions and similarities indicative of copy-move forgery. Moreover, the method includes segmenting the image into super pixels at multiple scales in an adaptive multi-scale segmentation module to facilitate analysis of macro and microstructures within the image. A dynamic thresholding and match filtering module adjusts thresholds for match acceptance based on confidence scores and geometric consistency of matched regions. Furthermore, refined matches are grouped using spectral clustering to localize forgeries within the image in a forgery localization with cluster analysis module.
Moreover, suspected forgery areas are validated using a CNN classifier in a validation and enhancement module. The exact boundaries of said areas are delineated using edge-detection algorithms in the validation and enhancement module. The method concludes with the display of the image with highlighted suspected areas of copy-move forgery in an output module. Said approach enables accurate identification and localization of forgery in images, leveraging advanced image processing and machine learning techniques to enhance the reliability of forgery detection.

Brief Description of the Drawings

The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a system for detecting copy-move forgery in low-quality images, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a method for detecting copy-move forgery in low-quality images, in accordance with the embodiments of the present disclosure.
FIG. 3 illustrates a working decision flow diagram (DFD) for a method of detecting copy-move forgery in images, in accordance with the embodiments of the present disclosure.
FIG. 4 illustrates a workflow for a method of detecting copy-move forgery in images.

Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
The present disclosure pertains to a system 100 for detecting copy-move forgery in low-quality images, comprising various modules arranged for executing specific tasks within the process of forgery detection. FIG. 1 provides a pictorial illustration showcasing an architectural paradigm of the system 100, which can comprise functional elements including, yet not limited to, a data collection module 102, a data preprocessing module 104, a deep feature extraction module 106, a feature matching module 108, an adaptive multi-scale segmentation module 110, a dynamic thresholding and match filtering module 112, a forgery localization with cluster analysis module 114, a validation and enhancement module 116, and an output module 118. A person ordinarily skilled in the art would understand those elements or components of the system 100 to be functionally or operationally coupled to/with each other, in accordance with the embodiments of the present disclosure.
In an embodiment, the data collection module 102 is configured to receive an input image suspected of forgery. The significance of said data collection module 102 lies in the ability to initiate the forgery detection process by acquiring images for analysis. Said initial step is important for the subsequent analysis and detection of forgery within the image.
In an embodiment, following the collection of the input image, the data preprocessing module 104 undertakes the task of converting said input image to grayscale, applying a Non-Local Means Denoising filter to remove image noise, and enhancing said input image using Contrast Limited Adaptive Histogram Equalization (CLAHE). Said preprocessing steps are fundamental in preparing the image for deeper analysis by improving image quality and enhancing features critical for the accurate forgery detection.
In an embodiment, the deep feature extraction module 106, comprising a pre-trained Convolutional Neural Network (CNN), is responsible for extracting a set of deep features from the pre-processed image. The extracted deep features are instrumental in identifying unique characteristics of the image that may indicate the presence of forgery. The utilization of a pre-trained CNN enables the extraction of robust features that are essential for accurate forgery detection.
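The disclosure specifies a pre-trained CNN (e.g., VGG16, ResNet, or Inception-v3) for this step. Purely as a dependency-free stand-in, the sketch below extracts simple hand-crafted patch statistics so that the downstream matching stages can be illustrated; the patch size and the four chosen statistics are assumptions, not the deep features the disclosure contemplates.

```python
import numpy as np

def patch_features(gray: np.ndarray, patch: int = 8):
    """Return a (num_patches, 4) descriptor array and patch coordinates.
    The four statistics (mean, std, mean |grad_x|, mean |grad_y|) stand in
    for the deep features a pre-trained CNN would produce."""
    feats, coords = [], []
    for y in range(0, gray.shape[0] - patch + 1, patch):
        for x in range(0, gray.shape[1] - patch + 1, patch):
            p = gray[y:y + patch, x:x + patch].astype(float)
            gy, gx = np.gradient(p)
            feats.append([p.mean(), p.std(),
                          np.abs(gx).mean(), np.abs(gy).mean()])
            coords.append((y, x))
    return np.array(feats), coords

gray = np.tile(np.arange(64, dtype=float), (64, 1))
feats, coords = patch_features(gray)
```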
In an embodiment, the feature matching module 108 utilizes a Siamese Network architecture to compare features across regions of the image for identification of distinctions and similarities indicative of copy-move forgery. Said comparison is vital for the detection process, by allowing for the identification of areas within the image that have been tampered with.
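The learned Siamese comparison is not reproduced here; as an illustrative stand-in, the sketch below compares descriptor pairs by cosine similarity, which mirrors the distance a trained Siamese network computes in embedding space, and enforces a minimum spatial offset so that a patch cannot trivially match itself. The similarity threshold and offset are assumptions.

```python
import numpy as np

def match_features(feats, coords, sim_thresh=0.995, min_offset=16):
    """Flag descriptor pairs that are near-identical (cosine similarity)
    and spatially separated, as copy-move duplicates would be."""
    # L2-normalise so the dot product is cosine similarity.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sims = f @ f.T
    matches = []
    for i in range(len(f)):
        for j in range(i + 1, len(f)):
            dy = coords[i][0] - coords[j][0]
            dx = coords[i][1] - coords[j][1]
            if sims[i, j] >= sim_thresh and dy * dy + dx * dx >= min_offset ** 2:
                matches.append((i, j, float(sims[i, j])))
    return matches

feats = np.array([[1.0, 0.0, 2.0, 3.0],   # descriptor 0
                  [0.0, 1.0, 0.0, 0.0],   # unrelated descriptor
                  [1.0, 0.0, 2.0, 3.0]])  # duplicate of descriptor 0
coords = [(0, 0), (0, 32), (32, 32)]
m = match_features(feats, coords)
```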
In an embodiment, the adaptive multi-scale segmentation module 110 is configured to segment the input image into super pixels at multiple scales. Said segmentation facilitates the analysis of macro and microstructures within the image, enabling a more detailed examination of the image for the forgery.
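A single-assignment sketch of SLIC-style superpixels is given below: grid-initialised centres and a joint spatial-intensity distance, as in SLIC. A full SLIC implementation iterates this assignment and updates the centres; the `n_side` and `compactness` values are assumptions.

```python
import numpy as np

def slic_like(gray, n_side=4, compactness=0.1):
    """One-pass SLIC-style assignment: grid-initialised centres, each pixel
    labelled by its nearest centre in joint (space, intensity) distance."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy = np.linspace(h / (2 * n_side), h - h / (2 * n_side), n_side)
    cx = np.linspace(w / (2 * n_side), w - w / (2 * n_side), n_side)
    centres = [(y, x, gray[int(y), int(x)]) for y in cy for x in cx]
    best = np.full((h, w), -1)
    bestd = np.full((h, w), np.inf)
    for k, (y0, x0, g0) in enumerate(centres):
        d = (ys - y0) ** 2 + (xs - x0) ** 2 \
            + compactness * (gray.astype(float) - g0) ** 2
        mask = d < bestd
        best[mask] = k
        bestd[mask] = d[mask]
    return best

# Multi-scale analysis: segment the same image coarsely and finely.
coarse = slic_like(np.zeros((32, 32)), n_side=2)
fine = slic_like(np.zeros((32, 32)), n_side=4)
```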
In an embodiment, the dynamic thresholding and match filtering module 112 adjusts thresholds for match acceptance based on confidence scores and geometric consistency of matched regions. Said adjustment is important for distinguishing between genuine and forged regions within the image by ensuring that only matches meeting certain criteria are considered.
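One plausible reading of said module, sketched below, derives the acceptance cut-off from the statistics of the confidence scores themselves and checks geometric consistency against the dominant displacement vector; both rules are illustrative assumptions rather than the disclosure's exact criteria.

```python
import numpy as np

def dynamic_threshold(scores, k=1.5):
    """Accept matches whose confidence exceeds a data-driven cut-off
    (mean + k*std of the scores) rather than a fixed constant."""
    scores = np.asarray(scores, dtype=float)
    t = scores.mean() + k * scores.std()
    return scores >= t, t

def geometrically_consistent(offsets, tol=4.0):
    """Keep matches whose displacement agrees with the dominant shift:
    a copy-move pastes many patches with the same offset vector."""
    offsets = np.asarray(offsets, dtype=float)
    dominant = np.median(offsets, axis=0)
    return np.linalg.norm(offsets - dominant, axis=1) <= tol

accepted, t = dynamic_threshold([0.2, 0.3, 0.25, 0.95])
keep = geometrically_consistent([(32, 32), (32, 31), (33, 32), (2, -5)])
```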
In an embodiment, the forgery localization with cluster analysis module 114 groups refined matches using spectral clustering to localize forgeries within the image. Said localization is key to pinpointing the exact areas of the image that have been subject to forgery.
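The two-cluster case of spectral clustering can be sketched directly with an eigendecomposition of the normalised graph Laplacian, splitting on the sign of the Fiedler vector; the RBF affinity and the `sigma` value are assumptions, and a practical system would choose the number of clusters from the data.

```python
import numpy as np

def spectral_bipartition(points, sigma=20.0):
    """Two-way spectral clustering: RBF affinity over match coordinates,
    normalised graph Laplacian, split on the sign of the Fiedler vector."""
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(w.sum(1) + 1e-12)
    lap = np.eye(len(pts)) - d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)
    fiedler = vecs[:, 1]  # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two spatially separated groups of matched patches.
pts = [(0, 0), (1, 1), (0, 1), (40, 40), (41, 41), (40, 41)]
labels = spectral_bipartition(pts)
```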
In an embodiment, the validation and enhancement module 116 comprises a CNN classifier to validate suspected forgery areas and edge-detection algorithms to delineate the exact boundaries of said areas. Said validation and enhancement module 116 is crucial for confirming the presence of forgery and for accurately defining the boundaries of forged areas for clear visualization.
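As a minimal stand-in for the edge-detection step of said module 116, the sketch below delineates the boundary of a validated binary forgery mask as those mask pixels having at least one 4-neighbour outside the mask; a production system might instead apply Canny or Sobel operators as the disclosure's "edge-detection algorithms".

```python
import numpy as np

def boundary(mask: np.ndarray) -> np.ndarray:
    """Boundary of a binary forgery mask: pixels inside the mask that have
    at least one 4-neighbour outside it."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # A pixel is interior when all four 4-neighbours are inside the mask.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True          # a 4x4 validated forgery region
ring = boundary(mask)
```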
In an embodiment, the output module 118 is configured to display the input image with the highlighted suspected areas of copy-move forgery. The output provided by said module enables users to visually identify the areas of the image that have been manipulated, facilitating further analysis or corrective action.
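The highlighting performed by said output module 118 may be sketched as drawing a one-pixel rectangle around each suspected region; the `(y0, x0, y1, x1)` box format and the highlight value are assumptions.

```python
import numpy as np

def highlight(image: np.ndarray, box, value=255):
    """Draw a one-pixel rectangle around a suspected region given as
    (y0, x0, y1, x1), leaving the input image untouched."""
    out = image.copy()
    y0, x0, y1, x1 = box
    out[y0, x0:x1 + 1] = value
    out[y1, x0:x1 + 1] = value
    out[y0:y1 + 1, x0] = value
    out[y0:y1 + 1, x1] = value
    return out

img = np.zeros((16, 16), dtype=np.uint8)
marked = highlight(img, (2, 2, 6, 6))
```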
Each module within the system 100 plays a key role in the process of detecting copy-move forgery in low-quality images. The coordinated functioning of said modules provides a cohesive approach to forgery detection, from initial image collection to the final visualization of forged areas. The architecture of the system 100 reflects a thorough understanding of the challenges associated with detecting copy-move forgery, especially in low-quality images, and provides a robust solution to said complex problem.
In an embodiment, the data preprocessing module 104 of the system 100 for detecting copy-move forgery in low-quality images is further configured to perform chromatic aberration correction on said input image to correct colour distortions before converting said input image to grayscale. The incorporation of chromatic aberration correction enhances the accuracy of forgery detection by ensuring that colour distortions, which may obscure or alter features of the image, are corrected prior to further processing. Said correction is pivotal for maintaining the integrity of the image features during the grayscale conversion, thereby improving the reliability of subsequent forgery detection processes.
In another embodiment, the deep feature extraction module 106 of the system 100 utilizes said Convolutional Neural Network (CNN) selected from the group consisting of VGG16, ResNet, and Inception-v3, each being optimized for capturing distinct image features relevant to forgery detection. The selection among VGG16, ResNet, and Inception-v3 allows for the customization of the feature extraction process based on the specific characteristics of the input image, thereby enhancing the ability of the system 100 to detect forgeries by leveraging the strengths of each CNN architecture. Said optimization can facilitate an analysis of the image, allowing for the identification of subtle forgeries that might be overlooked using a one-size-fits-all approach.
In a further embodiment, the feature matching module 108 of the system 100 comprises a similarity assessment protocol configured for the identification of similarities between features using a threshold derived from the training phase of said Siamese Network. Said approach enables a more nuanced comparison of image features, thereby improving the accuracy of forgery detection. The use of a trained threshold allows for the adjustment of sensitivity based on prior learning, which in turn minimizes false positives and negatives, crucial for the reliability of the system 100 in practical applications.
In yet another embodiment, the adaptive multi-scale segmentation module 110 of the system 100 employs Simple Linear Iterative Clustering (SLIC) to generate super pixels, with the scale of segmentation being adjustable based on the resolution and quality of said input image. Said adaptability facilitates a more precise analysis of the image by allowing for segmentation that is tailored to the specific quality and resolution of the input image, thereby enhancing the ability of the system 100 to detect forgeries across a wide range of image qualities and resolutions.
In a further embodiment, the dynamic thresholding and match filtering module 112 of the system 100 includes a learning-based component to update the threshold criteria using feedback from past forgery detection results. Said learning-based approach allows the system 100 to evolve and improve over time, enhancing the effectiveness in detecting forgeries by refining the criteria used for match acceptance. Said continuous learning process ensures that the system 100 remains effective even as forgery techniques evolve.
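The disclosure does not fix a particular update rule; one hedged illustration is the feedback-driven nudge below, which raises the threshold after a confirmed false positive and lowers it after a confirmed miss. The learning rate and the outcome labels are assumptions.

```python
def update_threshold(threshold, feedback, lr=0.1):
    """Nudge the match-acceptance threshold using confirmed outcomes:
    up after a false positive (be stricter), down after a missed forgery
    (be more permissive)."""
    for outcome in feedback:
        if outcome == 'false_positive':
            threshold += lr * (1.0 - threshold)
        elif outcome == 'missed_forgery':
            threshold -= lr * threshold
    return threshold

t = update_threshold(0.5, ['false_positive'])
```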
In another embodiment, the forgery localization with cluster analysis module 114 of the system 100 utilizes a machine learning clustering algorithm selected from the group consisting of k-means, hierarchical clustering, and Gaussian mixture models to refine the grouping of matched features. The flexibility in selecting among said clustering algorithms allows the system 100 to employ the most effective technique for grouping features based on the specific characteristics of the input image, thereby improving the precision of forgery localization.
In a further embodiment, the validation and enhancement module 116 of the system 100 comprises a texture analysis unit to analyse the texture consistency within the suspected forgery areas to support the validation process. Said analysis of texture consistency is crucial for validating the presence of forgery by identifying discrepancies in texture that are indicative of manipulation, thereby enhancing the accuracy of the validation process.
In yet another embodiment, the output module 118 of the system 100 is further configured to generate a report summarizing the characteristics of the detected forgery, including the location, size, and anomaly score of the suspected areas within said image. Said report provides an overview of the detected forgery, facilitating a deeper understanding of the forgery extent and nature, which is essential for further investigation or corrective action.
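The report generated by said output module 118 may be sketched as a plain dictionary; all field names and the region tuple layout are illustrative assumptions, the disclosure requiring only that location, size, and anomaly score be summarised.

```python
def forgery_report(regions):
    """Summarise detected regions: location, pixel size, and anomaly score.
    Each region is assumed to be (y, x, height, width, score)."""
    return {
        "num_regions": len(regions),
        "regions": [
            {"location": {"y": y, "x": x},
             "size_px": h * w,
             "anomaly_score": round(score, 3)}
            for (y, x, h, w, score) in regions
        ],
    }

report = forgery_report([(2, 3, 4, 5, 0.9123)])
```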
In an exemplary embodiment, the communication network interface can be arranged to functionally or operationally interlink the elements of the system 100, with each other. Non-limiting examples of communication network interface may include a short-range communication network interface and/or long-range communication network interface. The short-range communication network interface may include Wi-Fi, Bluetooth low energy (BLE), Zigbee, and the like. Similarly, the long-range communication network interface may include Local Area Network (LAN), Metropolitan Area Network (MAN), Wide Area Network (WAN), a cloud computing platform, a data centre, Internet of Things (IoT), light fidelity (LiFi) and the like.
Referring to one or more preceding embodiments, the embodiments of proposed disclosure, may work well with any or a combination of aforementioned networks. The communication network interface may incorporate any or a combination of wired or wireless communication mechanisms that can be performed through various computer networking protocols. The computer networking protocol may include Asynchronous Transfer Mode (ATM), Transmission Control Protocol/Internet Protocol (TCP/IP), Ethernet management, Simple Mail Transfer Protocol (SMTP), and security, such as Secure Shell (SSH), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP) and User Datagram Protocol (UDP). Moreover, any other suitable protocols using voice, video, data, or combinations thereof, can also be employed.
Disclosed herein a method 200 for detecting copy-move forgery in low-quality images. Referring to a diagrammatic depiction put forth in FIG. 2, representing a flow diagram of the method 200 that can comprise steps of, yet not restricted to, (at step 202) receiving an input image, (at step 204) converting said input image to grayscale, (at step 206) applying a Non-Local Means Denoising filter to remove image noise, (at step 208) enhancing said input image using Contrast Limited Adaptive Histogram Equalization (CLAHE), (at step 210) extracting a set of deep features from said pre-processed image, (at step 212) employing a Siamese Network architecture to compare features across regions of said image.
In an embodiment, said method 200 further comprises (at step 214) segmenting said image into super pixels at multiple scales, (at step 216) adjusting thresholds for match acceptance based on confidence scores and geometric consistency, (at step 218) grouping refined matches using spectral clustering, (at step 220) validating suspected forgery areas, (at step 222) delineating the exact boundaries of said areas using edge-detection algorithms, and (at step 224) displaying said image with highlighted suspected areas of copy-move forgery. Said steps of the method 200 can be performed or executed, collectively or selectively, randomly, or sequentially or in a combination thereof, in accordance with the embodiments of current disclosure.
In an embodiment, the method 200 commences with the step 202 of receiving an input image suspected of forgery in a data collection module 102. Said initial step 202 is pivotal in setting the foundation for the forgery detection process by securing the input necessary for subsequent analysis. By focusing on images suspected of forgery, the method 200 is tailored to address specific concerns regarding image authenticity, thereby serving as an important starting point for the detection workflow.
In an embodiment, at step 204, said input image is converted to grayscale in a data preprocessing module 104. Said conversion is crucial for simplifying the image analysis by reducing the complexity associated with color information. The transformation into grayscale facilitates a more focused analysis on the texture and intensity variations, which are essential for detecting subtle manipulations indicative of copy-move forgery.
In an embodiment, following the conversion to grayscale, at step 206, a Non-Local Means Denoising filter is applied in said data preprocessing module 104 to remove image noise from said input image. Said denoising step 206 is imperative for enhancing the clarity of the image by eliminating noise that could obscure important details or mimic features of forgery, thus improving the accuracy of the forgery detection process.
In an embodiment, at step 208, said input image undergoes enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE) in said data preprocessing module 104. Said enhancement technique is vital for improving the visibility of features within the image by adjusting the contrast. The CLAHE technique ensures that contrast enhancement is evenly distributed, preventing over-amplification of noise, which is particularly beneficial for low-quality images where details are crucial for forgery detection.
In an embodiment, at step 210, a set of deep features is extracted from said pre-processed image using a pre-trained Convolutional Neural Network (CNN) in a deep feature extraction module 106. Said extraction is key to identifying complex patterns and features within the image that are not discernible through traditional analysis techniques. Utilizing a pre-trained CNN leverages advanced machine learning techniques to pinpoint features indicative of forgery, thereby enhancing the detection capability of the method 200.
In an embodiment, employing a Siamese Network architecture at step 212 in a feature matching module 108 compares features across regions of said image to identify distinctions and similarities indicative of copy-move forgery. Said step 212 is integral to the method 200 in enabling a precise comparison of image regions to uncover duplications or alterations, leveraging the ability of Siamese Network to learn from similarities and differences to accurately flag the forgeries.
In an embodiment, at step 214, said image is segmented into super pixels at multiple scales in an adaptive multi-scale segmentation module 110 to facilitate analysis of macro and microstructures within said image. Said segmentation allows for a granular analysis of the image, enabling the detection system 100 to examine various layers of detail, from broad patterns to intricate structures, thereby improving the detection of the forgery attempts.
In an embodiment, adjusting thresholds for match acceptance based on confidence scores and geometric consistency of matched regions occurs at step 216 in a dynamic thresholding and match filtering module 112. Said adjustment is important for maintaining that only regions with a high likelihood of forgery are flagged, thereby reducing false positives and enhancing the reliability of the forgery detection process.
In an embodiment, at step 218, refined matches are grouped using spectral clustering to localize forgeries within said image in a forgery localization with cluster analysis module 114. Said localization is essential for pinpointing the specific areas of the image that have been manipulated, facilitating a targeted analysis of suspected forgery regions.
In an embodiment, validating suspected forgery areas using a CNN classifier occurs at step 220 in a validation and enhancement module 116. Said validation is crucial for confirming the presence of forgery with a high degree of accuracy, leveraging advanced classification techniques to distinguish between genuine and forged regions.
In an embodiment, at step 222, the exact boundaries of said areas are delineated using edge-detection algorithms in said validation and enhancement module 116. Said delineation is key to clearly defining the contours of forgery areas, which is essential for detailed analysis and documentation of the forgery.
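By way of a non-limiting illustration, the boundary delineation of step 222 may be sketched with a Sobel gradient over a binary forgery mask: gradient magnitude is high exactly along the contour of a flagged region and zero in its interior. The specification does not fix a particular edge operator, so Sobel is an illustrative choice; the explicit loop keeps the sketch dependency-free.

```python
import numpy as np

def sobel_edges(mask, threshold=0.5):
    """Delineate the boundary of a binary forgery mask using Sobel gradients.
    Returns a boolean array that is True on the contour of the region."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    m = mask.astype(float)
    h, w = m.shape
    gx = np.zeros_like(m)
    gy = np.zeros_like(m)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = m[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > threshold
```

The contour so obtained can then be overlaid on the input image by the output module 118.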
In an embodiment, at step 224, said image with highlighted suspected areas of copy-move forgery is displayed in an output module 118. Said display is instrumental in conveying the results of the forgery detection process, providing visual evidence of the analyzed and identified forgery areas, thereby completing the method 200 for detecting copy-move forgery in low-quality images.
FIG. 3 illustrates a working decision flow diagram (DFD) for a method of detecting copy-move forgery in images. Said DFD begins with an input image that undergoes preprocessing to prepare the image for analysis. Next, deep features are extracted from the pre-processed image, which are then analysed using a Siamese Network for feature matching to identify the forgeries. The image is then segmented at multiple scales to aid in the detailed examination of the structures.
Referring to one or more preceding embodiments, dynamic thresholding and match filtering follow, where thresholds are adjusted to assess whether each match is confident. If the match is confident, the method moves to forgery localization with cluster analysis. Said localization step groups the matched features to localize areas of forgery within the image. Subsequently, the suspected areas are validated and enhanced to confirm the forgery, and the exact boundaries are delineated. The final step displays the image with highlighted suspected areas of forgery. If at any point a match is not confident, the process does not proceed to forgery localization but instead concludes the method, indicating that a forgery was not confidently detected.
FIG. 4 showcases a workflow for a method of detecting copy-move forgery in images. An input image is taken as the starting point. Preprocessing is applied to the input image to prepare said image for analysis. Deep feature extraction involves analyzing the pre-processed image to identify complex characteristics. Feature matching with a Siamese network involves using a specialized neural network to compare image features for indications of forgery. Adaptive multi-scale segmentation segments the image into parts at various scales for detailed examination. Dynamic thresholding and match filtering applies variable criteria to the matched features to determine the likelihood of forgery. Forgery localization with cluster analysis identifies the specific areas of the image that may have been altered. Validation and enhancement assesses the detected areas to confirm said areas are indeed forgeries and refines the visualization. The process culminates with an output, which is the final image with the identified forgeries highlighted.
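By way of a non-limiting illustration, the overall workflow of FIG. 4 may be sketched as a chain of stage callables, each corresponding to one module of the system 100. The dictionary keys below are illustrative placeholders, not the patent's reference numerals; any concrete implementation of each stage can be supplied by the caller.

```python
def detect_copy_move(image, modules):
    """End-to-end flow mirroring FIG. 4: preprocess, extract deep features,
    match with the Siamese stage, segment at multiple scales, filter matches
    by dynamic thresholds, localize by clustering, validate, and render."""
    pre = modules["preprocess"](image)          # grayscale, denoise, CLAHE
    feats = modules["extract"](pre)             # CNN deep features
    matches = modules["match"](feats)           # Siamese feature matching
    segments = modules["segment"](pre)          # adaptive multi-scale superpixels
    refined = modules["filter"](matches, segments)  # dynamic thresholding
    regions = modules["localize"](refined)      # cluster analysis
    confirmed = modules["validate"](regions, pre)   # CNN classifier + edges
    return modules["render"](image, confirmed)  # highlighted output image
```

Structuring the pipeline this way keeps each module independently replaceable, consistent with the modular architecture recited in the claims.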
Said system 100 operates in the field of digital image forensics, focusing on detecting copy-move forgery in low-quality images. Said forgery, where parts of an image are copied and pasted to alter the scene, challenges authenticity verification, particularly with low-resolution images. The system 100 employs an advanced dynamic thresholding technique, improving precision and reliability over existing methods.
Referring to one or more preceding embodiments, the method 200 integrates deep learning, adaptive segmentation, and dynamic thresholding. The method 200 utilizes pre-trained Convolutional Neural Networks (CNNs) for feature extraction and a Siamese Network for feature matching, enabling the detection of subtle forgeries. Adaptive multi-scale segmentation and dynamic thresholding adjust the sensitivity to the specific characteristics of low-quality images, while cluster analysis and additional validation steps provide a robust approach to forgery detection.
Referring to one or more preceding embodiments, the system 100 is particularly relevant in the digital landscape where image editing tools are widely available, making robust forgery detection tools essential. This technique accounts for common challenges associated with image quality, offering a valuable resource for digital forensics professionals. By enhancing the detection of copy-move forgery, the invention supports the credibility of digital imagery, upholds truth, and combats misinformation.
Referring to one or more preceding embodiments, the system 100 addresses the technical problem of detecting copy-move forgery in low-quality images, where traditional methods fall short due to issues like noise and low resolution. Said system 100 combines deep learning, adaptive segmentation, and advanced thresholding to identify manipulated areas with high accuracy. The system 100 effectively compares features, adjusts to the image quality, reduces false positives, and localizes forgeries with precision, making the system 100 a solution for maintaining the integrity of digital images in various applications.
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

Claims

I/We claim:

A system 100 for detecting copy-move forgery in low-quality images, the system 100 comprising:
a data collection module 102 configured to receive an input image suspected of forgery;
a data preprocessing module 104 configured to:
convert said input image to grayscale,
apply a Non-Local Means Denoising filter to remove image noise; and
enhance said input image using Contrast Limited Adaptive Histogram Equalization (CLAHE);
a deep feature extraction module 106 comprising a pre-trained Convolutional Neural Network (CNN) for extracting a set of deep features from said pre-processed image;
a feature matching module 108 utilizing a Siamese Network architecture to compare features across regions of said image for identification of distinctions and similarities indicative of copy-move forgery;
an adaptive multi-scale segmentation module 110 configured to segment said input image into super pixels at multiple scales to facilitate analysis of macro and microstructures within said image;
a dynamic thresholding and match filtering module 112 configured to adjust thresholds for match acceptance based on confidence scores and geometric consistency of matched regions;
a forgery localization with cluster analysis module 114 for grouping refined matches using spectral clustering to localize forgeries within the image;
a validation and enhancement module 116 comprising:
a CNN classifier to validate suspected forgery areas; and
edge-detection algorithms to delineate the exact boundaries of said areas; and
an output module 118 configured to display the input image with the highlighted suspected areas of said copy-move forgery.
The system 100 of claim 1, wherein the data preprocessing module 104 is further configured to perform chromatic aberration correction on said input image to correct colour distortions before converting said input image to grayscale.
The system 100 of claim 1, wherein the deep feature extraction module 106 utilizes said CNN selected from the group consisting of VGG16, ResNet, and Inception-v3, wherein each CNN is optimized for capturing distinct image features relevant to forgery detection.
The system 100 of claim 1, wherein the feature matching module 108 further comprises a similarity assessment protocol configured for said identification of the similarities between features using a threshold derived from the training phase of said Siamese Network.
The system 100 of claim 1, wherein the adaptive multi-scale segmentation module 110 employs Simple Linear Iterative Clustering (SLIC) to generate super pixels, wherein the scale of segmentation is adjustable based on the resolution and quality of said input image.
The system 100 of claim 1, wherein the dynamic thresholding and match filtering module 112 further includes a learning-based component to update the threshold criteria using feedback from past forgery detection results.
The system 100 of claim 1, wherein the forgery localization with cluster analysis module 114 utilizes a machine learning clustering algorithm selected from the group consisting of k-means, hierarchical clustering, and Gaussian mixture models to refine the grouping of matched features.
The system 100 of claim 1, wherein the validation and enhancement module 116 further comprises a texture analysis unit to analyse the texture consistency within the suspected forgery areas to support the validation process.
The system 100 of claim 1, wherein the output module 118 is further configured to generate a report summarizing the characteristics of the detected forgery, including the location, size, and anomaly score of the suspected areas within said image.
A method 200 for detecting copy-move forgery in low-quality images, the method 200 comprising the steps of:
(at step 202) receiving an input image suspected of forgery in a data collection module 102;
(at step 204) converting said input image to grayscale in a data preprocessing module 104;
(at step 206) applying a Non-Local Means Denoising filter in said data preprocessing module 104 to remove image noise from said input image;
(at step 208) enhancing said input image using Contrast Limited Adaptive Histogram Equalization (CLAHE) in said data preprocessing module 104;
(at step 210) extracting a set of deep features from said pre-processed image using a pre-trained Convolutional Neural Network (CNN) in a deep feature extraction module 106;
(at step 212) employing a Siamese Network architecture in a feature matching module 108 to compare features across regions of said image for identifying distinctions and similarities indicative of copy-move forgery;
(at step 214) segmenting said image into super pixels at multiple scales in an adaptive multi-scale segmentation module 110 to facilitate analysis of macro and microstructures within said image;
(at step 216) adjusting thresholds for match acceptance based on confidence scores and geometric consistency of matched regions in a dynamic thresholding and match filtering module 112;
(at step 218) grouping refined matches using spectral clustering to localize forgeries within said image in a forgery localization with cluster analysis module 114;
(at step 220) validating suspected forgery areas using a CNN classifier in a validation and enhancement module 116;
(at step 222) delineating the exact boundaries of said areas using edge-detection algorithms in said validation and enhancement module 116; and
(at step 224) displaying said image with highlighted suspected areas of copy-move forgery in an output module 118.

DYNAMIC THRESHOLDING TECHNIQUE FOR COPY-MOVE FORGERY DETECTION IN LOW-QUALITY IMAGES

Disclosed is a system for detecting copy-move forgeries in low-quality images, comprising modules for data collection, preprocessing, feature extraction, and feature matching. Utilizing a combination of grayscale conversion, denoising, and contrast enhancement, the system prepares images for deep learning analysis. A Convolutional Neural Network extracts features, which are then compared using a Siamese Network to identify forgeries. Multi-scale segmentation, dynamic thresholding, and cluster analysis localize the forgery, while validation and boundary delineation confirm and define the altered areas. The system outputs the final image, marking the detected forgeries, thus maintaining digital image integrity.

Drawings
Fig. 1
Fig. 2
Fig. 3
Fig. 4


Documents

Application Documents

# Name Date
1 202421033109-OTHERS [26-04-2024(online)].pdf 2024-04-26
2 202421033109-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf 2024-04-26
3 202421033109-FORM 1 [26-04-2024(online)].pdf 2024-04-26
4 202421033109-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf 2024-04-26
5 202421033109-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf 2024-04-26
6 202421033109-DRAWINGS [26-04-2024(online)].pdf 2024-04-26
7 202421033109-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf 2024-04-26
8 202421033109-COMPLETE SPECIFICATION [26-04-2024(online)].pdf 2024-04-26
9 202421033109-FORM-9 [07-05-2024(online)].pdf 2024-05-07
10 202421033109-FORM 18 [08-05-2024(online)].pdf 2024-05-08
11 202421033109-FORM-26 [12-05-2024(online)].pdf 2024-05-12
12 202421033109-FORM 3 [13-06-2024(online)].pdf 2024-06-13
13 202421033109-RELEVANT DOCUMENTS [01-10-2024(online)].pdf 2024-10-01
14 202421033109-POA [01-10-2024(online)].pdf 2024-10-01
15 202421033109-FORM 13 [01-10-2024(online)].pdf 2024-10-01
16 202421033109-FER.pdf 2025-07-24
17 202421033109-FORM-8 [29-10-2025(online)].pdf 2025-10-29
18 202421033109-FORM-26 [29-10-2025(online)].pdf 2025-10-29
19 202421033109-FER_SER_REPLY [29-10-2025(online)].pdf 2025-10-29
20 202421033109-DRAWING [29-10-2025(online)].pdf 2025-10-29
21 202421033109-CORRESPONDENCE [29-10-2025(online)].pdf 2025-10-29
22 202421033109-COMPLETE SPECIFICATION [29-10-2025(online)].pdf 2025-10-29
23 202421033109-CLAIMS [29-10-2025(online)].pdf 2025-10-29
24 202421033109-ABSTRACT [29-10-2025(online)].pdf 2025-10-29

Search Strategy

1 202421033109_SearchStrategyNew_E_searchE_21-05-2025.pdf