Real Time Detection Of Manipulated Digital Images Using Ai

Abstract: A method and system are disclosed for detecting manipulations in digital images through real-time analysis. The method involves receiving an upload of a digital image via a user interface, transmitting the image to a server, and analyzing the image using an AI system trained on a comprehensive dataset of authentic and manipulated images. The determination of the image as either a deepfake or authentic is made by a machine learning module, with the result communicated back to the user. The system includes a web-based platform for image upload, a machine learning module on a server, a database of images for AI training, and an interface for result display. This enables accurate and efficient identification of deepfake images, enhancing digital content authenticity verification.

Patent Information

Application #
Filing Date
26 April 2024
Publication Number
23/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

MARWADI UNIVERSITY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
HARSHIT KASHYAP
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
ANKUR MANI
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
KHANJAN DAMANI
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
KUSHAGRA PANDYA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
NEEL DHOLAKIA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
HENSI PATEL
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
KULDEEP DAVE
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
PROF. AKSHAY RANPARIYA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
DR. MADHU SHUKLA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
PROF. VIPUL LADVA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Inventors

1. HARSHIT KASHYAP
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
2. ANKUR MANI
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
3. KHANJAN DAMANI
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
4. KUSHAGRA PANDYA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
5. NEEL DHOLAKIA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
6. HENSI PATEL
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
7. KULDEEP DAVE
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
8. PROF. AKSHAY RANPARIYA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
9. DR. MADHU SHUKLA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
10. PROF. VIPUL LADVA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Specification

Description:

REAL-TIME DETECTION OF MANIPULATED DIGITAL IMAGES USING AI

Field of the Invention

The present disclosure pertains to the field of digital image analysis, specifically to a method and system for detecting manipulations in digital images.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
The digital era has witnessed a substantial increase in the volume of digital images being created, shared, and manipulated across various platforms. With advancements in technology, the manipulation of digital images has become more sophisticated, often making it challenging to distinguish between authentic and manipulated images. The phenomenon of "deepfake" images, where artificial intelligence and machine learning techniques are used to alter digital images in a highly realistic manner, poses significant challenges in various fields including security, media, and legal evidence.
Traditionally, methods to detect manipulated images have relied on analyzing inconsistencies in lighting, shadows, or image metadata. These methods, while useful in certain scenarios, often fall short in detecting more sophisticated manipulations, such as those created by deepfake technology. Deepfake technology leverages advanced machine learning and artificial neural networks to create or alter images in a way that is nearly indistinguishable from genuine images.
The development and application of artificial intelligence (AI) systems, particularly those involving machine learning algorithms, have shown promise in addressing the challenges posed by deepfake images. Such systems are trained on extensive datasets comprising both authentic and manipulated images, enabling them to learn and identify the subtle characteristics that differentiate genuine images from manipulated ones. The analysis conducted by these AI systems focuses on identifying inconsistencies and patterns that are not easily visible to the human eye.
Despite the advancements in AI-based detection systems, several limitations remain. The effectiveness of these systems largely depends on the diversity and size of the dataset on which they are trained. A dataset with a limited variety of images may not provide the AI system with sufficient examples of manipulations, potentially reducing its accuracy in identifying deepfake images. Additionally, the real-time detection of deepfake images presents a significant challenge, as it requires not only accurate analysis but also rapid processing to provide immediate feedback to the user.
Furthermore, the user interface through which digital images are uploaded and analyzed plays a crucial role in the overall effectiveness of the detection process. An interface that is not user-friendly or lacks efficient communication mechanisms may deter users from utilizing the detection system, thereby limiting its applicability and impact.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and/or techniques for detecting manipulated digital images. These solutions should offer improved accuracy, real-time detection capabilities, and user-friendly interfaces to enhance the verification of digital content authenticity.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
In an aspect, the present disclosure aims to provide a method for detecting digital image manipulations by employing a user interface for the upload of digital images, which are then transmitted to a server for analysis. The analysis utilizes an AI system trained on an extensive dataset of authentic and manipulated images to determine whether the uploaded image is a deepfake or authentic. This determination is made by a machine learning module and communicated to the user through the user interface. Further enhancements include the use of various machine learning techniques such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) for analysis.
In another aspect, the disclosure provides a system for real-time detection of deepfake images comprising a web-based platform with a user interface for uploading digital images, a machine learning module hosted on a server for image analysis, a database containing a comprehensive dataset of images for training the AI, and an output interface for displaying analysis results. Enhancements to the system include rapid feedback on image authenticity, the use of convolutional neural networks for image analysis, a summary report of analysis results including probability scores, a preprocessing module for image quality enhancement, options for users to report suspected deepfakes, and comparative visuals between uploaded and authentic images to aid in detection.

Brief Description of the Drawings

The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a method (100) for the detection of manipulations within digital images, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a block diagram of a system (200) for detection of deepfake images, in accordance with the embodiments of the present disclosure.
FIG. 3 presents a flowchart that outlines the operational process of a system to analyze the authenticity of a photo.

Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a method (100) for the detection of manipulations within digital images, in accordance with the embodiments of the present disclosure. The method (100) is characterized by several critical steps aimed at accurately distinguishing between authentic and altered images. In step (102), the method introduces a user interface specifically designed to facilitate the upload of digital images by users, ensuring an accessible and straightforward entry point for the evaluation process. Following the image upload, the next step (104) involves the transmission of the digital image to a server where it is poised for further analysis. The method innovation lies in the subsequent analysis phase step (106), where the uploaded image is examined by an artificial intelligence (AI) system. This AI system is not generic but is finely tuned and trained on an extensive and diverse dataset, which includes a wide range of authentic images as well as deepfake images, enabling it to recognize and differentiate with high precision. The determination of the image's authenticity or manipulation is conducted by a specialized machine learning module (204) in step (108), which employs the insights and characteristics learned from the aforementioned extensive dataset. This critical evaluation phase is designed to assess whether the uploaded image is a product of digital manipulation (a deepfake) or if it remains untampered and authentic. In step (110), the culmination of the method (100) is achieved through the communication of the determination results back to the user. This communication is facilitated through the same user interface used at the method's onset, providing a seamless and integrated user experience from upload to result reception. This method's design underscores a robust and user-oriented approach to the pressing challenge of identifying and verifying the authenticity of digital images in the modern digital landscape.
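The sequence of steps (102) through (110) can be sketched as plain functions. This is a minimal, hypothetical illustration only: the function names, the decision threshold, and the stand-in scoring rule are assumptions for exposition and do not appear in the filing, and the trivial heuristic merely occupies the place of the trained AI system.

```python
# Hypothetical sketch of method (100): steps 102-110 as plain functions.
# All names and the placeholder scoring rule are illustrative, not from the filing.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # "deepfake" or "authentic"
    score: float       # model output in [0, 1]

def receive_upload(image_bytes: bytes) -> bytes:
    """Steps 102/104: accept the upload via the interface and pass it server-side."""
    return image_bytes

def analyze(image_bytes: bytes) -> float:
    """Step 106: stand-in for the trained AI system; returns a manipulation score.
    A trivial byte heuristic replaces the real model here."""
    if not image_bytes:
        return 0.5
    return (sum(image_bytes[:16]) % 100) / 100

def determine(score: float, threshold: float = 0.5) -> Verdict:
    """Step 108: the machine learning module's decision rule over the score."""
    label = "deepfake" if score >= threshold else "authentic"
    return Verdict(label, score)

def detect(image_bytes: bytes) -> Verdict:
    """Steps 102-110 end to end; returning the Verdict models step 110,
    communicating the determination back through the user interface."""
    return determine(analyze(receive_upload(image_bytes)))
```

In a real deployment the `analyze` stub would be replaced by inference against the trained model, and `detect` would sit behind the web-based platform's upload endpoint.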
In an embodiment, the method (100) for detecting digital image manipulations focuses on the operational intricacies of the machine learning module (204). This embodiment delineates the methodologies employed by the machine learning module to analyze and determine the authenticity of digital images, specifying that the module utilizes a selection of sophisticated machine learning techniques. These include Convolutional Neural Networks (CNNs), known for their efficacy in processing visual imagery by recognizing the patterns and features that bear on an image's authenticity; Recurrent Neural Networks (RNNs), which excel at analyzing data where context or sequential order significantly influences the outcome, and are therefore useful for identifying subtle manipulations that alter the natural sequence or structure within images; and Generative Adversarial Networks (GANs), which are pivotal for understanding and detecting deepfake images because GANs themselves are often employed to create such manipulations, making them a countermeasure uniquely suited to identifying the artifacts of their own kind.
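The CNN-style analysis named above can be sketched at its smallest scale: one convolution, a ReLU, global average pooling, and a sigmoid. The hand-picked high-pass (Laplacian) kernel is an assumption standing in for learned filters; it is chosen only because manipulations often leave high-frequency residue, and a trained module (204) would learn many such filters rather than use this one.

```python
# Minimal numpy sketch of the CNN-shaped analysis the module (204) may employ.
# The Laplacian kernel is an illustrative stand-in for trained filter weights.
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def manipulation_score(img: np.ndarray) -> float:
    """One conv layer + ReLU + global average pool + sigmoid: the shape of a
    tiny CNN detector. High-frequency content pushes the score above 0.5."""
    laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    feat = np.maximum(conv2d(img, laplacian), 0.0)   # ReLU activation
    pooled = feat.mean()                             # global average pooling
    return float(1.0 / (1.0 + np.exp(-pooled)))      # sigmoid -> [0, 1]
```

A perfectly smooth image scores exactly 0.5 (the filter response is zero), while a high-frequency checkerboard scores well above it, showing how convolutional features separate the two regimes.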
The term "system" as used throughout the present disclosure relates to an integrated assembly of components designed to detect manipulations in digital images in real time. This system encompasses a web-based platform equipped with a user interface for the uploading of digital images by users, a machine learning module hosted on a server and linked to the user interface for efficient image analysis, a database containing an extensive dataset of authentic and manipulated digital images for training the AI system, and an output interface for presenting the analysis results to the users. The seamless interaction between these components enables the system to offer a robust solution for distinguishing between authentic images and those altered by sophisticated deepfake technologies. By leveraging advanced machine learning techniques and a comprehensive dataset, the system enhances the accuracy and speed of detection processes, addressing the growing need for tools capable of identifying digital image manipulations in various applications, from security to media integrity.
The term "web-based platform" as used throughout the present disclosure relates to an online platform accessible through the internet, designed to facilitate user interactions and functions related to the upload and analysis of digital images. The web-based platform features a user interface, which is crafted to enable users to upload digital images seamlessly for the purpose of detecting manipulations, thereby serving as the entry point for the detection process.
The term "machine learning module" as used throughout the present disclosure pertains to a specialized set of algorithms and computational processes hosted on a server. This module is responsible for analyzing the uploaded digital images to determine their authenticity. The machine learning module is communicatively coupled to the user interface, ensuring that images uploaded through the web-based platform are promptly transmitted to the server for analysis. This coupling facilitates the real-time processing and analysis of digital images, critical for the detection of deepfake images.
The term "database" as used throughout the present disclosure refers to a structured collection of data that contains an extensive dataset of both authentic and manipulated digital images. This database serves as the foundational element for training the AI system, enabling the machine learning module to learn from a broad spectrum of examples. The inclusion of both authentic and manipulated images in the dataset is crucial for the AI system to accurately differentiate between genuine and altered content.
The term "output interface" as used throughout the present disclosure denotes a component of the system designed to display the results of the analysis performed by the AI system. Following the analysis of uploaded images by the machine learning module, the output interface presents the findings to the users. This interface plays a pivotal role in communicating the determination of whether an image is authentic or has been manipulated, thereby completing the detection process.
FIG. 2 illustrates a block diagram of a system (200) for detection of deepfake images, in accordance with the embodiments of the present disclosure. In said system, a web-based platform (202) serves as the primary interface for user interaction, where digital images are uploaded for analysis. The web-based platform (202) is designed to enable users to receive timely feedback regarding the authenticity of the images submitted, thereby facilitating an efficient user experience. Integrated within said system is a machine learning module (204) that utilizes convolutional neural networks for the analysis of the uploaded digital images. Said machine learning module (204) is hosted on a server and is in communicative coupling with the web-based platform (202), allowing for the seamless transmission and processing of image data. A database (206), also included in the system, contains a vast dataset of authentic and manipulated images. The dataset is utilized for the training of the AI system, enhancing the capability of said machine learning module (204) to accurately identify manipulated images. An output interface (208) completes the configuration of the system (200). The interface is tasked with providing users a summary report which includes probability scores indicating the likelihood of manipulation in the analyzed images. Furthermore, said output interface (208) is equipped to display comparative visuals, aiding users in recognizing discrepancies between uploaded images and similar authentic images from the database (206). Additionally, the system comprises a preprocessing module which is responsible for improving the quality of images prior to analysis by the machine learning module (204), ensuring the integrity of the detection process. The user interface includes an option for reporting suspected deepfake images, which supports the enforcement of digital authenticity.
In an embodiment, the system (200) for real-time detection of deepfake images introduces several advanced functionalities aimed at enhancing user interaction and improving the accuracy of deepfake detection. Specifically, the user interface of the system is designed to provide users with feedback on the authenticity of the uploaded images within a predetermined timeframe. This feature ensures a timely response to users' inquiries, significantly improving the system's usability and effectiveness in real-time applications.
In another embodiment, the machine learning module (204) incorporated within the system employs convolutional neural networks (CNNs) for the analysis of digital images. CNNs are renowned for their proficiency in handling visual data, making them particularly suitable for identifying subtle manipulations in images that may indicate the presence of deepfake content. This choice of technology underscores the system's commitment to leveraging cutting-edge AI techniques to enhance detection accuracy.
In yet another embodiment, the output interface (208) is designed to provide a summary report to users, which includes probability scores indicating the likelihood of manipulation in the analyzed images. This quantitative feedback mechanism aids users in understanding the analysis outcome and the level of confidence in the detection of manipulations.
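The summary report's shape might look as follows. The field names, the 0.5 decision point, and the confidence formula are assumptions for illustration; the specification requires only that probability scores be included.

```python
# Illustrative shape of the summary report the output interface (208) might
# render; field names and the confidence formula are assumptions.
def summary_report(filename: str, fake_probability: float) -> dict:
    """Bundle the module's probability score into a user-facing report."""
    verdict = "likely manipulated" if fake_probability >= 0.5 else "likely authentic"
    return {
        "file": filename,
        "fake_probability": round(fake_probability, 3),
        "verdict": verdict,
        # Confidence expressed as distance from the undecided 0.5 point,
        # rescaled to [0, 1].
        "confidence": round(abs(fake_probability - 0.5) * 2, 3),
    }
```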
In an embodiment, the system also integrates a preprocessing module which enhances the quality of images before they are analyzed by the machine learning module (204). This preprocessing step is critical for ensuring that the images are in an optimal condition for analysis, thereby reducing the chances of misclassification due to poor image quality.
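A preprocessing pass of this kind commonly means resizing to the model's input dimensions and normalising pixel values. The sketch below assumes nearest-neighbour resizing and [0, 1] scaling; the filing does not name specific operations, so both choices are illustrative.

```python
# Minimal sketch of a preprocessing module: resize + normalise.
# Nearest-neighbour resizing and the 224-pixel default are assumptions.
import numpy as np

def preprocess(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize a grayscale image to size x size (nearest neighbour) and
    scale 8-bit pixel values into [0, 1] for the downstream model."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```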
In another embodiment, the user interface offers an option for users to report suspected deepfake images to a monitoring authority directly through the web-based platform (202). This feature encourages user participation in the detection process and facilitates the collection of data that can further refine the system's accuracy.
In another embodiment, the output interface (208) enhances the user experience by displaying comparative visuals between the uploaded images and similar authentic images from the database (206). This visual comparison not only aids in the verification process but also educates users about the characteristics of deepfake manipulations, contributing to broader awareness and understanding of digital image authenticity.
FIG. 3 presents a flowchart that outlines the operational process of a system to analyze the authenticity of a photo. The process begins with the user uploading a photo to a website; this is the first step in the sequence, where the user interacts with the website interface to submit a photo for analysis. Once the photo is uploaded, the website sends the photo to a Python-based service or environment. Python is a programming language commonly used for data analysis, machine learning, and running web services, among other applications. After the photo has been sent to the Python service, a machine learning model evaluates whether the photo is 'Fake' or 'Real'. The models used are trained on large datasets of both authentic and manipulated images and are capable of detecting various signs of tampering or artificial generation that may indicate a photo is not genuine. If the model identifies the photo as 'Fake', it triggers a message that is sent back to the website containing the result of the analysis: the photo is fake. The communication between the Python service and the website is critical here, as it ensures that the result of the complex analysis is transmitted back to the user interface, where the user receives feedback regarding the authenticity of the uploaded photo.
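The upload-to-verdict flow of FIG. 3 can be sketched as three plain functions standing in for the three actors. The byte-marker rule inside the stub model, the JSON message format, and all function names are assumptions; a real deployment would use HTTP between the website and the Python service and a trained classifier in place of the stub.

```python
# The website -> Python service -> model -> message flow of FIG. 3, sketched
# with plain functions as stand-ins for the actual networked components.
import json

def model_predict(image_bytes: bytes) -> str:
    """Stub for the trained model's Fake/Real decision.
    Illustrative rule only: flag uploads carrying a fake marker prefix."""
    return "Fake" if image_bytes.startswith(b"FAKE") else "Real"

def python_service(image_bytes: bytes) -> str:
    """The Python-based service: run the model, return a JSON message."""
    return json.dumps({"result": model_predict(image_bytes)})

def website_upload(image_bytes: bytes) -> str:
    """Website side: forward the upload, then relay the verdict to the user."""
    reply = json.loads(python_service(image_bytes))
    return f"The photo is {reply['result'].lower()}."
```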
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

Claims

I/We claim:

1. A method (100) for detecting digital image manipulations, the method comprising:
a. providing a user interface configured to receive an upload of a digital image from a user;
b. transmitting the uploaded digital image to a server;
c. analyzing the uploaded digital image using an AI system trained on an extensive dataset comprising authentic and deepfake images;
d. determining, by a machine learning module (204), whether the uploaded digital image is a deepfake image or an authentic image based on characteristics learned from the extensive dataset; and
e. communicating the result of the determination to the user through the user interface.
2. The method (100) of claim 1, wherein the machine learning module (204) utilizes a machine learning technique selected from the group consisting of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs).
3. A system (200) for real-time detection of deepfake images, comprising:
a. a web-based platform (202) with a user interface for uploading digital images;
b. a machine learning module (204) hosted on a server and communicatively coupled to the user interface;
c. a database (206) containing an extensive dataset of authentic and manipulated digital images for training the AI system; and
d. an output interface (208) for displaying the results of the analysis performed by the AI system.
4. The system (200) of claim 3, wherein the user interface allows users to receive feedback on the authenticity of the uploaded images within a specified timeframe.
5. The system (200) of claim 3, wherein the machine learning module (204) utilizes convolutional neural networks to analyze the digital images.
6. The system (200) of claim 3, wherein the output interface (208) provides a summary report including probability scores indicating the likelihood that an image is manipulated.
7. The system (200) of claim 3, further comprising a preprocessing module for enhancing the quality of images before they are analyzed by the machine learning module (204).
8. The system (200) of claim 3, wherein the user interface includes an option for users to report suspected deepfake images to a monitoring authority through the web-based platform (202).
9. The system (200) of claim 3, wherein the output interface (208) displays comparative visuals between uploaded images and similar authentic images from the database (206).

REAL-TIME DETECTION OF MANIPULATED DIGITAL IMAGES USING AI

A method and system are disclosed for detecting manipulations in digital images through real-time analysis. The method involves receiving an upload of a digital image via a user interface, transmitting the image to a server, and analyzing the image using an AI system trained on a comprehensive dataset of authentic and manipulated images. The determination of the image as either a deepfake or authentic is made by a machine learning module, with the result communicated back to the user. The system includes a web-based platform for image upload, a machine learning module on a server, a database of images for AI training, and an interface for result display. This enables accurate and efficient identification of deepfake images, enhancing digital content authenticity verification.

Drawings: FIG. 1, FIG. 2, FIG. 3

Documents

Application Documents

# Name Date
1 202421033396-OTHERS [26-04-2024(online)].pdf 2024-04-26
2 202421033396-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf 2024-04-26
3 202421033396-FORM 1 [26-04-2024(online)].pdf 2024-04-26
4 202421033396-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf 2024-04-26
5 202421033396-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf 2024-04-26
6 202421033396-DRAWINGS [26-04-2024(online)].pdf 2024-04-26
7 202421033396-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf 2024-04-26
8 202421033396-COMPLETE SPECIFICATION [26-04-2024(online)].pdf 2024-04-26
9 202421033396-FORM-9 [07-05-2024(online)].pdf 2024-05-07
10 202421033396-FORM 18 [08-05-2024(online)].pdf 2024-05-08
11 202421033396-FORM-26 [13-05-2024(online)].pdf 2024-05-13
12 202421033396-FORM 3 [13-06-2024(online)].pdf 2024-06-13
13 202421033396-RELEVANT DOCUMENTS [17-04-2025(online)].pdf 2025-04-17
14 202421033396-POA [17-04-2025(online)].pdf 2025-04-17
15 202421033396-FORM 13 [17-04-2025(online)].pdf 2025-04-17