
Digital Media Authenticity Verification System Utilizing Genetic Encoding Techniques

Abstract: The present disclosure provides a system (100) for attributing and identifying the source of deepfake content in digital media, comprising: a preprocessing module (102) configured to receive digital content suspected of being a deepfake and to standardize, remove noise, and format said content for analysis; an encoding module (104) configured to encode the preprocessed digital content into a sequence of digital markers, wherein said sequence represents a unique genetic code of said digital content; a deepfake DNA generation module (106) configured to generate a deepfake DNA based on the encoded sequence, wherein said deepfake DNA serves as a digital fingerprint of said digital content, comprising information about the origin and characteristics of said digital content; a comparison module (108) configured to compare the complete deepfake DNA sequence against a database of known deepfake DNA sequences to identify the source of the deepfake content; and a digital footprint analysis module (110) configured to conduct a digital footprint analysis to validate the source identification and provide insights into the characteristics and origins of the deepfake content. Fig. 1


Patent Information

Application #
Filing Date
26 April 2024
Publication Number
23/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

MARWADI UNIVERSITY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
RESHMA SUNIL
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
MS. PARITA MER
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
MR. PARTH PARMAR
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
MR. YOGESHWAR PRAJAPATI
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Inventors

1. MS. RESHMA SUNIL
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
2. MS. PARITA MER
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
3. MR. PARTH PARMAR
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
4. MR. YOGESHWAR PRAJAPATI
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Specification

Description

Field of the Invention

The present disclosure generally relates to digital media security. Particularly, the present disclosure relates to a system for attributing and identifying the source of deepfake content in digital media.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
A detailed account of existing systems and techniques used in digital content analysis, particularly in the detection and attribution of deepfake content, provides useful context for the present invention. Because no single state-of-the-art method dominates this field, the discussion below covers general trends, challenges, and the need for innovation within the context of deepfake detection and attribution.
Deepfake technology, leveraging artificial intelligence, has seen rapid advancements, enabling the creation of highly realistic synthetic media. This technology allows for the alteration of a person's likeness in images or videos, resulting in content that is increasingly difficult to identify as authentic or manipulated. The sophistication of deepfakes poses significant challenges in various domains, including security, misinformation, and privacy, thereby necessitating robust methods for their detection and source attribution.
Traditional methods for deepfake detection have largely focused on identifying inconsistencies or anomalies in visual elements, such as facial features, lighting, and textures. These techniques, while effective in early iterations of deepfake technology, are becoming less reliable as the algorithms used to generate deepfakes continue to evolve. The ability of these algorithms to produce artifacts that closely mimic authentic content significantly undermines the efficacy of visual cue-based detection methods.
Furthermore, the reliance on digital watermarking and metadata analysis as methods for source attribution and authenticity verification also presents limitations. Advanced deepfake algorithms can strip or alter metadata, and digital watermarks can be obscured or removed, further complicating the task of tracing the origin of deepfake content.
In addition to visual and metadata-based approaches, behavioral analysis has emerged as a method for detecting deepfakes. This involves examining the subtleties in human behavior, speech patterns, and movements that may be difficult for AI algorithms to replicate accurately. Despite the promise of this approach, it requires extensive data and sophisticated analysis tools, which may not be practical or accessible for all applications.
Moreover, the development of deep learning-based detection methods represents a significant advancement in the field. These methods train models on vast datasets of authentic and deepfake content, aiming to learn distinguishing features that can accurately classify media. While promising, the effectiveness of these models is contingent upon the diversity and quality of the training data, and they may struggle to generalize to new or unseen types of deepfake content.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and techniques for identifying and determining the origin of deepfake content. Such solutions must address the evolving nature of deepfake technology, the limitations of current detection and attribution methods, and the critical need for robust, adaptable, and accessible tools capable of safeguarding the integrity of digital content.

Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
In an increasingly digital world, the proliferation of deepfake content has emerged as a significant challenge, compromising the integrity of digital media and misleading public perception. To combat this, a novel system has been designed for attributing and identifying the source of deepfake content in digital media. This system, comprising several interconnected modules, offers a comprehensive solution to the deepfake dilemma. At its core, the system includes a preprocessing module, an encoding module, a deepfake DNA generation module, a comparison module, and a digital footprint analysis module. Each component plays a crucial role in dissecting and understanding the origins and characteristics of deepfake content, thus providing a robust mechanism for its identification and mitigation.
In an embodiment, the preprocessing module is adept at receiving digital content suspected of being a deepfake and standardizing it for further analysis. This includes noise removal, formatting, and even adjustments to brightness levels, resizing, and cropping to ensure uniformity across the board. Such meticulous preprocessing is critical for the accurate analysis and attribution of deepfake content, setting the stage for the subsequent encoding process.
In an embodiment, the encoding module encodes the preprocessed digital content into a sequence of digital markers. This sequence acts as a unique genetic code for the digital content, encapsulating its essence in a form that is both unique and secure. The encoding process employs a proprietary algorithm, safeguarding the uniqueness of the encoded sequence and preventing unauthorized replication or tampering.
In an embodiment, following the encoding process, the deepfake DNA generation module generates what is termed the deepfake DNA. This digital fingerprint contains vital information about the digital content's origin and characteristics, essentially encapsulating its entire digital lineage. The module iterates the generation process as necessary to ensure the completeness and accuracy of the deepfake DNA.
In an embodiment, the comparison module compares the complete deepfake DNA sequence against a database of known deepfake DNA sequences. This comparison, enhanced by machine learning algorithms, serves to identify the source of the deepfake content accurately. Such technology enables the precise attribution of deepfake content to its source, representing a significant step forward in the ongoing effort against digital misinformation.
In an embodiment, the digital footprint analysis module further validates the source identification, offering insights into the deepfake content's characteristics and origins. This involves analyzing patterns in content creation and distribution channels, providing a comprehensive view of the deepfake's digital trajectory. This module's findings are presented in a user-friendly visual format, courtesy of an integrated visualization module, making the complex data accessible and understandable to a broader audience.
In an embodiment, the system's scalability and accessibility are bolstered by its implementation on a cloud computing platform, ensuring that the solution can be deployed widely and effectively across various sectors. This cloud-based approach facilitates the handling of vast amounts of data and complex computational tasks, crucial for the real-time analysis and identification of deepfake content.
In an embodiment, the method for utilizing this comprehensive system involves a series of steps that mirror the functional flow of the system's modules. Starting with preprocessing suspected deepfake content, the method progresses through encoding, deepfake DNA generation, comparison against known sequences, and culminates in a digital footprint analysis. This methodical approach ensures a thorough and accurate attribution of deepfake content, providing a significant advantage in the digital domain's integrity maintenance.
The method for attributing and identifying the source of deepfake content in digital media encapsulates a meticulous process beginning with the preprocessing of suspected content to standardize and clean it for analysis. This is followed by encoding the content into a unique sequence of digital markers, effectively creating a digital fingerprint known as deepfake DNA, which encapsulates the content's genetic code. This DNA is then compared against a database of known sequences using advanced machine learning algorithms to accurately identify the source of the deepfake. Finally, a digital footprint analysis is conducted, providing deeper insights into the origins and characteristics of the content. This comprehensive method, from preprocessing to deep analysis, ensures the precise attribution and understanding of deepfake content, maintaining the integrity of digital media.

Brief Description of the Drawings

The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a system (100) designed for attributing and identifying the source of deepfake content in digital media, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a method for attributing and identifying the source of deepfake content in digital media utilizing the system (100), in accordance with the embodiments of the present disclosure.
FIG. 3 illustrates a working decision flow diagram for attributing and identifying the source of deepfake content in digital media utilizing the system (100), in accordance with the embodiments of the present disclosure.
FIG. 4 illustrates a flow diagram for attributing and identifying the source of deepfake content in digital media utilizing the system, in accordance with the embodiments of the present disclosure.
Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a system (100) designed for attributing and identifying the source of deepfake content in digital media, in accordance with the embodiments of the present disclosure. The system (100) designed for attributing and identifying the source of deepfake content in digital media integrates a preprocessing module (102), which plays a pivotal role in the initial analysis phase. Upon receiving digital content that raises suspicions of being manipulated through deepfake technologies, the preprocessing module (102) embarks on a meticulous process of standardizing the content. This standardization process involves adjusting various aspects of the digital content to ensure uniformity across different formats and sources. Furthermore, the module is equipped to remove any form of noise—unwanted or irrelevant data that could potentially obscure the analysis. Noise removal is essential for clarifying the content, enhancing the accuracy of detection in later stages. Additionally, the preprocessing module (102) formats the digital content, converting it into a suitable format that aligns with the requirements of subsequent analytical processes. The formatting task includes adjusting the resolution, aspect ratio, and other technical specifications to match the input requirements of the encoding module (104). By executing these tasks, the preprocessing module (102) ensures that the digital content is optimally prepared for the intricate process of deepfake identification, thereby setting a solid foundation for the subsequent modules in the system to perform their functions effectively.
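By way of a non-limiting illustration, the standardization performed by the preprocessing module (102) may be sketched as follows for a grayscale frame represented as rows of 0-255 pixel values. The helper names, the 128-level target brightness, and the 4x4 output resolution are assumptions introduced for illustration only; the disclosure does not specify these parameters.

```python
def normalize_brightness(frame, target_mean=128):
    """Shift pixel values so the frame's mean brightness hits target_mean."""
    pixels = [p for row in frame for p in row]
    shift = target_mean - sum(pixels) / len(pixels)
    return [[min(255, max(0, round(p + shift))) for p in row] for row in frame]

def resize_nearest(frame, out_h, out_w):
    """Nearest-neighbour resize to a standard analysis resolution."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)] for i in range(out_h)]

def preprocess(frame, out_h=4, out_w=4):
    """Standardize a frame: brightness normalization, then resizing."""
    return resize_nearest(normalize_brightness(frame), out_h, out_w)

frame = [[10, 20], [30, 40]]       # a tiny 2x2 "suspect" frame
std = preprocess(frame)            # 4x4 frame with mean brightness near 128
```

Noise removal and cropping would be additional passes of the same shape; they are omitted here for brevity.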
Following the initial preprocessing phase, the encoding module (104) assumes responsibility for transforming the prepared digital content into a sequence of digital markers. This intricate process involves the application of advanced algorithms that dissect the content into its constituent elements, encoding these elements into a digital sequence that reflects the unique characteristics of the content. The sequence generated through this process serves as a distinct genetic code for the digital content, akin to a biological DNA sequence that uniquely identifies an organism. This unique encoding is pivotal for the creation of a deepfake DNA, as it encapsulates the essence of the digital content in a form that is both identifiable and analyzable. The encoding module (104) thus bridges the gap between the raw, preprocessed content and its representation as a sequence of digital markers, facilitating a nuanced analysis that underpins the identification of deepfake content. The success of this module in accurately encoding the content directly influences the efficacy of the deepfake DNA generation module (106) in constructing a digital fingerprint that is both representative and discriminative.
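The encoding step above can be illustrated with a simple block-hashing scheme: the preprocessed byte stream is split into fixed-size blocks and each block is hashed into a short digital marker. The 8-byte block size and 8-hex-digit marker length are assumptions for illustration; the disclosure's proprietary encoding algorithm is not public.

```python
import hashlib

def encode_markers(content: bytes, block_size: int = 8) -> list:
    """Encode content as an ordered sequence of short hash-derived markers."""
    return [hashlib.sha256(content[i:i + block_size]).hexdigest()[:8]
            for i in range(0, len(content), block_size)]

markers = encode_markers(b"suspected deepfake frame data")
```

Because hashing is deterministic, identical content always yields the identical marker sequence, which is what allows the sequence to serve as a stable "genetic code" for the content.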
The deepfake DNA generation module (106) further advances the system's capability to identify and attribute the source of deepfake content. Leveraging the encoded sequence provided by the encoding module (104), this module synthesizes a comprehensive digital fingerprint known as the deepfake DNA. This synthesis involves a complex process of aggregating the encoded markers into a cohesive sequence that embodies the digital content's unique attributes. The resultant deepfake DNA is rich in information, detailing the origin, methodology, and characteristics inherent in the digital content. This digital fingerprint is instrumental in distinguishing between genuine and manipulated content, offering a robust tool for the forensic analysis of digital media. By generating a deepfake DNA, the module not only facilitates the identification of the content's source but also contributes to a broader understanding of the techniques and technologies employed in the creation of deepfake content. This understanding is critical in the development of countermeasures and policies aimed at mitigating the impact of deepfakes on society.
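A toy sketch of the aggregation performed by the deepfake DNA generation module (106) follows: the ordered markers are folded into a single fingerprint string rendered over a four-letter alphabet, by analogy with biological DNA. The A/C/G/T mapping and the 64-base length are purely illustrative assumptions.

```python
import hashlib

BASES = "ACGT"

def generate_dna(markers: list) -> str:
    """Fold an ordered marker sequence into a 64-base deepfake DNA string."""
    digest = hashlib.sha256("".join(markers).encode()).digest()
    # Two bases per digest byte: the top two 2-bit pairs index into BASES.
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in digest for shift in (6, 4))

dna = generate_dna(["a1b2c3d4", "e5f6a7b8"])
```

The resulting string is compact, deterministic, and sensitive to any change in the marker sequence, which is the property a digital fingerprint needs.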
The comparison module (108) plays a crucial role in the system by meticulously comparing the deepfake DNA sequence against a comprehensive database of known deepfake DNA sequences. This database serves as a repository of digital fingerprints associated with previously identified deepfake content, encompassing a wide array of origins and characteristics. The comparison process entails a detailed analysis of the similarities and differences between the newly generated deepfake DNA and the sequences stored in the database. Through this analytical process, the comparison module (108) is able to pinpoint specific markers or sequences that match or closely resemble those of known deepfakes. Such matches provide conclusive evidence regarding the source of the deepfake content, enabling the system to attribute the content to particular creators or methods of creation. The ability of the comparison module (108) to accurately match the deepfake DNA with existing sequences is pivotal in the fight against digital content manipulation, offering a means to trace and understand the proliferation of deepfakes across digital media platforms.
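The matching performed by the comparison module (108) can be sketched with a per-position identity score, a stand-in for whatever similarity measure the system actually employs; the 0.8 threshold and the source labels are assumptions for illustration.

```python
def similarity(a: str, b: str) -> float:
    """Fraction of aligned positions where two DNA strings agree."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def identify_source(query: str, database: dict, threshold: float = 0.8):
    """Return the best-matching known source, or None below the threshold."""
    best = max(database, key=lambda src: similarity(query, database[src]))
    return best if similarity(query, database[best]) >= threshold else None

db = {"generator-A": "ACGTACGT", "generator-B": "TTTTCCCC"}
match = identify_source("ACGTACGA", db)   # 7/8 positions agree with generator-A
```

A query that resembles no stored sequence falls below the threshold and is reported as unattributed rather than forced onto the nearest entry.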
The digital footprint analysis module (110) culminates the system's efforts in attributing and identifying the source of deepfake content by conducting an exhaustive analysis of the digital footprint associated with the content. This analysis delves into the metadata, distribution patterns, and other digital traces left by the content as it traverses through various digital channels. By examining these elements, the digital footprint analysis module (110) provides insights into the methodologies and technologies used in the creation of the deepfake content. This information is invaluable for validating the source identification made by the comparison module (108) and for understanding the broader context in which the deepfake was created and disseminated. The insights gained from the digital footprint analysis not only augment the system's ability to combat the spread of deepfake content but also contribute to the development of more sophisticated detection and attribution techniques. Through this comprehensive approach, the digital footprint analysis module (110) enhances the overall efficacy of the system in addressing the challenges posed by deepfake content in digital media.
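A minimal sketch of the digital footprint analysis module (110): given hypothetical metadata records for observed sightings of the content, tally the distribution channels and creation-tool strings so the attribution can be cross-checked. The record fields and values are invented for illustration.

```python
from collections import Counter

def footprint_summary(records: list) -> dict:
    """Tally channels and creation tools across observed sightings."""
    return {
        "channels": Counter(r["channel"] for r in records),
        "tools": Counter(r.get("tool", "unknown") for r in records),
    }

sightings = [
    {"channel": "video-site", "tool": "faceswap-x"},
    {"channel": "social", "tool": "faceswap-x"},
    {"channel": "social"},               # sighting with no tool metadata
]
summary = footprint_summary(sightings)
```

A dominant tool string that matches the source identified by the comparison module would corroborate the attribution; a mismatch would flag it for review.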
In an embodiment, the preprocessing module (102) of the system (100) is further configured to adjust brightness levels, resize, and crop the digital content to ensure uniformity in the analysis process. This enhancement allows for the digital content to undergo a thorough standardization process, which is critical for maintaining consistency across diverse content types and sources. By adjusting the brightness levels, the module ensures that the luminosity of the digital content is optimal for analysis, thereby reducing the impact of variable lighting conditions on the identification process. Resizing and cropping functionalities are integrated to modify the dimensions of the digital content, aligning it with predefined criteria essential for subsequent analytical stages. These functionalities facilitate the removal of extraneous parts of the content, focusing the analysis on relevant sections. The adjustments made by the preprocessing module (102) are instrumental in creating a standardized dataset of digital content, which significantly enhances the system's ability to accurately process and analyze suspected deepfake content. By ensuring uniformity in the preprocessing phase, the system (100) enhances the reliability of the deepfake identification process, laying a strong foundation for the encoding and analysis that follow.
In another embodiment, the system (100) encompasses a database module containing a plurality of known deepfake DNA sequences against which the comparison module (108) compares the generated deepfake DNA sequence. This database module serves as a comprehensive repository of digital fingerprints associated with previously identified deepfake content. The integration of the database module into the system (100) facilitates a dynamic comparison process, wherein the deepfake DNA generated from the analyzed content is meticulously matched against the stored sequences. The presence of such a vast collection of known deepfake DNA sequences enhances the system's capability to accurately identify the source of the deepfake content by enabling precise matching based on unique digital markers. This comparison process is crucial for attributing the deepfake content to specific origins, thereby aiding in the efforts to combat the spread of manipulated digital media. The database module significantly contributes to the system's overall effectiveness by providing a robust framework for the verification and identification of deepfake content.
In a further embodiment, the encoding module (104) of the system (100) employs a proprietary algorithm to ensure the uniqueness and security of the encoded sequence of digital markers. The proprietary algorithm is designed to meticulously convert the digital content into a sequence that is both unique to the content and secure from unauthorized access or manipulation. By utilizing such a specialized algorithm, the encoding module (104) generates digital markers that accurately represent the intrinsic characteristics of the digital content, while also safeguarding the encoded sequence against potential security threats. This approach not only enhances the reliability of the deepfake DNA as a tool for content identification but also protects the integrity of the data involved in the analysis process. The proprietary nature of the algorithm ensures that the encoding process remains a closely guarded component of the system (100), thereby preventing the circumvention of the system's security measures and maintaining the efficacy of the deepfake identification process.
In yet another embodiment, the comparison module (108) of the system (100) utilizes machine learning algorithms to enhance the accuracy of source identification. By integrating machine learning algorithms, the comparison module (108) is endowed with the capability to learn from the database of known deepfake DNA sequences, improving its ability to detect subtle patterns and markers indicative of deepfake content. This adaptive learning process enables the module to refine its comparison techniques over time, leading to increasingly accurate identification of deepfake sources. The use of machine learning algorithms represents a significant advancement in the system's analytical capabilities, allowing for a dynamic and evolving approach to the identification of manipulated digital content. Through the application of these advanced algorithms, the system (100) becomes more adept at distinguishing between genuine and deepfake content, thereby bolstering the efforts to mitigate the impact of digital content manipulation.
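The adaptive matching described above can be sketched as a nearest-neighbour classifier over DNA strings reduced to base-frequency feature vectors. A deployed system would use a trained model; this sketch, with invented training labels, only illustrates the learn-from-the-database idea.

```python
def features(dna: str) -> list:
    """Feature vector: relative frequency of each base in the DNA string."""
    return [dna.count(b) / len(dna) for b in "ACGT"]

def nearest_source(query: str, labelled: list) -> str:
    """Label of the training sequence closest to the query in feature space."""
    qf = features(query)
    def dist(item):
        f = features(item[0])
        return sum((a - b) ** 2 for a, b in zip(qf, f))
    return min(labelled, key=dist)[1]

train = [("AAAACCGT", "generator-A"), ("TTTTGGCA", "generator-B")]
label = nearest_source("AAAACGGT", train)   # closest to generator-A's profile
```

As new attributed sequences are added to the training set, the classifier's decision boundary shifts accordingly, which is the "refine its comparison techniques over time" behavior in miniature.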
In an additional embodiment, the system (100) further comprises a visualization module configured to present the identified source and insights from the digital footprint analysis module (110) in a user-friendly visual format. This visualization module is designed to transform complex data and analysis results into intuitive, visual representations that facilitate easier comprehension and interpretation of the information. By employing graphical representations, charts, and other visual aids, the module makes the insights derived from the digital footprint analysis readily accessible to users, regardless of their technical expertise. The visualization module enhances the user experience by providing a clear and concise overview of the analysis outcomes, including the source of the deepfake content and the patterns in content creation and distribution channels. The inclusion of such a module significantly contributes to the system's utility, enabling stakeholders to make informed decisions based on the insights provided by the system (100).
In an embodiment, the digital footprint analysis module (110) of the system (100) is further configured to analyze patterns in content creation and distribution channels associated with the deepfake content. This configuration allows the module to delve into the intricacies of how and where the deepfake content is produced and disseminated across digital platforms. By examining these patterns, the module identifies commonalities and deviations in the creation and distribution methodologies, offering valuable insights into the operations of entities involved in the production of deepfake content. Such analysis is crucial for understanding the ecosystem within which deepfake content proliferates, providing key information that aids in the development of strategies to counteract the spread of manipulated digital media. The enhanced capabilities of the digital footprint analysis module (110) ensure a comprehensive examination of the deepfake content's lifecycle, from creation to distribution, augmenting the system's effectiveness in addressing the challenges posed by digital content manipulation.
In a further embodiment, the deepfake DNA generation module (106) of the system (100) is configured to iterate the generation process until a complete and accurate deepfake DNA sequence is achieved. This iterative process ensures that the generated deepfake DNA fully encapsulates the unique characteristics of the digital content, refining the digital fingerprint until it represents a precise match to the content's intrinsic properties. Through repeated iterations, the module optimizes the deepfake DNA sequence, enhancing the fidelity of the digital fingerprint and its utility as a tool for identifying and attributing deepfake content. This commitment to accuracy and completeness in the generation of deepfake DNA underlines the system's dedication to providing reliable and robust solutions for combating the spread of manipulated digital media.
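The iterative refinement can be sketched as a loop that keeps generating fingerprint material until a completeness criterion is met. Here the criterion is simply reaching a required length, which is an assumption standing in for whatever completeness test the disclosure intends.

```python
import hashlib

def generate_dna_iterative(markers: list, length: int = 100) -> str:
    """Iterate hash passes, appending output until the DNA is complete."""
    dna, counter = "", 0
    while len(dna) < length:                  # completeness criterion
        seed = "".join(markers) + str(counter)
        dna += hashlib.sha256(seed.encode()).hexdigest()
        counter += 1
    return dna[:length]

dna = generate_dna_iterative(["aa", "bb"])    # two passes needed for 100 chars
```

Each pass is deterministic, so the loop always converges to the same complete sequence for the same marker input.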
In an embodiment, the system (100) is implemented on a cloud computing platform to ensure scalability and accessibility. The adoption of a cloud-based infrastructure allows for the flexible allocation of resources, accommodating varying levels of demand and ensuring that the system remains responsive and efficient under diverse operating conditions. Furthermore, cloud implementation enhances the accessibility of the system, enabling users to engage with the system from any location with internet connectivity. This global accessibility is crucial for the widespread adoption and effectiveness of the system in identifying and combating deepfake content. The cloud-based approach also facilitates the integration of updates and improvements to the system, ensuring that it remains at the forefront of technology in the ongoing fight against digital content manipulation.
FIG. 2 illustrates a method for attributing and identifying the source of deepfake content in digital media utilizing the system (100), in accordance with the embodiments of the present disclosure. At step (202) the method (200) begins with the preprocessing of digital content suspected of being a deepfake, utilizing the preprocessing module (102). This step involves standardizing the content to ensure consistency, removing noise to clarify the data, and formatting it appropriately for analysis. At step (204) the preprocessed digital content is encoded into a sequence of digital markers by the encoding module (104). This step transforms the content into a unique genetic code, representing the intrinsic characteristics of the digital content in a structured format. At step (206) the deepfake DNA generation module (106) generates a deepfake DNA from the encoded sequence. This deepfake DNA acts as a digital fingerprint, encapsulating the unique properties and origins of the digital content for identification purposes. At step (208) the method then involves comparing the deepfake DNA against a database of known deepfake DNA sequences using the comparison module (108). This comparison aims to match the digital fingerprint with existing sequences to accurately identify the source of the deepfake content. At step (210) a digital footprint analysis is conducted by the digital footprint analysis module (110). This step validates the source identification and provides deeper insights into the characteristics and origins of the deepfake content, completing the attribution process.
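The five steps of method (200) can be wired together end to end with toy stand-ins for each module. Every helper below is an illustrative assumption; the disclosure does not publish the actual algorithms behind any step.

```python
import hashlib
from collections import Counter

def preprocess(content: bytes) -> bytes:            # step (202): standardize
    return content.strip().lower()                  # crude normalization

def encode(content: bytes) -> list:                 # step (204): markers
    return [hashlib.sha256(content[i:i + 8]).hexdigest()[:8]
            for i in range(0, len(content), 8)]

def generate_dna(markers: list) -> str:             # step (206): fingerprint
    return hashlib.sha256("".join(markers).encode()).hexdigest()

def compare(dna: str, db: dict):                    # step (208): attribution
    return next((src for src, known in db.items() if known == dna), None)

def footprint(records: list) -> Counter:            # step (210): validation
    return Counter(r["channel"] for r in records)

content = b"  Suspected Deepfake Clip  "
dna = generate_dna(encode(preprocess(content)))
db = {"generator-A": dna}          # pretend this DNA was seen before
source = compare(dna, db)          # attribution succeeds
channels = footprint([{"channel": "social"}, {"channel": "social"}])
```

The pipeline shape mirrors steps (202) through (210): each stage consumes exactly the output of the previous one, so the attribution at step (208) is only as reliable as the standardization at step (202).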
FIG. 3 illustrates a working decision flow diagram for attributing and identifying the source of deepfake content in digital media utilizing the system (100), in accordance with the embodiments of the present disclosure. Initiation of the method begins with preprocessing, where the digital content suspected of being a deepfake is standardized, noise is removed, and the content is formatted for analysis. Subsequent to preprocessing, the content undergoes sequence encoding, creating a unique genetic code akin to a deepfake's genome. The deepfake DNA generation module then constructs a digital fingerprint of the content, iterating this process until the DNA sequence is deemed complete. Upon completion, the generated deepfake DNA is compared against a database containing known deepfake DNA sequences, facilitating the identification of the source of the deepfake. This comparison stage is critical, as it ensures the accuracy of the match between the newly generated deepfake DNA and the existing database entries. Once a match is established, source identification is affirmed, and an enhancement of the method is achieved through digital footprint analysis, which examines the creation and distribution patterns of the content. The culmination of this process results in an output that is visualized, providing a user-friendly representation of the identified source and the insights gained from the analysis, effectively concluding the attribution process.
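The iterate-until-complete loop of FIG. 3 might be structured as follows. The disclosure does not state the completeness criterion, so the `is_complete` predicate (a minimum marker count in the example) is an assumption:

```python
def generate_dna_iteratively(marker_stream, is_complete):
    # FIG. 3 loop: accumulate markers until the DNA sequence is
    # deemed complete, then emit the finished fingerprint.
    dna = []
    for marker in marker_stream:
        dna.append(marker)
        if is_complete(dna):
            break
    return "-".join(dna)

# Example completeness test (assumed): at least four markers collected.
dna = generate_dna_iteratively(["a1", "b2", "c3", "d4", "e5"],
                               lambda seq: len(seq) >= 4)
print(dna)  # prints "a1-b2-c3-d4"
```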
FIG. 4 illustrates a flow diagram for attributing and identifying the source of deepfake content in digital media utilizing the system (100), in accordance with the embodiments of the present disclosure. Post preprocessing, the content is subjected to genome sequence encoding, where it is transformed into a sequence of digital markers that encapsulate the content's unique attributes. This sequence is then used in the deepfake DNA generation phase, creating a digital fingerprint of the content. DNA matching follows, where the newly created deepfake DNA is matched against existing sequences to identify potential similarities. A database comparison is conducted, cross-referencing the deepfake DNA with known sequences to pinpoint the source. Once a source is identified, the process extends to footprint analysis enhancement, scrutinizing the distribution and creation patterns associated with the deepfake content. This comprehensive analysis yields valuable insights, which are then presented in a visual format during the output and visualization stage. The flow diagram captures the sequential and interdependent nature of each step, emphasizing the system's integrated approach to combating digital content manipulation.
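The DNA matching and database comparison stages of FIG. 4 could, for instance, score position-wise similarity between sequences. The Hamming-style measure and the 0.9 threshold below are illustrative assumptions, not the disclosed algorithm:

```python
def similarity(a: str, b: str) -> float:
    # Fraction of positions at which two equal-length DNA strings agree.
    if len(a) != len(b) or not a:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_match(dna: str, database: dict, threshold: float = 0.9):
    # Cross-reference the new DNA against known sequences and return
    # the most similar source, provided it clears the threshold.
    best_source, best_score = None, 0.0
    for source, known_dna in database.items():
        score = similarity(dna, known_dna)
        if score > best_score:
            best_source, best_score = source, score
    return best_source if best_score >= threshold else None

database = {"generator-A": "ACGTACGT", "generator-B": "ACGTTTTT"}
print(best_match("ACGTACGT", database))  # prints "generator-A"
```

Thresholded similarity, rather than exact matching, would let the system attribute content that has been re-encoded or lightly edited after generation.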
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein, refers to random access memory, read-only memory, and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

Claims

I/We claim:

1. A system (100) for attributing and identifying the source of deepfake content in digital media, comprising:
a preprocessing module (102) configured to receive digital content suspected of being a deepfake and to standardize, remove noise, and format said content for analysis;
an encoding module (104) configured to encode the preprocessed digital content into a sequence of digital markers, wherein said sequence represents a unique genetic code of said digital content;
a deepfake DNA generation module (106) configured to generate a deepfake DNA based on the encoded sequence, wherein said deepfake DNA serves as a digital fingerprint of said digital content, comprising information about the origin and characteristics of said digital content;
a comparison module (108) configured to compare the complete deepfake DNA sequence against a database of known deepfake DNA sequences to identify the source of the deepfake content; and
a digital footprint analysis module (110) configured to conduct a digital footprint analysis to validate the source identification and provide insights into the characteristics and origins of the deepfake content.
2. The system (100) of claim 1, wherein the preprocessing module (102) is further configured to adjust brightness levels, resize, and crop the digital content to ensure uniformity in the analysis process.
3. The system (100) of claim 1, further comprising a database module containing a plurality of known deepfake DNA sequences against which the deepfake DNA generation module (106) compares the generated deepfake DNA sequence.
4. The system (100) of claim 1, wherein the encoding module (104) employs a proprietary algorithm to ensure the uniqueness and security of the encoded sequence of digital markers.
5. The system (100) of claim 1, wherein the comparison module (108) utilizes machine learning algorithms to enhance the accuracy of source identification.
6. The system (100) of claim 1, further comprising a visualization module configured to present the identified source and insights from the digital footprint analysis module (110) in a user-friendly visual format.
7. The system (100) of claim 1, wherein the digital footprint analysis module (110) is further configured to analyze patterns in content creation and distribution channels associated with the deepfake content.
8. The system (100) of claim 1, wherein the deepfake DNA generation module (106) is configured to iterate the generation process until a complete and accurate deepfake DNA sequence is achieved.
9. The system (100) of claim 1, wherein the system is implemented on a cloud computing platform to ensure scalability and accessibility.
10. A method for attributing and identifying the source of deepfake content in digital media utilizing the system (100), comprising:
preprocessing digital content suspected of being a deepfake to standardize, remove noise, and format said content for analysis using the preprocessing module (102);
encoding the preprocessed digital content into a sequence of digital markers to represent a unique genetic code of said digital content using the encoding module (104);
generating a deepfake DNA from the encoded sequence, wherein the deepfake DNA serves as a digital fingerprint of said digital content, using the deepfake DNA generation module (106);
comparing the deepfake DNA against a database of known deepfake DNA sequences to identify the source of the deepfake content using the comparison module (108); and
conducting a digital footprint analysis to further validate the source identification and provide insights into the characteristics and origins of the deepfake content using the digital footprint analysis module (110).

DIGITAL MEDIA AUTHENTICITY VERIFICATION SYSTEM UTILIZING GENETIC ENCODING TECHNIQUES

The present disclosure provides a system (100) for attributing and identifying the source of deepfake content in digital media, comprising: a preprocessing module (102) configured to receive digital content suspected of being a deepfake and to standardize, remove noise, and format said content for analysis; an encoding module (104) configured to encode the preprocessed digital content into a sequence of digital markers, wherein said sequence represents a unique genetic code of said digital content; a deepfake DNA generation module (106) configured to generate a deepfake DNA based on the encoded sequence, wherein said deepfake DNA serves as a digital fingerprint of said digital content, comprising information about the origin and characteristics of said digital content; a comparison module (108) configured to compare the complete deepfake DNA sequence against a database of known deepfake DNA sequences to identify the source of the deepfake content; and a digital footprint analysis module (110) configured to conduct a digital footprint analysis to validate the source identification and provide insights into the characteristics and origins of the deepfake content.
Fig. 1

Drawings
FIG. 1 / FIG. 2 / FIG. 3 / FIG. 4


Documents

Application Documents

# Name Date
1 202421033096-OTHERS [26-04-2024(online)].pdf 2024-04-26
2 202421033096-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf 2024-04-26
3 202421033096-FORM 1 [26-04-2024(online)].pdf 2024-04-26
4 202421033096-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf 2024-04-26
5 202421033096-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf 2024-04-26
6 202421033096-DRAWINGS [26-04-2024(online)].pdf 2024-04-26
7 202421033096-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf 2024-04-26
8 202421033096-COMPLETE SPECIFICATION [26-04-2024(online)].pdf 2024-04-26
9 202421033096-FORM-9 [07-05-2024(online)].pdf 2024-05-07
10 202421033096-FORM 18 [08-05-2024(online)].pdf 2024-05-08
11 202421033096-FORM-26 [12-05-2024(online)].pdf 2024-05-12
12 202421033096-FORM 3 [13-06-2024(online)].pdf 2024-06-13
13 202421033096-RELEVANT DOCUMENTS [01-10-2024(online)].pdf 2024-10-01
14 202421033096-POA [01-10-2024(online)].pdf 2024-10-01
15 202421033096-FORM 13 [01-10-2024(online)].pdf 2024-10-01
16 202421033096-FER.pdf 2025-07-23
17 202421033096-FORM-8 [03-09-2025(online)].pdf 2025-09-03
18 202421033096-FER_SER_REPLY [03-09-2025(online)].pdf 2025-09-03
19 202421033096-DRAWING [03-09-2025(online)].pdf 2025-09-03
20 202421033096-CORRESPONDENCE [03-09-2025(online)].pdf 2025-09-03
21 202421033096-COMPLETE SPECIFICATION [03-09-2025(online)].pdf 2025-09-03
22 202421033096-CLAIMS [03-09-2025(online)].pdf 2025-09-03
23 202421033096-ABSTRACT [03-09-2025(online)].pdf 2025-09-03

Search Strategy

1 3096E_10-07-2024.pdf