
Deep Fake And Content Verification System For Digital Threat Mitigation

Abstract: A deep fake and content verification system for digital threat mitigation comprises an inverted U-shaped frame 101 configured to be attached to a front portion of an electronic gadget; a laser sensor installed on the frame 101 for detecting dimensions of the electronic gadget; a microphone 104 provided on the frame 101 to receive voice input regarding scanning and analysis of digital content displayed on the screen of the electronic gadget; an imaging unit 107 synced with CNN and LSTM modules installed on the sheet 106 for detecting altered facial features, mismatched body movements, and inconsistencies in video frames; a cross-spectral analysis protocol that analyzes merged audio clips and identifies spectral signature inconsistencies; an OCR sensor 109 embedded in the sheet 106 that scans and analyzes textual content displayed on the screen; and an IR sensor configured on the sheet 106 to emit infrared light onto a digital document or signature displayed on the screen.


Patent Information

Application #
Filing Date
25 November 2024
Publication Number
51/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

Marwadi University
Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.

Inventors

1. Aditya Singh
Department of Computer Engineering – Artificial Intelligence, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
2. Yash Singh
Department of Computer Engineering, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
3. Krutika Panigrahi
Department of Computer Engineering, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
4. Subham Patra
Department of Computer Engineering, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
5. Dr. Madhu Shukla
Professor, Head of Department, Department of Computer Science Engineering - Artificial Intelligence, Machine Learning, Data Science, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
6. Simrin Fathima Syed
Assistant Professor, Department of Computer Science Engineering - Artificial Intelligence, Machine Learning, Data Science, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
7. Vipul Ladva
Assistant Professor, Department of Computer Science Engineering - Artificial Intelligence, Machine Learning, Data Science, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
8. Akshay Ranpariya
Assistant Professor, Department of Computer Science Engineering - Artificial Intelligence, Machine Learning, Data Science, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
9. Neel Dholakia
Assistant Professor, Department of Computer Science Engineering - Artificial Intelligence, Machine Learning, Data Science, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
10. Jay Navinbhai Maheshwari
Department of Computer Engineering – Artificial Intelligence, Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.

Specification

Description:

FIELD OF THE INVENTION

[0001] The present invention relates to a deep fake and content verification system for digital threat mitigation that provides a means to verify deep fake content of a video being played on an electronic gadget by detecting altered facial features, mismatched body movements, and inconsistencies in video frames, in order to make the user aware of the authenticity of the content and thereby control digital threats.

BACKGROUND OF THE INVENTION

[0002] Deep fake content nowadays causes widespread harm across multiple sectors, including the spread of misinformation, erosion of trust, and reputational damage. These technologies allow the creation of highly convincing yet entirely fabricated media, which misleads the public, manipulates opinions, and distorts facts, particularly in political, social, and news-related contexts. Such misinformation undermines the credibility of legitimate sources and sways public perception, leading to confusion and division. Additionally, deep fake content is often used to target individuals, damaging their reputations by impersonating them or creating false narratives. As these technologies evolve, they not only pose a threat to personal privacy and security but also contribute to a broader societal challenge, where people begin to distrust all forms of digital content.

[0003] Traditionally, the tools a user employs to verify deep fake content for digital threat mitigation include manual inspection, where users visually assess videos or images for inconsistencies such as unnatural facial movements or mismatched audio; image and video analysis software, which examines metadata, pixel-level anomalies, or compression artifacts to detect tampering; and reverse image or video search engines that trace the origin of content and verify its authenticity. Further, audio analysis tools identify irregularities in voice recordings, and machine learning-based detection systems recognize subtle artifacts indicative of deep fakes. These conventional methods, while useful, often require significant time and expertise, prompting the need for more advanced, automated, or integrated solutions to enhance accuracy and efficiency in detecting manipulated digital content.

[0004] US20220129664A1 discloses a deep fake video detection system, including an input data detection module of a video recognition unit for setting a target video; a data pre-processing unit for detecting eye features from the face in the target video; a feature extraction module for extracting eye features and inputting the eye features to a long-term recurrent convolutional neural network (LRCN); and then using a sequence of long-term and short-term memory (LSTM) of a learning module; performing sequence learning; using a state prediction module to predict the output of each neuron, and then using a long and short-term memory model to output the quantized eye state, then connecting to a state quantification module, and comparing the original stored data from the normal video and the quantified eye state information of the target video, and outputting the recognition result by an output data recognition module.

[0005] US20210142065A1 discloses a system for detecting synthetic videos may include a server, a plurality of weak classifiers, and a strong classifier. The server may be configured to receive a prediction result from each of a plurality of weak classifiers; and send the prediction results from each of the plurality of weak classifiers to a strong classifier. The weak classifiers may be trained on real videos and known synthetic videos to analyse a distinct characteristic of a video file; detect irregularities of the distinct characteristic; generate a prediction result associated with the distinct characteristic, the prediction result being a prediction on whether the video file is synthetic; and output the prediction result to the server. The strong classifier may be trained to receive the prediction results of the plurality of weak classifiers from the server; analyse the prediction results; and determine if the video file is synthetic based on the prediction results.

[0006] Conventionally, many devices disclosed in the prior art provide a way to verify deep fake content for digital threat mitigation by utilizing techniques such as image and video analysis, audio verification, machine learning protocols, and reverse content verification to detect inconsistencies and identify manipulated media, but these require significant computational resources, time, and expertise, highlighting the need for a more efficient way to verify content.

[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a system capable of verifying deep fake content of a video on an electronic gadget for digital threat mitigation by utilizing techniques to analyze altered facial features, mismatched body movements, and inconsistencies in video frames, and accordingly providing the user with detailed information about audio manipulation and plagiarism or false claims.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a system that is capable of verifying deep fake content of a video being played on an electronic gadget for digital threat mitigation by utilizing techniques to detect altered facial features, mismatched body movements, and inconsistencies in video frames, in order to make the user aware of the authenticity of the content.

[0010] Another object of the present invention is to develop a system that is capable of analyzing merged audio clips, including speech mixed with background noise, and identifying spectral signature inconsistencies to detect manipulation or tampering in the audio of the content.

[0011] Another object of the present invention is to develop a system that is capable of performing an analysis on the frequency profile of merged audio clips to detect unnatural transitions between components of a clip, in order to provide the user with detailed information about audio manipulation.

[0012] Yet another object of the present invention is to develop a system that is capable of analyzing textual content displayed on the screen of the gadget to assess the authenticity of the text and detect plagiarism or false claims in the video.

[0013] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0014] The present invention relates to a deep fake and content verification system for digital threat mitigation that is capable of verifying deep fake content of a video shown on an electronic gadget by detecting altered facial features, mismatched body movements, and inconsistencies in video frames, analyzing merged audio clips, performing an analysis on the frequency profile of merged audio clips, and analyzing textual content displayed on the screen of the gadget, in order to assess the authenticity level of the content.

[0015] According to an embodiment of the present invention, a deep fake and content verification system for digital threat mitigation comprises an inverted U-shaped frame configured to be attached to a front portion of an electronic gadget; a laser sensor installed on the frame for detecting dimensions of the electronic gadget; a microcontroller linked with the laser sensor that, based on the detected dimensions, regulates actuation of a drawer mechanism integrated with the frame to modulate the dimension of the frame; multiple suction units arranged on a bottom portion of the frame to secure the frame over the electronic gadget; a microphone provided on the frame for receiving voice commands of a user regarding scanning and analyzing of digital content displayed on the screen of the electronic gadget; a motorized roller positioned along an inner periphery of a top edge of the frame to roll out a transparent sheet coiled around the roller and align the sheet with the screen of the electronic gadget; a motorized slider and clipper mechanism attached along the inner periphery of the frame for securing the transparent sheet over the screen; an artificial intelligence-based imaging unit synced with Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) modules installed on the sheet for detecting altered facial features, mismatched body movements, and inconsistencies in video frames; and a cross-spectral analysis protocol integrated with the microcontroller that analyzes merged audio clips and identifies spectral signature inconsistencies to detect manipulation or tampering in audio.

[0016] According to another embodiment of the present invention, the proposed system further comprises a speaker mounted on the frame that provides the user with detailed information about audio manipulation; an optical character recognition (OCR) sensor embedded in the sheet that scans and analyzes textual content displayed on the screen to assess the authenticity of the text; an infrared (IR) sensor configured on the sheet to emit infrared light onto a digital document or signature displayed on the screen; a display panel attached to a top portion of the frame for displaying real-time alerts, information about detected manipulations, and relevant verified content; and an LED (Light Emitting Diode) light embedded on the frame for providing a visual alert when false or misleading information is detected.

[0017] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of a deep fake and content verification system for digital threat mitigation; and
Figure 2 illustrates a flowchart depicting the working methodology of the proposed system.

DETAILED DESCRIPTION OF THE INVENTION

[0019] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0020] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0021] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0022] The present invention relates to a deep fake and content verification system for digital threat mitigation that is capable of verifying deep fake content of a video being played on an electronic gadget by analyzing inconsistencies in the video, analyzing merged audio clips, performing an analysis on the frequency profile of merged audio clips, and analyzing textual content displayed on the screen of the gadget, thereby providing information about fake content to control digital threats.

[0023] Referring to Figures 1 and 2, an isometric view of a deep fake and content verification system for digital threat mitigation and a flowchart depicting the working methodology of the proposed system are illustrated, respectively, comprising an inverted U-shaped frame 101 integrated with a drawer mechanism 102, multiple suction units 103 arranged on a bottom portion of the frame 101, a microphone 104 provided on the frame 101, a motorized roller 105 positioned along an inner periphery of the top edge of the frame 101, a transparent sheet 106 coiled around the roller 105, an artificial intelligence-based imaging unit 107 installed on the sheet 106, a speaker 108 mounted on the frame 101, an optical character recognition (OCR) sensor 109 embedded in the sheet 106, a display panel 110 attached to a top portion of the frame 101, an LED (Light Emitting Diode) light 111 embedded on the frame 101, and a motorized slider 112 and clipper mechanism 113 attached along the inner periphery of the frame 101.

[0024] The proposed system comprises an inverted U-shaped frame 101 encasing various components associated with the system, arranged in a sequential manner, that aid in verifying deep fake content for digital threat mitigation. Herein, the frame 101 is attached to a front portion of an electronic gadget by the user, who then activates the system by pressing a switch button integrated with the frame 101. The button mentioned herein is a type of switch internally connected with the system via multiple circuits; upon pressing by the user, the circuits close and start conducting electricity, which activates the system, and vice versa.

[0025] After activation of the system by the user, a microcontroller associated with the system generates commands to operate the system accordingly. Upon activation, a laser sensor integrated with the frame 101 detects the dimensions of the electronic gadget attached to the frame 101. The laser sensor sends laser beams to two points on the gadget, forming a triangle between the gadget and the point on the laser sensor from which the lasers are emitted. The beams bounce back towards the sensor and are sensed together with the angle formed between the emitting point and the surface on which the laser beams impact. The laser sensor, after detecting the required data about the area of the gadget, sends the data to the microcontroller. After receiving the data, the microcontroller analyzes it to detect the dimensions of the gadget.
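The triangulation step above can be sketched numerically. The patent does not specify the sensor geometry, so the flat-screen assumption, the beam angles, and the function name below are illustrative only:

```python
import math

def surface_span(distance_m, angle1_rad, angle2_rad):
    """Lateral span between two laser impact points on a flat surface
    at `distance_m`, for beams emitted at the given angles from the
    surface normal (basic triangulation geometry)."""
    return distance_m * abs(math.tan(angle2_rad) - math.tan(angle1_rad))

# Example: a gadget 0.5 m away, beams at 0 rad and atan(0.4) rad,
# gives a 0.2 m span between the two impact points.
width = surface_span(0.5, 0.0, math.atan(0.4))
```

From two such spans, one horizontal and one vertical, the microcontroller could derive the gadget's width and height before actuating the drawer mechanism 102.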

[0026] Based on the detected dimensions of the gadget, the microcontroller actuates a drawer mechanism 102 integrated with the frame 101 to alter the dimension of the frame 101 as per the detected dimensions. The drawer mechanism 102 comprises a carriage assembly and a DC (direct current) motor that work in collaboration to extend and retract the frame 101. The carriage assembly is fitted with two rails used for sliding the block up and down. The block opening is located at the end of the rail and has two clips used to secure the ring to the frame 101. To extend the drawer, the drawer is pushed open and the carriage assembly slides outward. This creates an opening allowing extension and retraction of the frame 101 in accordance with the detected dimension of the gadget, for proper attachment of the electronic gadget to the frame 101.

[0027] Simultaneously, the microcontroller actuates multiple suction units 103 assembled on the bottom portion of the frame 101 to secure the frame 101 over the electronic gadget. The suction unit 103 operates by generating a vacuum or negative pressure, creating a strong adhesive force that securely holds the frame 101 over the electronic gadget. The microcontroller controls the suction units 103 by activating small pumps or valves, which regulate the vacuum level to ensure a stable attachment by maintaining the right amount of suction force against the gadget.

[0028] Upon attachment of the frame 101 to the gadget, the user accesses a microphone 104 provided on the frame 101 to give voice commands regarding scanning and analyzing the digital content displayed on the screen of the electronic gadget. The microphone 104 receives sound waves generated by the energy emitted from the voice command in the form of vibrations. The sound waves are then transmitted towards a diaphragm configured with a coil. As the waves strike the diaphragm, the coil moves with the diaphragm in a back-and-forth motion in the presence of the magnetic field associated with the coil.

[0029] The back-and-forth movement of the diaphragm causes the coil to emit an electric signal, which is transmitted to the microcontroller linked with the microphone 104; the microcontroller processes the signal to detect the voice command given by the user. Upon processing the voice commands, the microcontroller actuates a motorized roller 105 attached along an inner periphery of the top edge of the frame 101 to roll out a transparent sheet 106 coiled around the roller 105 and align the sheet 106 with the screen of the electronic gadget. The roller 105 is coupled with a motor activated by the microcontroller to rotate the roller 105 at a specified speed in order to roll out the sheet 106 and align it with the screen for scanning and analyzing the digital content displayed on the screen.
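The command-detection step can be sketched as follows, assuming (hypothetically) that the microcontroller matches keywords in an already-recognized transcript; the speech-recognition front end and the exact command vocabulary are not specified in the patent:

```python
def parse_command(transcript):
    """Map a recognized utterance to a system action via keyword
    matching (illustrative; the command set is hypothetical)."""
    text = transcript.lower()
    if "scan" in text or "analyze" in text:
        return "DEPLOY_SHEET"   # roll out sheet 106 and start analysis
    if "stop" in text:
        return "RETRACT_SHEET"  # halt analysis and retract the sheet
    return None                 # unrecognized utterance: do nothing
```

On a `DEPLOY_SHEET` result, the microcontroller would drive the motorized roller 105 as described above.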

[0030] Simultaneously, the microcontroller actuates a clipper mechanism 113 attached along the inner periphery of the frame 101 via a motorized slider 112 for securing the transparent sheet 106 over the screen. The clipper mechanism 113 is linked with a hinge activated by the microcontroller to provide back-and-forth movement to the mechanism 113 for gripping the sheet 106. The microcontroller then actuates the slider 112 to translate the sheet 106 so that it covers the screen of the gadget.

[0031] The slider 112 mentioned herein consists of a rail unit that provides a guided path for linear movement. The rail unit usually includes a pair of parallel rails or tracks along which the slider 112 moves. The slider carriage, also called a stage or platform, is equipped with a mechanism to minimize friction and ensure smooth motion. The slider 112 incorporates a motor and a drive mechanism to generate linear motion. The motor is connected to a drive mechanism such as a belt, lead screw, or ball screw. The drive mechanism converts the rotational motion of the motor into linear motion, propelling the slider carriage along the rail unit to translate the sheet 106 and secure the transparent sheet 106 over the screen, so that the content displayed on the screen is overlapped by the sheet 106.

[0032] While the content is displayed on the screen, an artificial intelligence-based imaging unit 107 synced with Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) modules installed on the sheet 106 detects altered facial features, mismatched body movements, and inconsistencies in video frames. The imaging unit 107 mentioned herein comprises a camera and a processor that work in collaboration to capture and process images of the screen's surroundings. The camera first captures multiple images, wherein the camera comprises a body, electronic shutter, lens, lens aperture, image sensor, and imaging processor that work in a sequential manner to capture images of the surroundings.

[0033] During image capture, the shutter opens automatically, so the reflected beam of light coming from the surroundings is directed towards the lens aperture. The reflected light beam then passes to the image sensor. The image sensor analyzes the beam to retrieve a signal, which is further calibrated by the sensor to capture images of the surroundings as an electronic signal. Upon capturing the images, the imaging processor converts the electronic signal into a digital image. When image capture is done, the processor associated with the imaging unit 107 processes the captured images using an artificial intelligence protocol to retrieve data from them in the form of a digital signal. The detected data is then transmitted to the linked microcontroller, which acquires the data to detect the video presented on the screen.

[0034] Simultaneously, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) modules detect altered facial features, mismatched body movements featured in the video, and inconsistencies in video frames. The CNN module works by analyzing visual patterns in the video frames to detect altered facial features, identifying key points such as eyes, mouth, and overall facial structure. The CNN module processes spatial data to capture static features, recognizing facial distortions in real-time. The LSTM module, on the other hand, processes the temporal sequence of frames to detect mismatched body movements and inconsistencies over time. By combining the CNN's spatial analysis and the LSTM's ability to track temporal changes, the modules effectively identify whether the video is a deep fake.
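The patent discloses no network weights or thresholds, so the following is only a structural sketch of the fusion logic: per-frame spatial scores stand in for the CNN branch, a frame-difference statistic stands in for the LSTM's temporal branch, and the weights and decision threshold are invented for illustration:

```python
import numpy as np

def temporal_inconsistency(frames):
    """Stand-in for the LSTM branch: mean absolute change between
    consecutive frames; abrupt jumps can indicate spliced or
    altered frames."""
    frames = np.asarray(frames, dtype=float)
    if len(frames) < 2:
        return 0.0
    return float(np.abs(np.diff(frames, axis=0)).mean())

def fuse_scores(spatial_scores, temporal_score,
                w_spatial=0.6, w_temporal=0.4, threshold=0.5):
    """Combine per-frame spatial (CNN-like) scores with the temporal
    score into a single deep-fake decision (weights are assumed)."""
    combined = (w_spatial * float(np.mean(spatial_scores))
                + w_temporal * temporal_score)
    return combined, combined > threshold
```

A real implementation would replace both stand-ins with trained CNN and LSTM models; the point here is only the spatial-plus-temporal fusion described above.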

[0035] Moreover, the microphone 104 captures audio during the consumption of content; the microphone 104 records audio associated with historical or biographical data and sends the audio to a remote server for analysis, comparing the audio data with verified historical records to detect false or misleading information. Upon detecting misleading information, the microcontroller actuates an LED (Light Emitting Diode) light 111 embedded on the frame 101 to alert the user via a visual indicator. The working principle of the LED light 111 is based on electroluminescence: when activated by the microcontroller, recombining holes and electrons generate photons. The recombination process involves electrons jumping from the conduction band to the valence band, releasing energy in the form of photons that make the light glow, providing visual alerts for false or misleading information.

[0036] Additionally, a cross-spectral analysis protocol integrated with the microcontroller is disclosed herein for analyzing merged audio clips of the video, including speech mixed with background noise, and identifying spectral signature inconsistencies to determine manipulation or tampering in the audio. The cross-spectral analysis protocol works by decomposing the merged audio clips into individual frequency components using techniques such as the Fast Fourier Transform (FFT). After decomposing the merged audio, the cross-spectral analysis protocol compares the spectral signatures of different segments of the audio, such as speech and background noise, to detect manipulation or tampering.
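The FFT-based comparison can be illustrated as follows; the band count, the normalization, and the distance measure are assumptions, since the patent names only the FFT decomposition itself:

```python
import numpy as np

def spectral_signature(segment, n_bands=8):
    """Normalized band-energy profile of one audio segment: the
    magnitude spectrum summed into `n_bands` equal-width bands."""
    spectrum = np.abs(np.fft.rfft(segment))
    energies = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    return energies / (energies.sum() + 1e-12)

def signature_distance(seg_a, seg_b):
    """Total band-energy mismatch between two segments; a large value
    hints that the segments came from different recordings."""
    return float(np.abs(spectral_signature(seg_a)
                        - spectral_signature(seg_b)).sum())
```

Two segments cut from the same recording should score near zero, while a segment spliced in from elsewhere shifts energy into different bands and scores high.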

[0037] Upon detecting manipulation or tampering in the audio, the microcontroller performs an analysis on the frequency profile of the merged audio clips to determine unnatural transitions between components of a clip. The microcontroller then actuates a speaker 108 assembled on the frame 101 to provide detailed information about the audio manipulation. The speaker 108 mentioned herein includes a diaphragm, typically made of a lightweight and rigid material like paper, plastic, or metal, that vibrates and produces sound waves when electrical signals are fed to it, notifying the user with detailed information about the audio manipulation.
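One conventional way to expose unnatural transitions is spectral flux: the frame-to-frame change of the magnitude spectrum, which spikes at splice points. The frame and hop sizes below are arbitrary choices, not values from the patent:

```python
import numpy as np

def spectral_flux(signal, frame=256, hop=128):
    """Frame-to-frame change in the magnitude spectrum; a spike marks
    an abrupt spectral transition such as an audio splice point."""
    starts = range(0, len(signal) - frame, hop)
    specs = [np.abs(np.fft.rfft(signal[s:s + frame])) for s in starts]
    return np.array([np.abs(specs[i + 1] - specs[i]).sum()
                     for i in range(len(specs) - 1)])
```

Concatenating two pure tones, for instance, yields near-zero flux inside each tone and a sharp peak at the frames straddling the join.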

[0038] As the electrical signal passes through a voice coil of the speaker 108 suspended within a magnetic gap, the speaker 108 generates a magnetic field that interacts with the fixed magnetic field produced by a magnet assembly associated with the voice coil. As the electrical current varies, the magnetic field produced by the voice coil changes, causing the voice coil and the attached cone/diaphragm to move back and forth. This movement creates pressure variations in the surrounding air, generating audible sound waves that notify the user with detailed information about the audio manipulation.

[0039] Additionally, a display panel 110 is assembled on the top portion of the frame 101 for displaying real-time alerts, information about detected manipulations, and relevant verified content such as news articles, digital documents, and media. The display panel 110 works by using liquid crystals (LCD) that are manipulated by electric currents to control the passage of light through the display unit. When an electric current is applied, the liquid crystals align in a way that either allows light to pass through or blocks it, creating the images and colors visible on the LCD of the display panel 110 regarding the real-time alerts, information about detected manipulations, and relevant verified content.

[0040] An optical character recognition (OCR) sensor 109 embedded in the sheet 106 scans and analyzes the textual content displayed on the screen to assess the authenticity of the text and determine plagiarism or false claims. The OCR sensor 109 works by using optical imaging technology to capture images of the textual content displayed on the screen. The sensor 109 then processes these images using pattern recognition protocols to identify individual characters and words. The sensor 109 converts the visual data into machine-readable text by analyzing the shapes and structures of the characters and comparing them to a pre-trained database of fonts and symbols from the microcontroller. Assessing the authenticity of the text then includes analyzing writing style, word choice, sentence structure, and grammar to detect plagiarism or false claims in the content.
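The pattern-recognition step can be sketched with a toy nearest-template classifier; the 3×3 "font" below is hypothetical and vastly smaller than any real OCR font database:

```python
import numpy as np

# Hypothetical 3x3 binary glyph templates standing in for a font database.
TEMPLATES = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]]),
    "T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]]),
}

def recognize_glyph(bitmap):
    """Return the template character with the most matching pixels
    (nearest-template classification of one binarized glyph)."""
    bitmap = np.asarray(bitmap)
    scores = {ch: int((bitmap == t).sum()) for ch, t in TEMPLATES.items()}
    return max(scores, key=scores.get)
```

A production OCR engine would add segmentation, scaling, and a far richer feature space, but the shape-comparison principle described above is the same.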

[0041] Upon detecting plagiarism or false claims in the content, the microcontroller further compares the publication dates and authorship of documents or articles from the database to determine the original source and legitimacy of the content. Herein, the microcontroller is linked to a database storing verified information that includes signature patterns, images of account holders, authentic digital documents, articles, and worldwide news sources; this database is consulted to assist in authenticity verification.
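A minimal sketch of the originality check, assuming the database rows reduce to (publication date, author) pairs; the record layout and source names are invented for illustration:

```python
from datetime import date

def earliest_source(records):
    """Given candidate records {source: (publication_date, author)},
    return the source with the earliest publication date — treated
    here as the presumed original."""
    return min(records, key=lambda src: records[src][0])

# Hypothetical candidates for one contested article.
records = {
    "blog-repost": (date(2024, 3, 1), "unknown"),
    "news-archive": (date(2021, 6, 12), "J. Doe"),
}
```

With these records, the earlier `news-archive` entry would be reported as the original source.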

[0042] Herein, an infrared (IR) sensor is integrated on the sheet 106 to emit infrared light onto a digital document or signature displayed on the screen. The IR sensor works by emitting infrared light onto the digital document or signature; when the infrared light interacts with the surface, it reflects back to the sensor, which measures the intensity and pattern of the reflected light. Variations in the reflection identify unique characteristics of the document or signature, making it possible to identify differences in ink composition, detect overwriting or alterations, and provide alerts for potential forgeries via the speaker 108, thereby detecting unauthorized alterations or forgeries.
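The reflectance check can be sketched as a simple outlier test on the measured intensities; the relative tolerance is an assumed value, as the patent gives no numeric criterion:

```python
import numpy as np

def reflectance_anomalies(intensities, tol=0.15):
    """Indices where the measured IR reflectance deviates from the
    document-wide median by more than `tol` (relative) — candidate
    regions of overwriting, alteration, or a different ink."""
    x = np.asarray(intensities, dtype=float)
    median = np.median(x)
    return np.where(np.abs(x - median) > tol * median)[0]
```

Any flagged index would then trigger the forgery alert through the speaker 108 as described above.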

[0043] A battery (not shown in the figures) is associated with the system to power all electrical and electronic components necessary for their correct operation. The battery is linked to the microcontroller and provides Direct Current (DC) to the microcontroller. Then, based on the order of operations, the microcontroller routes that current to the specific electrical or electronic components so that they effectively carry out their appropriate functions.

[0044] The present invention works best in the following manner. The inverted U-shaped frame 101 disclosed in the invention is developed to be attached to the electronic gadget and is installed with the laser sensor for detecting the dimensions of the electronic gadget, based on which the microcontroller actuates the drawer mechanism 102 to modulate the dimension of the frame 101, followed by actuation of the suction units 103 to secure the frame 101 over the electronic gadget. The user then accesses the microphone 104 to give voice commands regarding scanning and analyzing the digital content displayed on the screen of the electronic gadget. Upon receiving the user's commands, the microcontroller actuates the motorized roller 105 to roll out the transparent sheet 106 and align the sheet 106 with the screen of the electronic gadget for scanning and analyzing the digital content displayed on the screen. Herein, the artificial intelligence-based imaging unit 107 synchronized with the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) modules detects altered facial features, mismatched body movements, and inconsistencies in video frames to determine whether a video is a deep fake. Also, the cross-spectral analysis protocol analyzes merged audio clips, including speech mixed with background noise, and identifies spectral signature inconsistencies to detect manipulation or tampering in the audio.

[0045] In continuation, upon detecting manipulation or tampering in audio, the microcontroller analyzes the frequency profile of the merged audio clips to detect unnatural transitions between components of the clip, and via the speaker 108 provides the user with detailed information about the audio manipulation. Herein, the optical character recognition (OCR) sensor 109 scans and analyzes textual content displayed on the screen to assess the authenticity of the text and detect plagiarism or false claims, and the microcontroller further compares publication dates and authorship of documents or articles to determine the original source and legitimacy of the content. Also, the infrared (IR) sensor emits infrared light onto a digital document or signature displayed on the screen and captures reflections at various wavelengths to identify differences in ink composition, detect overwriting or alterations, and provide alerts for potential forgeries, thereby detecting unauthorized alterations or forgeries.
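The frequency-profile analysis for unnatural transitions can be illustrated with a spectral-flux check: at a splice point between merged audio components, the short-time magnitude spectrum jumps far more than it does within either component. This is a hedged sketch with an assumed frame length and threshold factor, not the cross-spectral protocol itself.

```python
import numpy as np

def spectral_flux(signal: np.ndarray, frame_len: int = 256) -> np.ndarray:
    """Frame-to-frame change in the magnitude spectrum (spectral flux)."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    mags = np.abs(np.fft.rfft(frames, axis=1))        # (n, frame_len//2 + 1)
    return np.linalg.norm(np.diff(mags, axis=0), axis=1)  # (n - 1,)

def splice_points(signal: np.ndarray, frame_len: int = 256,
                  k: float = 4.0) -> np.ndarray:
    """Frame indices whose flux exceeds k times the median flux —
    candidate splice or tamper transitions in a merged clip."""
    flux = spectral_flux(signal, frame_len)
    return np.flatnonzero(flux > k * (np.median(flux) + 1e-12))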

[0046] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.

Claims:

1) A deep fake and content verification system for digital threat mitigation, comprising:

i) an inverted U-shaped frame 101 configured to be attached with a front portion of an electronic gadget, wherein a laser sensor is installed on said frame 101 for detecting dimensions of said electronic gadget;
ii) a microcontroller linked with said laser sensor, wherein, based on said detected dimensions, said microcontroller regulates actuation of a drawer mechanism 102 integrated with said frame 101 to modulate the dimension of said frame 101, followed by actuation of multiple suction units 103 arranged on a bottom portion of said frame 101 to secure said frame 101 over said electronic gadget;
iii) a microphone 104 provided on said frame 101 for receiving voice commands of a user regarding scanning and analyzing of digital content displayed on screen of said electronic gadget, wherein upon receiving said user’s commands, said microcontroller actuates a motorized roller 105 positioned along an inner periphery of a top edge of said frame 101 to roll out a transparent sheet 106 coiled around said roller 105 and align said sheet 106 with screen of said electronic gadget for scanning and analyzing digital content displayed on said screen;
iv) an artificial intelligence-based imaging unit 107 installed on said sheet 106 and synchronized with a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) modules for detecting altered facial features, mismatched body movements, and inconsistencies in video frames to determine if a video is a deep fake;
v) a cross-spectral analysis protocol integrated with said microcontroller that analyzes merged audio clips, including speech mixed with background noise, and identifies spectral signature inconsistencies to detect manipulation or tampering in audio, wherein said microcontroller performs an analysis on frequency profile of merged audio clips to detect unnatural transitions between components of clip, and said microcontroller via a speaker 108 mounted on said frame 101 provides user with detailed information about audio manipulation;
vi) an optical character recognition (OCR) sensor 109 embedded with said sheet 106 scans and analyzes textual content displayed on said screen to assess authenticity of text, including analyzing writing style, word choice, sentence structure, and grammar to detect plagiarism or false claims, wherein said microcontroller further compares publication dates and authorship of documents or articles to determine original source and legitimacy of the content; and
vii) an infrared (IR) sensor configured on said sheet 106 to emit infrared light onto a digital document or signature displayed on said screen, wherein said IR sensor captures reflections at various wavelengths to identify differences in ink composition, detect overwriting or alterations, and provide alerts for potential forgeries, thus detecting unauthorized alterations or forgeries.

2) The system as claimed in claim 1, wherein a motorized slider 112 and a clipper mechanism 113 are attached along an inner periphery of said frame 101 and are synchronously actuated by said microcontroller for securing said transparent sheet 106 over said screen.

3) The system as claimed in claim 1, wherein said microphone 104 captures audio during consumption of content, said microphone 104 records audio associated with historical or biographical data and sends audio to a remote server for analysis, comparing audio data with verified historical records to detect false or misleading information.

4) The system as claimed in claim 1, wherein a display panel 110 is attached to top portion of frame 101 for displaying real-time alerts, information about detected manipulations, and relevant verified content such as news articles, digital documents, and media.

5) The system as claimed in claim 1, wherein said microcontroller is linked to a database stored with verified information, including signature patterns, images of account holders, authentic digital documents, articles, and worldwide news sources, said database is analyzed to assist in authenticity verification.

6) The system as claimed in claim 1, wherein said system alerts user via a visual indicator, an LED (Light Emitting Diode) light 111 embedded on said frame 101, when false or misleading information is detected.

Documents

Application Documents

# Name Date
1 202421091871-STATEMENT OF UNDERTAKING (FORM 3) [25-11-2024(online)].pdf 2024-11-25
2 202421091871-REQUEST FOR EXAMINATION (FORM-18) [25-11-2024(online)].pdf 2024-11-25
3 202421091871-REQUEST FOR EARLY PUBLICATION(FORM-9) [25-11-2024(online)].pdf 2024-11-25
4 202421091871-PROOF OF RIGHT [25-11-2024(online)].pdf 2024-11-25
5 202421091871-POWER OF AUTHORITY [25-11-2024(online)].pdf 2024-11-25
6 202421091871-FORM-9 [25-11-2024(online)].pdf 2024-11-25
7 202421091871-FORM FOR SMALL ENTITY(FORM-28) [25-11-2024(online)].pdf 2024-11-25
8 202421091871-FORM 18 [25-11-2024(online)].pdf 2024-11-25
9 202421091871-FORM 1 [25-11-2024(online)].pdf 2024-11-25
10 202421091871-FIGURE OF ABSTRACT [25-11-2024(online)].pdf 2024-11-25
11 202421091871-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-11-2024(online)].pdf 2024-11-25
12 202421091871-EVIDENCE FOR REGISTRATION UNDER SSI [25-11-2024(online)].pdf 2024-11-25
13 202421091871-EDUCATIONAL INSTITUTION(S) [25-11-2024(online)].pdf 2024-11-25
14 202421091871-DRAWINGS [25-11-2024(online)].pdf 2024-11-25
15 202421091871-DECLARATION OF INVENTORSHIP (FORM 5) [25-11-2024(online)].pdf 2024-11-25
16 202421091871-COMPLETE SPECIFICATION [25-11-2024(online)].pdf 2024-11-25
17 Abstract.jpg 2024-12-13
18 202421091871-FORM-26 [03-06-2025(online)].pdf 2025-06-03