Abstract: REAL-TIME FAKE INFORMATION ALERT WITHIN BROWSER OR SOCIAL MEDIA PLATFORMS FOR CONTENT EVALUATION The present disclosure relates to a system and method for identifying and alerting a user about fake information in real time within browsers or social media platforms. The system comprises an information scan module which continuously monitors and scans the content accessed by a user on said platforms. A fake information detection module evaluates the scanned data against predefined criteria to identify potential disinformation. Upon detection of fake information, an alert generation module is activated to create an immediate notification. Further, a user interface module is arranged for displaying said real-time alert directly within the user's browser or on the social media platform interface. Fig. 1
Description: REAL-TIME FAKE INFORMATION ALERT WITHIN BROWSER OR SOCIAL MEDIA PLATFORMS FOR CONTENT EVALUATION
Field of the Invention
[0001] The present disclosure pertains to the field of digital content evaluation and cybersecurity, and more specifically to a system and method for providing real-time alerts for fake information within web browsers and social media platforms.
Background
[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] In the domain of digital information dissemination, particularly within browsers and social media platforms, the rapid propagation of fake information presents a significant challenge. The advent of the internet and the subsequent evolution of social media platforms have exponentially increased the speed and volume at which information is shared and consumed. Said advancement, while beneficial in many respects, has also paved the way for the spread of disinformation and misinformation at an unprecedented rate.
[0004] Historically, efforts to combat the spread of fake information have primarily focused on post-hoc corrective measures, such as fact-checking services and user-reporting mechanisms. Said approaches, although valuable, suffer from inherent limitations. For instance, fact-checking services, being largely manual in nature, are unable to keep pace with the volume and velocity of information generated on digital platforms. Said delay between the dissemination of information and the verification often leads to the widespread acceptance of fake information before corrective measures can be effectively implemented. Similarly, user-reporting mechanisms rely heavily on the user’s ability and willingness to identify and report false information, a process fraught with subjectivity and inconsistency.
[0005] Furthermore, existing solutions typically operate outside the ecosystem of the browser or social media platform, necessitating additional steps for the user to verify the authenticity of information. Said separation not only reduces the efficiency of the fake information identification process but also diminishes user engagement with such corrective tools.
[0006] Additionally, current methods for detecting fake information often lack the necessary sophistication to accurately distinguish between false and controversial but true information. The reliance on keyword-based filtering or simplistic algorithms has resulted in a high rate of false positives and negatives, undermining the credibility and effectiveness of said solutions. Said inadequacy is particularly pronounced in the context of dynamically evolving narratives and linguistically nuanced content, where the context and intent behind the information are crucial for accurate classification.
[0007] The lack of real-time intervention in the existing systems is a critical drawback. The time lapse between the publication of information and the identification as fake allows for substantial user exposure, potentially leading to widespread misinformation before any corrective action is taken. Said delay is particularly detrimental in situations where immediate dissemination of accurate information is crucial, such as during public health emergencies or political events.
[0008] Moreover, the absence of a seamless and integrated user interface for alerting the user about fake information within the browser or social media platform further hampers the effectiveness of existing solutions. Said user may often be required to navigate away from their current digital environment to access information verification tools, a process that is both disruptive and time-consuming. Such disintegration results in lower user engagement and diminishes the overall impact of said tools in combating the spread of fake information.
[0009] Therefore, the drawbacks and disadvantages of prior art in the field of real-time fake information detection systems are marked by delays in response, reliance on manual and subjective processes, lack of sophisticated detection mechanisms, absence of real-time interventions, and a disjointed user experience. Said limitations underscore the need for a more integrated, efficient, and accurate system to address the growing challenge of fake information in the digital age. Thus, there exists a need in the art for a system to provide a real-time fake information alert within a browser or a social media platform.
Summary
[00010] The present disclosure pertains to the field of digital content evaluation and cybersecurity, and more specifically to a system and method for providing real-time alerts for fake information within web browsers and social media platforms.
[00011] The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
[00012] The following paragraphs provide additional support for the claims of the subject application.
[00013] In the proposed system, an approach is introduced for providing a real-time alert regarding fake information within browsers and social media platforms. Said system comprises several interconnected modules, each fulfilling a specific role in the process of identifying and alerting the user about disinformation.
[00014] Said system comprises the information scan module that is tasked with scanning content accessed by a user on either the browser or the social media platform. Said module is designed to meticulously analyze both textual and multimedia content, leveraging advanced natural language processing capabilities and image recognition algorithms. The primary objective of said module is to detect nuances and context that may indicate the presence of fake information.
[00015] Subsequent to the information scanning process, the fake information detection module comes into play. Said module is equipped with machine learning algorithms that are continuously refined and updated with new patterns and criteria. The purpose of such an algorithmic approach is to enhance the accuracy of fake information identification, thus reducing false positives and improving reliability.
[00016] Upon the detection of fake information, the alert generation module is activated. Said module is uniquely configured not only to generate a real-time alert but also to categorize the level of severity of the detected fake information. The categorization allows for the modification of the alert in accordance with the seriousness of the disinformation, thereby providing a nuanced response to different types of fake content.
[00017] The user interface module represents the final component of the system. Said interface module is responsible for displaying the real-time alert on the browser or social media platform. Significantly, said module includes customizable settings that permit the user to adjust the sensitivity and specificity of the alert according to their individual preferences. Such customization ensures that the system is adaptable to varying user needs and thresholds for information authenticity.
[00018] Thus, the system is a solution for the real-time detection and alerting of fake information on digital platforms. Said system integrates sophisticated scanning and detection technologies with user-centric customization, thereby offering an advanced, adaptable, and reliable tool for combating the spread of disinformation in the digital age. The combination of said features positions the system as a significant advancement over existing methods in the field of information authenticity and security.
[00019] A method for providing real-time alert on fake information within browsers and social media platforms has been developed. Said method comprises several critical steps, each contributing to the effective identification and communication of disinformation to the user.
[00020] The initial step involves scanning information accessed by a user on the browser or social media platform. In said phase, natural language processing techniques are employed to analyze the textual content. The objective of such analysis is to identify indicators of fake information, which often include linguistic patterns and contextual cues that differ from verified information.
[00021] Following the scanning process, the method involves the identification of fake information. Said step is characterized by the use of sophisticated machine learning algorithms. Said algorithms are continually trained and updated with new patterns and criteria. The purpose of such continual training is to enhance the detection accuracy of the system, thereby reducing instances of false positives and improving the overall reliability of the method.
[00022] Once fake information is identified, the method proceeds to generate a real-time alert. A notable feature of said step is the categorization of the severity of the detected disinformation. Said categorization enables the tailoring of the alert’s presentation based on the level of severity. Such tailored alerts ensure that the user receives information that is not only timely but also reflective of the seriousness of the disinformation.
[00023] The final step in the method involves displaying the real-time alert on the browser or social media platform. Said step is crucial in ensuring that the user is immediately informed about the presence of fake information in the content they are accessing. The prompt display of said alert plays a significant role in preventing the spread and acceptance of fake information by the user.
[00024] Hence, the method presents an effective approach to combating the spread of fake information in digital spaces. By integrating advanced techniques such as natural language processing and machine learning algorithms, and by focusing on the immediate communication of alerts to the user, the method addresses key challenges in the domain of digital information authenticity. The method represents a significant advancement in ensuring the reliability and truthfulness of information consumed on the internet.
Brief Description of the Drawings
[00025] The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
[00026] FIG. 1 represents an architecture of a system for providing a real-time fake information alert within a browser or a social media platform, in accordance with the embodiments of the present disclosure.
[00027] FIG. 2 illustrates a flow diagram of a method for providing a real-time fake information alert within a browser or a social media platform, in accordance with the embodiments of the present disclosure.
[00028] FIG. 3 illustrates a working flow of real-time disinformation alert, in accordance with the embodiments of the present disclosure.
Detailed Description
[00029] In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
[00030] The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
[00031] The present disclosure pertains to the field of digital content evaluation and cybersecurity, and more specifically to a system and method for providing real-time alerts for fake information within web browsers and social media platforms.
[00032] Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
[00033] In the field of information technology, particularly in the area of internet and social media usage, a significant concern has been noted regarding the proliferation of fake information. Said concern has necessitated the development of systems capable of addressing and mitigating the impact of such disinformation. To mitigate said impact, a system 100 has been devised for providing a real-time fake information alert within a browser or a social media platform. The system 100 aims to enhance the reliability of information consumed by the user and to foster a more informed and truthful digital environment.
[00034] Referring to the preceding embodiment, the system 100 comprises several integral components, each playing a pivotal role in the overall functionality of the system. According to a figurative elucidation of FIG. 1, showcasing an architectural composition of the system 100, the system can comprise functional elements including, yet not limited to, an information scan module 102, a fake information detection module 104, an alert generation module 106, and a user interface module 108. The synergy of said modules ensures the efficient detection and alerting of the user regarding the presence of fake information in their digital interactions.
[00035] In yet another embodiment, the information scan module is responsible for scanning the information accessed by a user on the browser or the social media platform. Said scanning is conducted in real-time, ensuring that the information being consumed by the user is continuously monitored for authenticity. The efficacy of said module lies in the ability to parse through vast amounts of data rapidly and accurately, identifying potential sources of disinformation.
[00036] In yet another embodiment, upon the scanning of information, the fake information detection module plays a crucial role. Said module identifies fake information based on matching the scanned data with pre-stored criteria. Said criteria are meticulously developed and regularly updated to encompass a wide array of fake information characteristics. Said update ensures that the detection module remains effective in the ever-evolving landscape of digital disinformation.
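By way of a non-limiting illustration, the matching of scanned data against pre-stored criteria described above may be sketched as follows. The criterion names, regular-expression patterns, and the two-match threshold are assumptions for illustration only and are not mandated by the disclosure.

```python
# Hypothetical sketch: match scanned text against pre-stored criteria.
# Criterion names and patterns below are illustrative assumptions.
import re

PRESTORED_CRITERIA = [
    ("sensational_claim", re.compile(r"\b(shocking|miracle|you won't believe)\b", re.I)),
    ("unverified_source", re.compile(r"\b(sources say|it is rumou?red)\b", re.I)),
    ("urgency_pressure",  re.compile(r"\b(share before it'?s deleted|act now)\b", re.I)),
]

def match_criteria(scanned_text):
    """Return the names of all pre-stored criteria matched by the text."""
    return [name for name, pattern in PRESTORED_CRITERIA
            if pattern.search(scanned_text)]

def is_fake(scanned_text, threshold=2):
    """Flag content as potential disinformation when enough criteria match."""
    return len(match_criteria(scanned_text)) >= threshold
```

In practice the criteria set would be regularly updated from a server-side store, consistent with the update process described in this paragraph.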
[00037] In yet another embodiment, once said disinformation is detected, the alert generation module comes into action. Said module is responsible for generating the real-time alert. The timeliness of said alert is critical in allowing the user to be immediately informed about the potential falsity of the information they are accessing. The ability of said module to operate in real-time is a key feature that distinguishes said system from other, more traditional methods of disinformation detection.
[00038] In yet another embodiment, the user interface module displays the real-time alert on the browser or the social media platform. Said module is designed to be intuitive and user-friendly, ensuring that the said alert is easily understood and actionable. The integration of said module with the browser or social media platform is seamless, providing a non-intrusive yet effective method of alerting the user to fake information.
[00039] Further enhancements to said system are embodied in various additional configurations. For instance, the information scan module includes advanced natural language processing capabilities. Said capabilities allow the module to analyze textual content within the browser or social media platform for nuances and context that may indicate the presence of fake information. Said analysis is detailed, going beyond mere keyword matching to understand the subtleties of language that may betray the presence of disinformation.
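As a minimal, non-limiting sketch of linguistic-cue analysis that goes beyond keyword matching, the following combines a few assumed indicators (all-caps ratio, exclamation density, hedging phrases) into a single score. The cue list and weights are illustrative assumptions, not part of the disclosure.

```python
# Illustrative linguistic-cue scoring; cues and weights are assumptions.
import re

HEDGING = re.compile(r"\b(allegedly|reportedly|some people say)\b", re.I)

def linguistic_cues(text):
    """Extract a few simple stylistic cues often associated with disinformation."""
    words = text.split()
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    return {
        "caps_ratio": caps / max(len(words), 1),
        "exclaim_density": text.count("!") / max(len(text), 1),
        "hedging_hits": len(HEDGING.findall(text)),
    }

def suspicion_score(text):
    """Combine cues into a single 0..1 score (weights are illustrative)."""
    c = linguistic_cues(text)
    score = 2.0 * c["caps_ratio"] + 20.0 * c["exclaim_density"] + 0.3 * c["hedging_hits"]
    return min(score, 1.0)
```

A production module would rely on trained language models rather than hand-set weights; the sketch only shows where such a score would plug into the scan module.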
[00040] In yet another embodiment, the fake information detection module employs machine learning algorithms that are continuously updated with new patterns and criteria. Said ongoing update process ensures that the module remains effective against new and emerging forms of fake information. The machine learning algorithms are trained on vast datasets, allowing them to discern complex patterns that are indicative of fake information.
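The continuously updated machine learning algorithm described above may be sketched, under stated assumptions, as an incrementally trainable classifier. A pure-Python multinomial Naive Bayes is used here only so the sketch stays self-contained; the disclosure does not prescribe a particular algorithm.

```python
# Assumed sketch: an incrementally updatable Naive Bayes text classifier,
# standing in for the continuously updated ML algorithms described.
import math
from collections import defaultdict

class IncrementalNB:
    def __init__(self):
        self.word_counts = {"fake": defaultdict(int), "real": defaultdict(int)}
        self.doc_counts = {"fake": 0, "real": 0}
        self.vocab = set()

    def update(self, text, label):
        """Fold a newly labelled example into the model (continuous update)."""
        self.doc_counts[label] += 1
        for w in text.lower().split():
            self.word_counts[label][w] += 1
            self.vocab.add(w)

    def predict(self, text):
        """Return the more likely label under Laplace-smoothed Naive Bayes."""
        total = sum(self.doc_counts.values())
        best, best_lp = None, -math.inf
        for label in ("fake", "real"):
            lp = math.log((self.doc_counts[label] + 1) / (total + 2))
            n = sum(self.word_counts[label].values())
            for w in text.lower().split():
                lp += math.log((self.word_counts[label][w] + 1) /
                               (n + len(self.vocab) + 1))
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Because `update` can be called at any time, newly confirmed examples of fake information can be folded in without retraining from scratch, mirroring the ongoing update process this paragraph describes.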
[00041] In yet another embodiment, in the alert generation module, an additional feature has been incorporated to categorize the level of severity of detected fake information and to modify the alert accordingly. Said categorization allows the user to understand not just the presence of fake information, but also the potential impact or severity. Such categorization is beneficial in helping the user to prioritize their response to said alert.
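The severity categorization and corresponding alert modification described above may be sketched as follows; the tier boundaries and alert styling are illustrative assumptions only.

```python
# Hedged sketch: map detection confidence to a severity tier and alert style.
# Thresholds and styles are assumptions, not mandated by the disclosure.
def categorize_severity(confidence, topic_is_sensitive):
    """Map a detection confidence (0..1) and topic sensitivity to a tier."""
    if confidence >= 0.9 or (confidence >= 0.7 and topic_is_sensitive):
        return "high"
    if confidence >= 0.5:
        return "medium"
    return "low"

def build_alert(confidence, topic_is_sensitive):
    """Modify the alert's presentation according to the severity tier."""
    severity = categorize_severity(confidence, topic_is_sensitive)
    styles = {
        "high":   {"banner": "blocking", "color": "red"},
        "medium": {"banner": "inline",   "color": "orange"},
        "low":    {"banner": "badge",    "color": "yellow"},
    }
    return {"severity": severity, **styles[severity]}
```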
[00042] In yet another embodiment, the user interface module includes customizable settings, allowing the user to adjust the sensitivity and specificity of the real-time alert. Said customization ensures that the system remains flexible and adaptable to individual user needs. Said user can choose to receive alerts for all detected fake information or only for information that meets certain severity or relevance criteria.
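The customizable sensitivity and specificity settings described above may be sketched as a simple preference filter applied before an alert is surfaced. The setting names and default values are assumptions for illustration.

```python
# Sketch of user-adjustable alert preferences; names/defaults are assumptions.
from dataclasses import dataclass

@dataclass
class AlertPreferences:
    min_confidence: float = 0.5   # raising this yields fewer, more specific alerts
    min_severity: str = "low"     # one of "low", "medium", "high"

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def should_alert(prefs, confidence, severity):
    """Apply the user's thresholds before an alert is surfaced."""
    return (confidence >= prefs.min_confidence and
            SEVERITY_RANK[severity] >= SEVERITY_RANK[prefs.min_severity])
```

A user who only wants high-severity alerts would set `min_severity="high"`, while the defaults alert on everything detected above the base confidence.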
[00043] In yet another embodiment, the information scan module is further configured to analyze multimedia content, including images and videos, using image recognition algorithms. Said capability is essential in the current digital age, where much of the information consumed is in multimedia formats. The ability to analyze such content for potential fake information broadens the scope and effectiveness of the system.
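As a self-contained, non-limiting sketch of multimedia analysis, the following computes an average-hash over a grayscale pixel grid and compares it against hashes of known fake images. A production system would decode real image files and use full image recognition; here the image is a plain 2-D list so the sketch runs standalone, and the known-fake-hash strategy is an assumption.

```python
# Illustrative average-hash matching for image content; a stand-in for the
# image recognition algorithms described. All specifics are assumptions.
def average_hash(pixels):
    """pixels: 2-D list of grayscale values; returns a bit-string hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def matches_known_fake(pixels, known_fake_hashes, max_distance=2):
    """True when the image hash is near any hash of a known fake image."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= max_distance for k in known_fake_hashes)
```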
[00044] Referring to one or more preceding embodiments, the described system 100 represents an approach to tackling the issue of fake information in digital platforms. Through the various modules and enhanced capabilities, said system offers a robust solution to ensure the authenticity and reliability of information consumed by the user in an increasingly digital world. The system stands as a testament to the advancements in technology aimed at fostering a more truthful and informed society.
[00045] Presented herein a detailed description of a method 200 for providing a real-time fake information alert within a browser or a social media platform is provided. Said method 200, characterized by a series of steps, is aimed at enhancing the reliability of information accessed by said user in digital environments such as web browsers and social media platforms. The method 200 comprises several steps, each meticulously designed to contribute to the overall objective of identifying and alerting said user about fake information in real-time.
[00046] Referring to FIG. 2, representing a flow diagram of the method 200 that can comprise steps of, yet not restricted to, (at step 202) scanning information accessed by a user, (at step 204) identifying fake information, (at step 206) generating the real-time alert, and (at step 208) displaying said real-time alert on said browser or the social media platform. Said steps of the method 200 can be performed or executed, collectively or selectively, randomly or sequentially or in a combination thereof, in accordance with the embodiments of current disclosure.
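The sequence of steps 202 through 208 may be sketched as a minimal pipeline; every helper below is an illustrative stand-in (assumption) for the corresponding module of the disclosure, with a toy criterion in place of the real detection logic.

```python
# Non-limiting orchestration sketch of method 200; all helpers are stand-ins.
def scan(content):                      # step 202: scan accessed information
    return content.lower()

def identify(scanned):                  # step 204: toy pre-stored criterion
    return "miracle cure" in scanned

def generate_alert(content):            # step 206: build the real-time alert
    return {"message": "Potential disinformation detected", "content": content}

def display(alert):                     # step 208: render within the platform
    return f"[ALERT] {alert['message']}"

def run_pipeline(content):
    """Execute steps 202-208 in sequence; returns None when nothing is flagged."""
    scanned = scan(content)
    if identify(scanned):
        return display(generate_alert(content))
    return None
```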
[00047] In yet another embodiment, the first step of the method involves scanning information accessed by a user on the browser or the social media platform. Said scanning is a critical component, serving as the foundation for detecting fake information. Said scanning is performed in real-time, ensuring that the information consumed by said user is continuously monitored. The scanning process must be detailed, encompassing various types of digital content including but not limited to text, images, and videos. The complexity of said step lies in the requirement to process vast amounts of data rapidly and accurately.
[00048] In yet another embodiment, the method involves identifying fake information based on matching the scanned data with pre-stored criteria. Said criteria are a collection of characteristics, patterns, and indicators that have been identified as common attributes of fake information. The identification process is sophisticated, involving a comparison of the scanned data against said criteria to ascertain the authenticity of the information. Said step is crucial for ensuring that only genuine instances of fake information are flagged.
[00049] In yet another embodiment, upon the detection of disinformation, the method includes a step for generating the real-time alert. The generation of said alert is immediate, which is vital for prompt user notification. The alert provides said user with information regarding the potential falsity of the content they are accessing. Said step involves not just the creation of an alert but also the customization based on the severity and nature of the detected disinformation.
[00050] In yet another embodiment, the final step of the method can display the real-time alert on the browser or the social media platform. Said display is integral to the method, wherein said display renders the point of interaction with the user. The alert must be designed in a manner that is easily noticeable yet non-intrusive to the user experience. The effectiveness of said step is contingent on the alert's visibility and clarity, ensuring that said user can understand and act upon the information provided.
[00051] In yet another embodiment, the step of scanning information includes employing advanced natural language processing (NLP) techniques. Said techniques are employed to analyze the textual content accessed by a user for indicators of fake information. NLP allows for a nuanced analysis of text, identifying subtle indicators of falsehoods that may not be apparent through simple keyword matching.
[00052] In yet another embodiment, the identification of fake information incorporates the use of machine learning algorithms. Said algorithms are continually trained on updated patterns and criteria, enhancing the detection accuracy over time. The machine learning aspect allows the system to adapt and improve, recognizing new forms of fake information as they emerge.
[00053] In the step of generating the real-time alert, there is a categorization of the severity of detected disinformation. Said categorization enables the tailoring of the alert's presentation based on the level of severity. Said alert can thus be prioritized, with more severe instances of disinformation being highlighted more prominently.
[00054] Referring to one or more preceding embodiments, the described method 200 offers a dynamic approach to combating the issue of fake information in digital platforms. Through the sequential steps and the integration of advanced technologies such as NLP and machine learning, the method ensures that said user receives timely and accurate notifications about fake information, thus fostering a more reliable and truthful digital information landscape. Said method represents a significant advancement in the field of digital information authenticity, providing a robust solution to a growing concern in the era of digital communication and media.
[00055] Referring to the preceding embodiment, the method 200 addresses the growing concern over the spread of disinformation and misinformation online, which can have significant societal, personal, and political impacts. Said method encompasses the development of advanced algorithms that can detect, analyze, and notify the user about potentially false or misleading content as they interact with said content in real-time on various digital platforms.
[00056] Referring to the preceding embodiment, said method 200 includes the integration of natural language processing, machine learning, pattern recognition, and user interface design to create a robust system capable of enhancing the reliability of information consumption in an increasingly digital world. Said method can be applied in the domains of online media, digital communication, and internet safety, offering tools for said user and platforms alike to combat the challenges posed by the spread of fake information online.
[00057] FIG. 3 illustrates a working flow of real-time disinformation alert, in accordance with the embodiments of the present disclosure.
[00058] Referring to FIG. 3, a method for implementing a real-time disinformation alert system within browser and social media platforms for content evaluation is depicted. The process commences when a user encounters content on a browser or social media platform. Subsequently, content verification is initiated to check for disinformation. In the event that said content is determined to be potentially disinformation, an alert for the user is generated, herein referred to as a disinformation flag. Following the generation of said alert, the alert is displayed to the user with an accompanying explanation.
[00059] The method further queries whether the user wishes to continue with the engagement of the content. In the case of the user deciding not to continue, a determination is made regarding whether the user opts to review the content. If the user declines to review, the alert process is terminated. Conversely, if the user elects to review, alternative sources and fact-check information are provided to the user, culminating in the conclusion of the decision-making process. This method ensures that users are informed of potential disinformation in real-time, enabling them to make educated decisions regarding their consumption of content on digital platforms.
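The decision flow of FIG. 3 may be sketched as a single function returning the sequence of actions taken for one piece of content. The boolean inputs stand in for the verification result and the user's choices; all action names are illustrative assumptions.

```python
# Illustrative sketch of the FIG. 3 decision flow; names are assumptions.
def disinformation_flow(is_disinformation, user_continues, user_reviews):
    """Return the ordered list of actions taken for one piece of content."""
    actions = ["verify_content"]
    if not is_disinformation:
        return actions + ["no_alert"]
    actions += ["raise_disinformation_flag", "display_alert_with_explanation"]
    if user_continues:
        return actions + ["user_continues_engagement"]
    if user_reviews:
        return actions + ["provide_alternative_sources_and_fact_checks"]
    return actions + ["terminate_alert_process"]
```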
[00060] Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
[00061] Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
[00062] The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
[00063] Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
[00064] While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
Claims
I/We Claim:
1. A system for providing a real-time fake information alert within a browser or a social media platform, said system comprising:
an information scan module configured to scan information accessed by a user on the browser or the social media platform;
a fake information detection module configured to identify fake information by matching the scanned information against prestored criteria;
an alert generation module configured to generate the real-time alert when the fake information is detected; and
a user interface module configured to display said real-time alert on said browser or the social media platform.
2. The system of claim 1, wherein said information scan module includes advanced natural language processing capabilities to analyze textual content within said browser or social media platform for nuances and context that may indicate fake information.
3. The system of claim 1, wherein said fake information detection module employs machine learning algorithms that are continuously updated with new patterns and criteria to enhance the accuracy of fake information identification.
4. The system of claim 1, wherein said alert generation module is further configured to categorize the level of severity of detected fake information and to modify the alert accordingly.
5. The system of claim 1, wherein said user interface module includes customizable settings allowing the user to adjust the sensitivity and specificity of the real-time alert.
6. The system of claim 1, wherein said information scan module is further configured to analyze multimedia content, including images and videos, using image recognition algorithms for potential fake information.
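The four modules recited in claim 1 can be read as a simple scan-detect-alert-display pipeline. The sketch below is a minimal, illustrative Python rendering of that pipeline under stated assumptions: the criteria patterns, severity labels, and function names are hypothetical choices for illustration and do not appear in the specification.

```python
import re

# Hypothetical prestored criteria (claim 1): patterns whose presence flags
# potential fake information, each paired with an assumed severity label
# (severity categorization per claim 4). Real criteria would be far richer.
PRESTORED_CRITERIA = [
    (re.compile(r"miracle cure", re.I), "high"),
    (re.compile(r"100% guaranteed", re.I), "medium"),
]

def scan_and_detect(content: str):
    """Information scan + detection modules: match content against criteria."""
    for pattern, severity in PRESTORED_CRITERIA:
        if pattern.search(content):
            return True, severity
    return False, None

def generate_alert(content: str):
    """Alert generation module: produce an alert string, or None if clean."""
    is_fake, severity = scan_and_detect(content)
    if is_fake:
        return f"ALERT ({severity}): content may contain fake information"
    return None
```

In practice the display step of claim 1 would render the returned string inside the browser or platform interface; here it is left as a plain return value.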
7. A method for providing a real-time fake information alert within a browser or a social media platform, said method comprising:
scanning information accessed by a user on the browser or the social media platform;
identifying fake information by matching the scanned information against prestored criteria;
generating the real-time alert when the fake information is detected; and
displaying said real-time alert on said browser or the social media platform.
8. The method of claim 7, wherein the step of scanning information includes employing natural language processing techniques to analyze the textual content accessed by a user on the browser or the social media platform for indicators of fake information.
9. The method of claim 7, wherein the step of identifying fake information includes the use of machine learning algorithms, wherein said algorithms are continually trained on updated patterns and criteria to enhance detection accuracy.
10. The method of claim 7, wherein the step of generating the real-time alert includes categorizing the severity of detected disinformation and tailoring the alert's presentation based on the level of severity.
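Claims 4 and 10 recite categorizing the severity of detected fake information and tailoring the alert's presentation accordingly, but leave both the severity scale and the presentation unspecified. A minimal sketch, assuming a hypothetical three-level scale and illustrative presentation attributes:

```python
# Hypothetical severity levels and presentation styles; the claims recite
# severity categorization and tailored presentation without defining either.
ALERT_STYLES = {
    "high": {"color": "red", "blocking": True},       # e.g. a modal overlay
    "medium": {"color": "orange", "blocking": False},  # e.g. an inline banner
    "low": {"color": "yellow", "blocking": False},     # e.g. a subtle badge
}

def tailor_alert(severity: str) -> dict:
    """Build an alert whose presentation depends on the detected severity."""
    style = ALERT_STYLES.get(severity, ALERT_STYLES["low"])
    return {
        "message": f"Potential fake information ({severity} severity)",
        **style,
    }
```

A browser extension or platform UI layer would consume the returned dictionary to decide whether to block interaction or merely annotate the content.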
| # | Name | Date |
|---|---|---|
| 1 | 202421001768-REQUEST FOR EXAMINATION (FORM-18) [10-01-2024(online)].pdf | 2024-01-10 |
| 2 | 202421001768-REQUEST FOR EARLY PUBLICATION(FORM-9) [10-01-2024(online)].pdf | 2024-01-10 |
| 3 | 202421001768-POWER OF AUTHORITY [10-01-2024(online)].pdf | 2024-01-10 |
| 4 | 202421001768-OTHERS [10-01-2024(online)].pdf | 2024-01-10 |
| 5 | 202421001768-FORM-9 [10-01-2024(online)].pdf | 2024-01-10 |
| 6 | 202421001768-FORM FOR SMALL ENTITY(FORM-28) [10-01-2024(online)].pdf | 2024-01-10 |
| 7 | 202421001768-FORM 18 [10-01-2024(online)].pdf | 2024-01-10 |
| 8 | 202421001768-FORM 1 [10-01-2024(online)].pdf | 2024-01-10 |
| 9 | 202421001768-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-01-2024(online)].pdf | 2024-01-10 |
| 10 | 202421001768-EDUCATIONAL INSTITUTION(S) [10-01-2024(online)].pdf | 2024-01-10 |
| 11 | 202421001768-DRAWINGS [10-01-2024(online)].pdf | 2024-01-10 |
| 12 | 202421001768-DECLARATION OF INVENTORSHIP (FORM 5) [10-01-2024(online)].pdf | 2024-01-10 |
| 13 | 202421001768-COMPLETE SPECIFICATION [10-01-2024(online)].pdf | 2024-01-10 |
| 14 | Abstact.jpg | 2024-02-13 |
| 15 | 202421001768-RELEVANT DOCUMENTS [01-10-2024(online)].pdf | 2024-10-01 |
| 16 | 202421001768-POA [01-10-2024(online)].pdf | 2024-10-01 |
| 17 | 202421001768-FORM 13 [01-10-2024(online)].pdf | 2024-10-01 |
| 18 | 202421001768-FER.pdf | 2025-05-19 |
| 19 | 202421001768-FORM 3 [02-07-2025(online)].pdf | 2025-07-02 |
| 20 | 202421001768-FORM-8 [25-07-2025(online)].pdf | 2025-07-25 |
| 21 | 202421001768-FER_SER_REPLY [25-07-2025(online)].pdf | 2025-07-25 |
| 22 | 202421001768-DRAWING [25-07-2025(online)].pdf | 2025-07-25 |
| 23 | 202421001768-CORRESPONDENCE [25-07-2025(online)].pdf | 2025-07-25 |
| 24 | 202421001768-COMPLETE SPECIFICATION [25-07-2025(online)].pdf | 2025-07-25 |
| 25 | 202421001768-CLAIMS [25-07-2025(online)].pdf | 2025-07-25 |
| 1 | 202421001768E_03-04-2024.pdf | |