Abstract: Disclosed is a method for enhancing security in a virtual environment, which includes initializing a digital twin of an avatar, an object, or an environment within a metaverse. The method involves collecting real-time data corresponding to the avatar, object, or environment and updating the digital twin with this real-time data. A deepfake detection algorithm is initialized and used to continuously monitor the digital twin. Upon detecting a discrepancy between the digital twin and the real-time data indicative of deepfake content, an alert is generated. Subsequent to generating the alert, preventive measures are implemented in response to the alert to ensure the integrity of the virtual environment.
Description:
DIGITAL TWIN-ENABLED DEEPFAKE DETECTION FOR SECURING THE METAVERSE
Field of the Invention
Generally, the present disclosure relates to digital security technologies. Particularly, the present disclosure relates to enhancing security within a virtual environment by detecting and responding to deepfake content.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
Virtual environments, particularly those referred to as the metaverse, have seen rapid advancement, leading to an increase in the digital representation of elements such as avatars, objects, and environmental settings. The creation of digital twins, which are virtual replicas of these elements, has become a common practice, enhancing user interaction and experience within such environments. These digital twins are often updated with real-time data to maintain accuracy and improve the immersive experience.
With the rise in sophistication of cyber threats, security within virtual environments has become paramount. Among these threats, deepfake technology, which can manipulate digital representations to create deceptive and often undetectable alterations, poses a significant risk. Deepfakes can compromise the integrity of digital twins by creating discrepancies between the virtual representation and its real-world counterpart.
Traditionally, security measures have relied on static defenses that fail to adapt to the dynamic nature of deepfakes. The lack of continuous monitoring systems capable of detecting subtle alterations has been a notable drawback in maintaining security. Additionally, the reactive nature of most security protocols results in delayed responses to threats, which is inadequate in the fast-paced virtual environments where deepfakes can spread rapidly.
Consequently, there has been an urgent need for a method that can not only detect deepfakes proactively but also initiate immediate preventive measures to mitigate potential threats. Such a method would ideally employ continuous monitoring techniques with advanced algorithms capable of identifying discrepancies that indicate the presence of deepfake content. Upon detection, the method would generate alerts and implement immediate countermeasures to maintain the security and authenticity of the virtual environment. This need underscores the importance of developing robust, real-time security solutions that can keep pace with the evolving threats within the metaverse and other virtual spaces.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
In an aspect, the present disclosure aims to provide a method for enhancing security within a virtual environment. The method encompasses initializing a digital twin in a metaverse, which includes an avatar, an object, or an environment. The method involves the collection and updating of real-time data pertaining to the digital twin, including user movements, interactions, and expressions. A deepfake detection algorithm is employed to continuously monitor the digital twin, analyzing behavior and appearance to identify discrepancies indicative of deepfake content. Upon detection, an alert is generated to notify users or moderators within the metaverse of the potential threat.
Further, the method includes implementing adaptive learning mechanisms. These mechanisms refine the deepfake detection algorithm by analyzing past detected deepfake threats and user feedback, improving future detection capabilities. Preventive measures are then executed in response to the alerts, which may include suspending the affected digital twin or restricting interactions within the metaverse.
Moreover, the present disclosure provides a system for enhancing security in a virtual environment. This system comprises a digital twin initialization unit, a data collection module, a digital twin update module, and a deepfake detection processing unit. It also includes a monitoring module, an alert generation module, and a preventive measure execution module. The system is designed to create and update digital twins with real-time data, employ deepfake detection, generate alerts, and enforce necessary actions to counter potential security threats.
Brief Description of the Drawings
The features and advantages of the present disclosure will be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a method (100) for enhancing security in a virtual environment, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a system (200) configured for enhancing security in a virtual environment, in accordance with the embodiments of the present disclosure.
FIG. 3 illustrates a comprehensive workflow sequence diagram, in accordance with the embodiments of the present disclosure.
Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a method (100) for enhancing security in a virtual environment, in accordance with the embodiments of the present disclosure. The method (100) for enhancing security in a virtual environment relates to a process designed to improve the safety and integrity of digital spaces, particularly in contexts such as the metaverse, by identifying and mitigating risks associated with digital impersonation or manipulation techniques like deepfakes. The method (100) comprises several steps, each crucial for the operation of the security-enhancing mechanism. In step (102), the method involves initializing a digital twin for elements such as an avatar, an object, or an environment within the metaverse. Initializing a digital twin pertains to the creation of a digital replica of physical entities within a virtual environment. Step (102) is fundamental for establishing a baseline for real-time monitoring and analysis. Following the initialization, in step (104), real-time data corresponding to the digital twin's physical counterpart is collected. This involves gathering information that reflects the current state and activities of the avatar, object, or environment, enabling the method to monitor changes or anomalies that may indicate security threats.
The collected real-time data are then utilized to update the digital twin in step (106), ensuring that the digital replica accurately represents its physical counterpart's latest status. Subsequently in step (108), a deepfake detection algorithm is initialized. This algorithm is specifically designed to identify manipulations or falsifications in digital content, distinguishing genuine alterations from those created to deceive or cause harm. In step (110), the continuous monitoring of the digital twin using this deepfake detection algorithm forms the core of the method's security measures, allowing for the early detection of potential threats. Upon identifying a discrepancy between the digital twin and the real-time data that suggests the presence of deepfake content, an alert is generated in step (112). This alert signifies the detection of a potential security breach, prompting immediate attention. In step (114), the method involves implementing preventive measures in response to the alert, which may include steps to neutralize the threat, safeguard the integrity of the virtual environment, and prevent similar incidents in the future. Optionally, the method may incorporate additional steps or features, such as user verification processes, to further enhance security. Working examples include the application of this method in online gaming environments, where the integrity of avatars and objects is critical for fair play and user experience.
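By way of a non-limiting illustration, the flow of steps (102) through (114) might be sketched as follows. Every class, function, and threshold in this sketch is hypothetical and serves only to illuminate the ordering of the steps; it is not the claimed implementation, and a practical deepfake detector would employ a trained model rather than the toy field-matching check shown here.

```python
class DigitalTwin:
    """Minimal digital twin holding the latest observed state of its counterpart."""

    def __init__(self, entity_id, baseline_state):
        self.entity_id = entity_id
        self.state = dict(baseline_state)  # step (102): initialize from a baseline

    def update(self, real_time_data):
        self.state.update(real_time_data)  # step (106): sync with real-time data


def detect_discrepancy(twin_state, real_time_data, threshold=0.8):
    """Step (110): toy check -- flag a sample when too few of its observed
    fields match the twin's recorded state (hypothetical stand-in for a
    real deepfake detection algorithm)."""
    keys = set(twin_state) & set(real_time_data)
    if not keys:
        return False
    matches = sum(twin_state[k] == real_time_data[k] for k in keys)
    return (matches / len(keys)) < threshold


def monitor(twin, data_stream, on_alert):
    """Steps (104)-(114): collect samples, monitor, alert, and react."""
    for sample in data_stream:                         # step (104): collect data
        if detect_discrepancy(twin.state, sample):     # steps (108)-(110)
            on_alert(twin.entity_id, sample)           # step (112): generate alert
            break  # step (114): preventive measure -- here, stop processing
        twin.update(sample)                            # step (106): keep twin current
```

In this sketch the preventive measure of step (114) is reduced to halting the monitoring loop; the disclosure contemplates richer responses such as suspending the avatar or restricting its interactions.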
In an embodiment, the method (100) incorporates collecting real-time data that includes movements, interactions, and expressions of a user within the metaverse. The detailed real-time data provides a dynamic and responsive aspect to the security protocol. By continuously gathering this information, the system ensures that the digital twin remains an accurate reflection of the user's activities within the virtual environment. This real-time data serves as a cornerstone for monitoring activities and potential security breaches, with movements, interactions, and expressions offering a comprehensive dataset for detecting irregularities that may indicate the presence of deepfake content.
In another embodiment, the method (100) further involves updating the digital twin by synchronizing facial expressions and body movements of the avatar with the user in the metaverse. This synchronization process is meticulous, taking into account the nuances of human expression and motion to ensure the digital twin operates with lifelike accuracy. The refinement of the digital twin through such synchronization allows for a heightened level of surveillance and the early detection of anomalies that might suggest manipulation or the creation of deepfake content, thereby bolstering the security framework within the metaverse.
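The synchronization described in this embodiment can be illustrated, purely hypothetically, as copying a user's captured expression and motion into the avatar's twin state; the field names below are assumptions chosen for the sketch and do not appear in the disclosure:

```python
def synchronize_avatar(twin_state, user_capture):
    """Hypothetical sync step: mirror the user's captured facial expression
    and body movement into the avatar's digital twin state, leaving all
    other twin fields untouched."""
    twin_state = dict(twin_state)  # avoid mutating the caller's state
    twin_state["facial_expression"] = user_capture.get("facial_expression")
    twin_state["body_movement"] = user_capture.get("body_movement")
    return twin_state
```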
In an additional embodiment, the deepfake detection algorithm is utilized to analyze behavior and appearance. By comparing real-time data against the established baseline of the digital twin, the algorithm diligently identifies discrepancies. The efficacy of the deepfake detection algorithm hinges on its ability to discern subtle inconsistencies in behavior or appearance that may elude casual observation. This level of analysis is crucial for preempting malicious activities and protecting the integrity of the virtual environment.
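One minimal, non-limiting way to model such a baseline comparison is to treat behavior and appearance as numeric feature vectors and flag samples that drift beyond a tolerance. The distance measure and tolerance below are assumptions for illustration; a production detector would use learned embeddings and a trained decision rule:

```python
import math


def feature_distance(baseline, observed):
    """Euclidean distance between equally sized feature vectors
    (e.g., hypothetical behavioral or appearance embeddings)."""
    return math.sqrt(sum((b - o) ** 2 for b, o in zip(baseline, observed)))


def is_suspicious(baseline_features, observed_features, tolerance=0.5):
    """Flag a sample whose features drift beyond the tolerance from the
    digital twin's established baseline."""
    return feature_distance(baseline_features, observed_features) > tolerance
```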
In an embodiment, the method (100) includes an alert generation process upon the detection of potential deepfake threats. The alerts serve a dual purpose: they act as an immediate warning to users and moderators within the metaverse, and they initiate a protocol for the swift enactment of preventive measures. The prompt notification system is essential in a landscape where timely response is critical to security.
In yet another embodiment, the method (100) integrates adaptive learning mechanisms. These mechanisms analyze patterns and feedback from detected deepfake threats and user interactions to refine the detection algorithms continuously. Such adaptive learning underlines the evolving nature of the security system, which learns from each incident and feedback loop to enhance its predictive and detective capabilities.
In another embodiment, the method (100) includes an enhancement to the adaptive learning mechanisms. These mechanisms are fine-tuned based on the ongoing analysis of detected deepfake threats and user feedback, ensuring that the deepfake detection algorithm remains at the forefront of technological advancements. This proactive adaptation is vital for maintaining an edge over increasingly sophisticated deepfake techniques.
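As a toy, non-limiting sketch of the adaptive learning mechanisms described above, the example below nudges a detection threshold in response to feedback on past alerts: confirmed threats make the detector stricter, while false positives relax it. The class, its parameters, and the update rule are all hypothetical simplifications of what the disclosure describes:

```python
class AdaptiveDetector:
    """Toy adaptive mechanism: the detection threshold is nudged by
    feedback on past alerts (confirmed threat vs. false positive)."""

    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold
        self.step = step

    def record_feedback(self, was_real_threat):
        """Refine the detector from user/moderator feedback on one alert."""
        if was_real_threat:
            # Missed threats are costly: lower the bar for flagging.
            self.threshold = max(0.0, self.threshold - self.step)
        else:
            # False positive: raise the bar to reduce nuisance alerts.
            self.threshold = min(1.0, self.threshold + self.step)

    def is_deepfake(self, discrepancy_score):
        return discrepancy_score > self.threshold
```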
The system (200) for enhancing security in a virtual environment engages a series of modules designed to create, update, and safeguard the integrity of digital entities within the metaverse.
The term "digital twin initialization unit" as used throughout the present disclosure relates to the system's component configured to create digital representations of avatars, objects, or environments within a metaverse. The digital twin initialization unit serves as the foundational element in establishing a secure virtual environment.
The term "data collection module" as used throughout the present disclosure relates to the system's component tasked with gathering real-time data from users within the metaverse. This module collects data regarding behavioral, interactive, and expressive attributes, which are crucial for maintaining the accuracy and integrity of the digital twins.
The term "digital twin update module" as used throughout the present disclosure relates to the system's component responsible for integrating and synchronizing the collected real-time data with the corresponding digital twin. This ensures that the virtual representations remain true to their real-world counterparts.
The term "deepfake detection processing unit" as used throughout the present disclosure relates to the system's hardware component, including one or more processors and memory, which stores processor-executable instructions for executing a deepfake detection algorithm. This unit is the crux of the security system, enabling the identification and analysis of potential deepfake content.
The term "monitoring module" as used throughout the present disclosure relates to the component coupled with the deepfake detection processing unit, configured for the continuous surveillance of digital twins. This module's role is pivotal in detecting signs of manipulation by comparing the real-time data with the stored digital twin data.
The term "alert generation module" as used throughout the present disclosure relates to the system's component designed to notify users or moderators within the metaverse of detected discrepancies that could indicate the presence of deepfake content. This module acts as an early warning system to prompt immediate attention and action.
The term "preventive measure execution module" as used throughout the present disclosure relates to the system's component that enforces actions in response to generated alerts. Such actions may include temporarily suspending the affected avatar or restricting interactions within the metaverse, thereby preventing the spread or impact of deepfake content.
FIG. 2 illustrates a system (200) configured for enhancing security in a virtual environment, in accordance with the embodiments of the present disclosure. The system (200) configured for enhancing security in a virtual environment comprises a suite of specialized modules. These modules work in concert to establish a robust and responsive security apparatus within the metaverse. The digital twin initialization unit (202), data collection module (204), and digital twin update module (206) form the foundation for real-time monitoring and updating. The deepfake detection processing unit (208) and monitoring module (210) provide the necessary surveillance and detection capabilities. Alert generation (212) and preventive measure execution (214) modules complete the system, offering a comprehensive solution for maintaining the security and authenticity of virtual interactions.
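The cooperation of the numbered modules (202) through (214) can be sketched, in a purely illustrative fashion, by modeling each module as a plain callable wired into one detection cycle. The class name, the callable interfaces, and the return values are assumptions made for this sketch, not the claimed system architecture:

```python
class SecuritySystem:
    """Hypothetical wiring of modules (202)-(214); each module is modeled
    as a plain callable for illustration only."""

    def __init__(self, init_twin, collect, update, detect, alert, prevent):
        self.init_twin = init_twin  # (202) digital twin initialization unit
        self.collect = collect      # (204) data collection module
        self.update = update        # (206) digital twin update module
        self.detect = detect        # (208)+(210) detection and monitoring
        self.alert = alert          # (212) alert generation module
        self.prevent = prevent      # (214) preventive measure execution

    def run_cycle(self, entity_id):
        """One monitoring cycle over a single avatar, object, or environment."""
        twin = self.init_twin(entity_id)
        data = self.collect(entity_id)
        twin = self.update(twin, data)
        if self.detect(twin, data):
            self.alert(entity_id)
            self.prevent(entity_id)
            return "blocked"
        return "ok"
```

A usage sketch would supply stub callables for each module, for instance a `detect` stub that flags a particular pose, and verify that an alert is followed by a preventive action.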
FIG. 3 illustrates a comprehensive workflow sequence diagram, in accordance with the embodiments of the present disclosure. The workflow sequence diagram outlines a series of operations executed in a specific sequence to manage and counter deepfake incidents. The process commences with an 'Initialization' phase where the system is prepared for operation. Subsequent to this is 'real-time data collection', which involves the aggregation of data in real-time for analysis. This data is then utilized to update a 'digital twin', which serves as a dynamic virtual model of the authentic subject that the system protects. Upon updating the digital twin, the system progresses to 'deepfake detection initialization', laying the groundwork for the identification of deepfakes. This is followed by 'continuous monitoring', which is the persistent surveillance of the subject to detect any deviations indicative of deepfake activity. In parallel, a 'contextual offset' is maintained to sync the real-time data with the digital twin, ensuring the system's accuracy in representing the subject's current state. When an anomaly suggestive of a deepfake is identified, the system transitions to 'deepfake detection', wherein the suspected impersonation is verified against the digital twin. If a deepfake is confirmed, 'alert generation' is activated, producing notifications to alert relevant stakeholders about the breach. The workflow extends to 'Preventive Measure Initialization & Implementation', whereby actions are taken to mitigate the impact of the detected deepfake. The system also encompasses 'adaptive learning initialization', a phase dedicated to adapting the detection mechanisms based on the latest deepfake strategies, thus enhancing the system's efficacy. As part of the user experience, 'user feedback collection' is incorporated, enabling the system to gather input from users to refine and optimize performance.
Finally, the 'cycle completion' phase marks the end of the workflow, with all processes coalescing to ensure the system is primed for subsequent cycles of detection and prevention.
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein, relates to random access memory, read-only memory, and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
Claims
I/We claim:
1. A method (100) for enhancing security in a virtual environment, the method (100) comprising:
initializing a digital twin of at least one of an avatar, an object, and an environment within a metaverse;
collecting real-time data corresponding to the at least one of an avatar, an object, and an environment;
updating the digital twin with the real-time data;
initializing a deepfake detection algorithm;
continuously monitoring the digital twin using the deepfake detection algorithm;
generating an alert based on a detection of a discrepancy between the digital twin and the real-time data indicative of deepfake content; and
implementing preventive measures in response to the alert.
2. The method (100) of claim 1, wherein the real-time data includes at least one of movements, interactions, and expressions of a user within the metaverse.
3. The method (100) of claim 1, wherein updating the digital twin comprises synchronizing facial expressions and body movements of the avatar with the user in the metaverse.
4. The method (100) of any preceding claim, wherein the deepfake detection algorithm analyzes at least one of behavior and appearance of the digital twin to identify the discrepancies.
5. The method (100) of any preceding claim, wherein the generated alert notifies at least one of users and moderators within the metaverse of the potential deepfake threat.
6. The method (100) of any preceding claim, further comprising implementing adaptive learning mechanisms that analyze at least one of detected deepfake threats and user feedback to improve future detection algorithms.
7. The method (100) of claim 6, wherein the adaptive learning mechanisms refine the deepfake detection algorithm based on the analysis of the detected deepfake threats and user feedback.
8. A system (200) for enhancing security in a virtual environment, the system (200) comprising:
a digital twin initialization unit (202) configured to create a digital representation of at least one of an avatar, an object, and an environment within a metaverse;
a data collection module (204) configured to collect real-time behavioral, interactive, and expressive data from users within the metaverse;
a digital twin update module (206) configured to integrate and synchronize the collected real-time data with the corresponding digital twin;
a deepfake detection processing unit (208) comprising one or more processors and memory storing processor-executable instructions for executing a deepfake detection algorithm;
a monitoring module (210) coupled with the deepfake detection processing unit (208), configured to continuously monitor the digital twin for signs of manipulation by comparing real-time data with the digital twin data;
an alert generation module (212) configured to produce notifications for users or moderators within the metaverse upon the detection of discrepancies indicative of deepfake content; and
a preventive measure execution module (214) configured to enforce actions in response to alerts, such as temporarily suspending the affected avatar or restricting interactions within the metaverse.