Abstract: A system (100) for gesture-controlled browser opening and a method thereof. The system (100) includes a controller (108). The controller (108) is configured to process each frame to obtain RGB images. The controller (108) is further configured to detect a hand gesture from the RGB images by creating landmarks. The controller (108) is further configured to analyse relationships between the landmarks. The controller (108) is further configured to recognize gestures based on the analysed relationships between the landmarks. The controller (108) is further configured to perform an action associated with the recognized gesture. FIG. 1
Description: SYSTEM FOR GESTURE-CONTROLLED BROWSER OPENING AND METHOD THEREOF
BACKGROUND
Technical Field
[0001] The embodiments herein generally relate to computer science and, more particularly, to a system for gesture-controlled browser opening and a method thereof.
Description of the Related Art
[0002] Traditionally, manually opening browser tabs and controlling the browser through a mouse or keypad is inconvenient, as physically challenged persons cannot use a mouse or keypad.
[0003] Accordingly, there remains a need for a system for gesture-controlled browser opening and a method thereof.
SUMMARY
[0004] In view of the foregoing, embodiments herein provide a system for gesture-controlled browser opening. The system includes a controller. The controller is configured to process each frame to obtain RGB images. The controller is further configured to detect a hand gesture from the RGB images by creating landmarks. The controller is further configured to analyse relationships between the landmarks. The controller is further configured to recognize gestures based on the analysed relationships between the landmarks. The controller is further configured to perform an action associated with the recognized gesture.
[0005] In some embodiments herein, the controller is further configured to filter noisy data.
[0006] In some embodiments herein, the controller is further configured to determine hand orientation using relative positions and angles between landmarks.
[0007] In some embodiments herein, the controller is further configured to recognise custom gestures by calculating the distances between multiple fingers using the hand landmarks.
[0008] In some embodiments herein, the action includes opening websites, zooming in/out, or controlling applications.
[0009] In an aspect, embodiments herein provide a method for providing a system for gesture-controlled browser opening. The method includes processing, by the system, each frame to obtain RGB images. The method further includes detecting, by the system, a hand gesture from the RGB images by creating landmarks. The method further includes analysing, by the system, relationships between the landmarks. The method further includes recognizing, by the system, gestures based on the analysed relationships between each landmark. The method further includes performing, by the system, an action associated with the recognized gesture.
[00010] In some embodiments herein, the controller is further configured to filter noisy data.
[00011] In some embodiments herein, the controller is further configured to determine hand orientation using relative positions and angles between landmarks.
[00012] In some embodiments herein, the controller is further configured to recognise custom gestures by calculating the distances between multiple fingers using the hand landmarks.
[00013] In some embodiments herein, the action includes opening websites, zooming in/out, or controlling applications.
[00014] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[00015] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[00016] FIG. 1 illustrates hardware components of a system for gesture-controlled browser opening, according to some embodiments herein; and
[00017] FIG. 2 illustrates a flow chart of a method for providing a system for gesture-controlled browser opening, according to some embodiments herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[00018] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[00019] As mentioned, there remains a need for a system for gesture-controlled browser opening and a method thereof. Referring now to the drawings, and more particularly to FIGS. 1 through 2, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[00020] FIG. 1 illustrates hardware components of a system 100 for gesture-controlled browser opening, according to some embodiments herein. The system 100 includes a camera 102, a memory 104, a processor 106, a controller 108, and a communicator 110. The controller 108 is configured to process each frame to obtain RGB images. The controller 108 is further configured to detect a hand gesture from the RGB images by creating landmarks. The controller 108 is further configured to analyse relationships between the landmarks. The controller 108 is further configured to recognize gestures based on the analysed relationships between the landmarks. The controller 108 is further configured to perform an action associated with the recognized gesture.
[00021] In some embodiments herein, the controller 108 is further configured to filter noisy data. The controller 108 is further configured to determine hand orientation using relative positions and angles between landmarks. The controller 108 is further configured to recognise custom gestures by calculating the distances between multiple fingers using the hand landmarks. The action includes opening websites, zooming in/out, or controlling applications.
[00022] In some embodiments herein, the memory 104 stores gesture-control Python programming. The gesture-control programming involves capturing video: OpenCV is used to access the webcam and process real-time video frames. The memory 104 includes Mediapipe's pre-trained hand-tracking model to detect hand landmarks. The memory 104 includes gesture recognition logic that analyzes spatial relationships (e.g., distances between fingertips) to identify gestures. The memory 104 includes trigger actions that link gestures to actions like opening websites using the webbrowser module. The memory 104 includes a GUI display, such as Kivy, to create a user interface and display the processed video feed with overlays.
[00023] To start, a programming environment is needed with libraries that can detect hands in images or videos. Mediapipe is one such library that comes pre-trained and ready to use. OpenCV is also needed to handle the video feed. Start the webcam using OpenCV (cv2.VideoCapture(0)) and capture one frame at a time, like taking pictures very quickly.
[00024] Before Mediapipe can detect hands, it needs a clear image: flip the frame horizontally for a "mirror view" (for example, when a person raises their right hand, the image appears correctly on screen), then convert the frame from BGR (used by OpenCV) to RGB (used by Mediapipe). Now Mediapipe gets to work: it looks at the image and detects hand landmarks. The landmarks are points such as fingertips, knuckles, and the wrist.
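For illustration only, the mirroring and BGR-to-RGB conversion can be sketched in plain Python. In the actual pipeline these steps would be cv2.flip(frame, 1) and cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) on NumPy arrays; here a frame is modelled as nested lists so the channel reordering is explicit:

```python
def preprocess_frame(frame):
    # frame: rows of pixels, each pixel a [B, G, R] triple as OpenCV delivers.
    # Reversing each row mirrors the image horizontally ("mirror view");
    # reversing each pixel reorders the channels from BGR to RGB.
    return [[list(reversed(px)) for px in reversed(row)] for row in frame]
```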
[00025] Each point has three coordinates: x and y (where it is on the screen) and z (how close it is to the camera). For example, the tip of a person's thumb might be at (x=0.5, y=0.3, z=-0.02); a negative z value means the point is closer to the camera. Gestures are recognized from these coordinates: the program looks at how the landmarks are positioned relative to each other.
[00026] The controller measures the distance between two points (e.g., the thumb tip and index tip); if it is small, the user might be making a "pinching" gesture. The positions of three points (e.g., wrist, index base, and index tip) are used to calculate angles, and the angles help detect whether a finger is bent or extended.
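These two measurements can be sketched as small helpers; the function names `distance` and `angle_at` are illustrative, not taken from the original code:

```python
import math

def distance(p1, p2):
    # Euclidean distance between two landmarks given as (x, y, z) tuples.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def angle_at(b, a, c):
    # Angle in degrees at point a, formed by the rays a->b and a->c.
    # E.g. a = index base, b = wrist, c = index tip: near 180 means extended.
    v1 = tuple(x - y for x, y in zip(b, a))
    v2 = tuple(x - y for x, y in zip(c, a))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp for floating-point safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```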
[00027] Depth (z-axis) is tracked by measuring how close a hand or finger is to the camera; the depth improves recognition of gestures like "pushing" or "pointing." Everyone's hands are different, so the landmarks are normalized by converting raw coordinates into a consistent scale, for instance by scaling all points relative to the wrist position. Camera perspective is also accounted for: if the hand moves closer to or farther from the camera, the z values are used to adjust calculations.
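The normalization step might be sketched as below, assuming the wrist is landmark 0 and scaling by the wrist-to-middle-finger-base distance (landmark 9); the choice of that scaling reference is an assumption for illustration:

```python
import math

def normalize_landmarks(landmarks):
    # landmarks: list of 21 (x, y, z) tuples.
    # Translate so the wrist (landmark 0) is the origin, then divide by the
    # wrist-to-middle-finger-base distance (landmark 9) so hand size cancels.
    wrist = landmarks[0]
    shifted = [tuple(c - w for c, w in zip(p, wrist)) for p in landmarks]
    ref = math.sqrt(sum(c * c for c in shifted[9])) or 1.0  # avoid div by 0
    return [tuple(c / ref for c in p) for p in shifted]
```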
[00028] Multi-Hand Support: detect both hands separately and track their gestures independently. Based on the landmark positions, define gesture rules (e.g., if the thumb and index finger are close, it is a "click" gesture), use thresholds (like a maximum distance or angle) to fine-tune recognition, and test gestures multiple times to refine the rules. To verify that the program understands correctly, draw lines connecting the landmarks, like a "skeleton" of the hand.
[00029] Add labels (e.g., "thumb tip") to each point to see what the program is tracking. Filter Noisy Data: if the landmarks jitter or jump, apply smoothing techniques like a moving average. Recognize sequences of gestures for more complex interactions. Use hand speed, orientation, or both hands together to refine gesture detection. Try gestures in different lighting conditions and angles.
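The moving-average smoothing mentioned above could be implemented roughly as follows; the 5-frame window is an illustrative choice:

```python
from collections import deque

class LandmarkSmoother:
    """Moving-average filter over the last `window` frames to damp jitter."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # oldest frames drop out

    def smooth(self, landmarks):
        # landmarks: list of (x, y, z) tuples for the current frame.
        self.history.append(landmarks)
        n = len(self.history)
        # Average each coordinate of each landmark across buffered frames.
        return [tuple(sum(frame[i][k] for frame in self.history) / n
                      for k in range(3))
                for i in range(len(landmarks))]
```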
[00030] Test with hands of varying sizes, skin tones, and accessories. Refine rules or use machine learning models to adaptively learn gestures. By detecting 3D hand landmarks and considering their relationships, you can make gesture recognition much more reliable and versatile.
[00031] In some embodiments herein, the 21 3D hand landmarks are:
1. Wrist:
o Landmark 0: Base of the hand at the wrist.
2. Thumb:
o Landmark 1: Base of the thumb (carpometacarpal joint).
o Landmark 2: First thumb joint (metacarpophalangeal).
o Landmark 3: Second thumb joint (interphalangeal).
o Landmark 4: Tip of the thumb.
3. Index Finger:
o Landmark 5: Base of the index finger.
o Landmark 6: First index joint.
o Landmark 7: Second index joint.
o Landmark 8: Tip of the index finger.
4. Middle Finger:
o Landmark 9: Base of the middle finger.
o Landmark 10: First middle joint.
o Landmark 11: Second middle joint.
o Landmark 12: Tip of the middle finger.
5. Ring Finger:
o Landmark 13: Base of the ring finger.
o Landmark 14: First ring joint.
o Landmark 15: Second ring joint.
o Landmark 16: Tip of the ring finger.
6. Pinky Finger:
o Landmark 17: Base of the pinky finger.
o Landmark 18: First pinky joint.
o Landmark 19: Second pinky joint.
o Landmark 20: Tip of the pinky finger.
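Using the landmark indices above, simple rules such as a "thumbs up" check can be encoded directly. The sketch below assumes image coordinates where y grows downward, and the 0.02 tolerance is an illustrative threshold:

```python
WRIST, THUMB_TIP = 0, 4
FINGER_TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_BASES = {"index": 5, "middle": 9, "ring": 13, "pinky": 17}

def is_thumbs_up(lm):
    # lm: list of 21 (x, y, z) landmarks; y grows downward in image space.
    # Rule: thumb tip is above the wrist, and every other fingertip is
    # curled, i.e. not noticeably above its base joint.
    thumb_up = lm[THUMB_TIP][1] < lm[WRIST][1]
    curled = all(lm[FINGER_TIPS[f]][1] > lm[FINGER_BASES[f]][1] - 0.02
                 for f in FINGER_TIPS)
    return thumb_up and curled
```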
[00032] The landmarks are used to analyze hand movements and gestures in 3D space. Hand orientation is determined using the relative positions and angles between landmarks, including depth (z values). For example, a "thumbs up" gesture is recognized when the thumb tip (Landmark 4) is extended above the wrist (Landmark 0) while the other fingers are curled. ROI (Region of Interest): the system focuses on the detected hand's bounding box to reduce distractions.
[00033] Temporal smoothing eliminates jitter in landmark detection. The system separates the hand from the background using shape and motion, ignoring colors or textures; for example, in a cluttered room, the system focuses on hand motion and shape to ignore static objects or moving backgrounds. Custom gestures can be recognized by calculating the distances between multiple fingers using the hand landmarks. These distances help identify specific actions or gestures.
[00034] For example, the system works by calculating distances: computing the Euclidean distance between key landmarks (e.g., between fingertips such as the thumb and index finger). Distance thresholds are defined for different gestures; for example, if the thumb and index finger are close, it may represent a "pinch" gesture. If the distance between certain fingers is below the threshold, a specific action is triggered (like opening a webpage or performing an operation). For example, Pinch Gesture (Thumb and Index Finger): when the distance between the thumb tip (Landmark 4) and the index tip (Landmark 8) is below a set threshold, it indicates a "pinch."
[00035] The gesture could be used to trigger a zoom-in action or a click action, for example:
thumb_index_distance = calculate_distance(thumb_tip, index_tip)
if thumb_index_distance < 30: # Threshold
# Trigger pinch action (e.g., zoom-in)
[00036] Fist Gesture (All Fingers Curled): When the distance between each finger's tip and the base (e.g., the wrist or the base of the palm) is very small, it signifies a "fist." Action: trigger a "close" or "stop" action.
[00037] For example:
if thumb_index_distance < 20 and middle_ring_distance < 20:
# Trigger fist action (e.g., stop)
[00038] Open Hand Gesture (Fingers Spread): Large distances between the fingertips of all fingers (thumb to pinky) indicate an "open hand." Action: can trigger actions like "start" or "open" commands. For example:
if thumb_pinky_distance > 100: # Threshold for open hand
# Trigger open hand action (e.g., start)
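The three example gestures can be combined into a single rule-based classifier. The sketch below works in Mediapipe's normalized coordinate scale, and the 0.05 / 0.25 / 0.45 thresholds are illustrative values that would need tuning for a real camera setup:

```python
import math

def dist(p1, p2):
    # Euclidean distance between two (x, y, z) landmark tuples.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def classify(landmarks, pinch_thr=0.05, fist_thr=0.25, open_thr=0.45):
    # landmarks: 21 (x, y, z) tuples in normalized image coordinates.
    wrist, thumb_tip, index_tip = landmarks[0], landmarks[4], landmarks[8]
    tips = [landmarks[i] for i in (4, 8, 12, 16, 20)]  # all fingertips
    if dist(thumb_tip, index_tip) < pinch_thr:
        return "pinch"        # e.g. zoom-in or click
    if all(dist(tip, wrist) < fist_thr for tip in tips):
        return "fist"         # e.g. stop or close
    if dist(thumb_tip, landmarks[20]) > open_thr:
        return "open_hand"    # e.g. start or open
    return "none"
```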
[00039] The real-time feature of a gesture recognition system allows it to process and respond to hand movements instantly as they occur. The system captures video from a camera (e.g., webcam) and processes each frame as it's received. Using a hand-tracking model (like Mediapipe), the system detects hand landmarks on the fly.
[00040] The system calculates distances and angles between landmarks in real-time, identifying gestures like a pinch, fist, or open hand. Based on the recognized gesture, the system can trigger actions (e.g., open websites, zoom in/out, or control applications) within milliseconds.
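Triggering an action from a recognized gesture can be sketched with Python's standard webbrowser module. The gesture-to-URL table below is hypothetical, and the opener callable is injectable so the mapping can be exercised without actually launching a browser:

```python
import webbrowser

# Hypothetical gesture-to-URL table; gesture names and URLs are illustrative.
GESTURE_ACTIONS = {
    "thumb_index_pinch": "https://www.youtube.com",
    "thumb_middle_pinch": "https://www.gmail.com",
}

def perform_action(gesture, opener=webbrowser.open):
    # Look up the gesture and open its URL; returns the URL opened, or None
    # if the gesture has no associated action.
    url = GESTURE_ACTIONS.get(gesture)
    if url is not None:
        opener(url)
        return url
    return None
```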
[00041] A gesture cooldown system prevents rapid, repeated activation of gestures in a short time frame, which could lead to false triggers or unintended actions. It introduces a delay or "cooldown" period after a gesture is recognized, ensuring that the system only responds to gestures once the cooldown has passed.
[00042] Cooldown Timer: After recognizing a gesture, the system starts a timer for a predefined period (e.g., 1-2 seconds). During this period, the system ignores further occurrences of the same gesture, preventing multiple triggers. Once the cooldown period expires, the system is ready to recognize and respond to the gesture again. Without a cooldown, a single gesture (e.g., a "pinch") could be mistakenly detected multiple times in rapid succession.
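A cooldown timer along these lines might look as follows; the 1.5-second default and the injectable clock (which makes the behaviour testable) are illustrative choices:

```python
import time

class GestureCooldown:
    """Suppresses repeat triggers of the same gesture within `cooldown` seconds."""

    def __init__(self, cooldown=1.5, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock          # injectable for deterministic testing
        self.last_fired = {}        # gesture name -> last trigger time

    def ready(self, gesture):
        # Returns True (and records the trigger) if the cooldown has elapsed
        # for this gesture; otherwise returns False and ignores the gesture.
        now = self.clock()
        if now - self.last_fired.get(gesture, -self.cooldown) >= self.cooldown:
            self.last_fired[gesture] = now
            return True
        return False
```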
[00043] The cooldown ensures the system does not overwhelm users with too many actions when they perform quick or subtle gestures. The system essentially works on URLs; to make it precise and accurate, a link or URL is assigned to one of the finger gestures, and a clear finger movement opens the corresponding website precisely.
[00044] New gestures are trained by collecting labelled gesture data, preprocessing it, training a machine learning model, and integrating the model into the system for real-time recognition. The process allows the system to recognize new gestures and trigger specific actions.
[00045] FPS stands for Frames Per Second and refers to the number of individual frames (images) displayed or processed by a system in one second. In the context of video or real-time systems like gesture recognition, 30 FPS means the system processes or displays 30 frames per second, each frame being a snapshot of the scene captured by the camera. At 30 FPS, motion appears smooth and natural to the human eye, which is important for accurately tracking fast gestures. A rate of 30 FPS strikes a balance between processing speed and system performance, allowing gesture recognition to happen efficiently without overloading the system.
[00046] Many cameras and video applications default to 30 FPS, making it a common choice. Based on the provided code, the UI for hand gesture-triggered web browser actions would display real-time feedback, gesture recognition, and action status. Below are examples of possible UI designs.
1. Live Feed with Gesture Overlay
Description:
• Displays the live camera feed with hand landmarks drawn in real-time.
[00047] • Shows gesture feedback (e.g., "Gesture Detected: Pinch") and action status.
UI Layout:
[Live Camera Feed with Hand Landmarks]
--------------------------------------------------
Gesture Detected: "Thumb-Index Pinch"
Action: Opening https://www.youtube.com
--------------------------------------------------
How It Works:
• Hand landmarks (e.g., fingertips) are visually drawn on the video feed using Mediapipe.
• Detected gestures are labelled dynamically on-screen, along with the corresponding action.
2. Gesture Confirmation with Action Log
Description:
• Logs all detected gestures and their corresponding actions in a side panel or footer.
UI Layout:
[Live Camera Feed with Hand Tracking]
--------------------------------------------------
Action Log:
✓ Thumb-Index Pinch → Opened YouTube
✓ Thumb-Middle Pinch → Opened Gmail
How It Works:
• A running list of gestures and triggered actions appears in real-time.
• The running list provides users with visual confirmation of gestures and websites opened.
3. Minimalist UI with Countdown Timer
Description:
• Displays only essential feedback with a countdown timer before opening the website.
UI Layout:
[Live Camera Feed with Hand Tracking]
Gesture Detected: Thumb-Index Pinch
Opening YouTube in 3... 2... 1...
How It Works:
• After detecting a gesture, a timer appears, giving the user time to cancel or adjust the gesture.
• Upon completion, the corresponding website opens.
4. Side Menu with Gesture Instructions
Description:
• Includes a menu showing available gestures and their associated actions.
• Highlights the detected gesture in real-time.
UI Layout:
[Live Camera Feed]
Available Gestures:
[✓] Thumb-Index Pinch → Open YouTube
[ ] Thumb-Middle Pinch → Open Gmail
[ ] Thumb-Ring Pinch → Open LinkedIn
[ ] Ring-Middle Pinch → Open ChatGPT
[ ] Middle-Index Pinch → Open Google
How It Works:
• As a gesture is detected, it is highlighted in the list, providing immediate visual confirmation.
5. Gesture-Triggered Popup Notification
Description:
• Displays a popup notification when a gesture is detected and an action is triggered.
UI Example:
[Live Camera Feed]
[Popup] Gesture Detected: Thumb-Index Pinch
Action: Opening YouTube.com
How It Works:
• A small popup appears on top of the camera feed to confirm the detected gesture and action.
Implementation Notes:
• Live Feed: Use OpenCV to render real-time video and overlay hand landmarks using Mediapipe.
• Gesture Feedback: Use Kivy labels or popups to dynamically display detected gestures and actions.
• Action Log: Maintain a list of detected gestures in a Kivy layout (e.g., BoxLayout or GridLayout).
• Styling: Enhance UI appearance with Kivy’s customization options (e.g., fonts, colors).
Real-Time Hand Gesture Recognition:
• Uses Mediapipe's Hand Tracking module to detect and track 21 hand landmarks per hand in real-time.
• Supports up to two hands simultaneously.
Gesture Detection Logic:
• Measures the Euclidean distance between specific finger landmarks (e.g., thumb and index fingertips) to identify predefined gestures.
• Example: A thumb-index pinch triggers YouTube, while a thumb-middle pinch triggers Gmail.
High Frame Rate Processing:
• Operates at 30 frames per second (FPS), providing smooth and responsive gesture tracking.
• Ensures minimal latency for real-time interactions.
Web Browser Automation:
• Integrates Python's webbrowser module to open specific websites when gestures are detected.
• Example: Opening LinkedIn, Gmail, or Google automatically based on hand gestures.
Camera Integration:
• Uses OpenCV for video capture and frame processing.
• Includes real-time mirroring (flipping frames horizontally) for intuitive user interaction.
Hand Landmark Visualization:
• Draws landmarks and connections on the live camera feed using Mediapipe’s drawing utilities.
• Provides visual feedback for detected hand positions and gestures.
Cross-Platform GUI with Kivy:
• Uses the Kivy framework to create a user interface for displaying the live video feed.
• Ensures compatibility across operating systems (Windows, macOS, Linux).
Custom Gesture Definition:
• Gestures are defined based on distance thresholds between specific landmarks.
• Custom gestures can be added by defining new distance-based conditions in the code.
Ease of Resource Management:
• Implements resource cleanup by releasing the camera and Mediapipe resources (self.capture.release() and hands.close()) when the application is stopped.
• Individuals who frequently access specific websites can open them quickly using simple hand gestures.
o Example: A user can open YouTube, Gmail, LinkedIn, or Google instantly without typing or clicking.
• Enables hands-free browsing during presentations or teaching sessions.
o Example: Educators can use gestures to open resources (like Google or YouTube) during a live lecture.
• Programmers working on gesture recognition can use hands-free browsing as a foundation to build advanced gesture-based applications.
o Example: Adding new gestures to trigger other apps or expanding the system for multi-hand gestures.
• Beneficial for people with mobility challenges to navigate websites without traditional input devices.
o Example: A person with limited hand dexterity can use large and simple gestures to browse.
• Designers working on interactive or touchless UI systems can use the code as a prototype for integrating gesture control.
o Example: Designers can incorporate gestures into kiosks or AR systems to trigger specific actions.
• The system can be extended to control smart devices through gestures.
o Example: Triggering a browser to load smart home dashboards with a gesture.
Combining Mediapipe, Kivy, and OpenCV to design a hand-gesture-based browser automation system is not trivial because it requires expertise in:
• Real-time video processing.
• Gesture recognition algorithms.
• GUI development.
• Framework interoperability and optimization.
[00048] The main technical challenge lies in ensuring real-time performance and gesture accuracy while seamlessly integrating the frameworks and managing system resources. Overcoming these challenges results in a highly interactive and innovative application.
[00049] FIG. 2 illustrates a flow chart of a method 200 for providing a system for gesture-controlled browser opening, according to some embodiments herein. At step 202, the method 200 includes processing, by the system, each frame to obtain RGB images. At step 204, the method 200 includes detecting, by the system, a hand gesture from the RGB images by creating landmarks. At step 206, the method 200 includes analysing, by the system, relationships between the landmarks. At step 208, the method 200 includes recognizing, by the system, gestures based on the analysed relationships between each landmark. At step 210, the method 200 includes performing, by the system, an action associated with the recognized gesture.
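The steps of method 200 can be wired together as a small pipeline. In the sketch below each stage is injected as a callable so the flow of steps 204 through 210 is explicit; the stage implementations are placeholders, not the actual system:

```python
def run_pipeline(frame, detect, analyse, recognize, perform):
    # frame is assumed already processed to an RGB image (step 202).
    landmarks = detect(frame)       # step 204: landmarks from the RGB image
    features = analyse(landmarks)   # step 206: relationships between landmarks
    gesture = recognize(features)   # step 208: gesture from the relationships
    return perform(gesture)         # step 210: action for the recognized gesture
```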
[00050] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practised with modification within the scope of the appended claims.
Claims:
We claim:
1. A system for gesture-controlled browser opening, comprising:
a camera (102) that is configured to capture real time video frames;
a memory (104);
at least one processor (106); and
a controller (108) connected to the memory (104) and the at least one processor (106), wherein the controller (108) is configured to:
process each frame to obtain RGB images;
detect a hand gesture from the RGB images by creating landmarks;
analyse relationships between the landmarks;
recognize gestures based on the analysed relationships between the landmarks; and
perform an action associated with the recognized gesture.
2. The system (100) as claimed in claim 1, wherein the controller is further configured to filter noisy data.
3. The system (100) as claimed in claim 1, wherein the controller is further configured to determine hand orientation using relative positions and angles between landmarks.
4. The system (100) as claimed in claim 1, wherein the controller is further configured to recognise custom gestures by calculating the distances between multiple fingers using the hand landmarks.
5. The system (100) as claimed in claim 1, wherein the action comprises opening websites, zooming in/out, or controlling applications.
6. A method (200) for providing a system for gesture-controlled browser opening, comprising:
processing (202), by the system (100), each frame to obtain RGB images;
detecting (204), by the system (100), a hand gesture from the RGB images by creating landmarks;
analysing (206), by the system (100), relationships between the landmarks;
recognizing (208), by the system (100), gestures based on the analysed relationships between the landmarks; and
performing (210), by the system (100), an action associated with the recognized gesture.
7. The method (200) as claimed in claim 6, wherein the controller (108) is further configured to filter noisy data.
8. The method (200) as claimed in claim 6, wherein the controller (108) is further configured to determine hand orientation using relative positions and angles between landmarks.
9. The method (200) as claimed in claim 6, wherein the controller (108) is further configured to recognise custom gestures by calculating the distances between multiple fingers using the hand landmarks.
10. The method (200) as claimed in claim 6, wherein the action comprises opening websites, zooming in/out, or controlling applications.