Abstract: A directional audible signalling system for vehicles comprises a monitoring arrangement mounted on an external surface of a vehicle, the monitoring arrangement comprising a rear sensing unit 101 that detects both living and non-living elements and measures their distance from the vehicle, and a front sensing unit 104 that captures images of frontal elements and measures noise levels in the vicinity; a detection module that processes sensor data from the monitoring arrangement to determine the type of element and its distance from the vehicle; an alert unit 107 that generates audible alerts uniquely indicating the type and distance of the detected element; a horn direction control unit 201 that steers the direction of sound according to the direction of detected elements; a GPS unit that determines the instant location of the vehicle in real-time, allowing control of horn magnitude; a magnitude control module that calculates and sets an appropriate horn volume; and a sound control module that refines and controls the amplitude of the vehicle's horn.
Description:
FIELD OF THE INVENTION
[0001] The present invention relates to automotive safety systems and, more particularly, to a directional audible signalling system for vehicles that enhances situational awareness for a user by detecting obstructions around the vehicle and responding with directional, context-sensitive horn alerts based on the type and position of nearby elements.
BACKGROUND OF THE INVENTION
[0002] Driving a vehicle in crowded or noise-polluted areas often presents significant challenges in identifying and reacting to nearby obstructions. In day-to-day scenarios, drivers frequently encounter pedestrians, animals, and other vehicles appearing unexpectedly near blind spots or rear areas of the vehicle. Such situations require timely alerting mechanisms to prevent accidents. However, traditional vehicle horns lack the capability to indicate the direction, type, or urgency of an obstruction, often leading to confusion or inadequate responses from surrounding entities. This becomes particularly critical in urban and residential areas where unnecessary honking contributes to noise pollution and fails to serve its intended purpose effectively.
[0003] In many real-world driving conditions, especially during reversing or navigating tight corners, the driver's view is partially obstructed. Although the instinctive action is to use an audible signal to alert nearby individuals, the limitation of conventional alert mechanisms is that they function in a uniform manner, broadcasting the same sound regardless of situation or severity. Moreover, such systems do not consider external conditions like time of day or ambient noise levels, and thereby frequently cause unnecessary disturbances, especially in residential or hospital zones where strict noise regulations are enforced. This lack of context-awareness makes traditional systems ineffective and sometimes counterproductive.
[0004] WO2000012354A1 discloses a method and system with a device for signalling a vehicle's presence or movement. The invention preferably includes a sounding or signalling device that is attached or associated with an electric power vehicle. The invention includes the steps of providing signal or sound generating means associated with said vehicle. Furthermore, the signal or sound generated frequency or loudness may be increased and decreased as the vehicle's direction and/or road speed or mechanical speed changes. The signal or sound may be generated by electronic, electric, mechanical, airflow or movement means, or by means as appropriate in signalling the vehicle's presence and movement to other vehicle users and pedestrians.
[0005] US7106180B1 discloses an acoustic warning or alerting system for directing an audible warning signal to at least one intended recipient, while reducing the chance that the warning signal will be heard by others within the proximity of the system. The system includes a modulator for modulating an ultrasonic carrier signal with a processed audio signal, a driver amplifier for amplifying the modulated carrier signal, and a parametric array of acoustic transducers for projecting the modulated and amplified carrier signal through a propagation medium for subsequent regeneration of the audio signal along a pre-selected projection path. The parametric array of audio transducers operates by employing the nonlinear interaction between high frequency sound components and the propagation medium to generate at least one highly directional beam of lower frequency sounds within the propagation medium. The directional acoustic alerting system may be employed as a replacement for conventional alerting systems such as horns, whistles, and bells to assure that primarily only those people and/or animals intended to hear the warning signal actually hear the sound.
[0006] Conventionally, many systems have been implemented in modern vehicles to improve environmental awareness, yet most are limited to informing the driver and not the surrounding individuals. More importantly, they fail to differentiate between types of obstacles or determine the optimal way to convey urgency to those nearby, and do not allow for modulation of audio signals based on direction, relevance, or distance, which diminishes the impact of truly urgent alerts and contributes to unnecessary noise pollution in already congested soundscapes.
[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a system capable of actively sensing and interpreting the driving environment. Such a system should be capable of analyzing the proximity, nature, and position of nearby elements and responding by generating contextually appropriate alerts. These alerts should be directional, modulated in tone and volume, and responsive to both user preferences and external factors such as geographic location and time of day.
OBJECTS OF THE INVENTION
[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.
[0009] An object of the present invention is to develop a system that enables real-time identification of obstacles or living beings near the vehicle, thereby significantly reducing the risk of collisions, especially in blind spots or during reversing maneuvers.
[0010] Another object of the present invention is to develop a system that allows the user to quickly understand the nature of the potential hazard without visual distraction by producing customized audible tones based on the type and distance of nearby elements.
[0011] Another object of the present invention is to develop a system that directs the horn sound only toward the location of the obstacle, thereby reducing unnecessary noise in the environment and avoiding disturbance to unrelated bystanders.
[0012] Another object of the present invention is to develop a system that automatically adjusts horn volume based on geographic area and time of day (e.g., near hospitals or during night hours), supporting compliance with local noise regulations and improving community friendliness.
[0013] Another object of the present invention is to develop a system that factors in existing environmental noise levels when adjusting the horn’s volume, ensuring the alert is neither too loud nor too soft for the situation, which enhances its effectiveness and reduces auditory overload.
[0014] Yet another object of the present invention is to develop a system that allows the user to configure alert tones and volumes according to their personal preferences, allowing for a more comfortable and intuitive driving experience, especially for individuals with specific auditory sensitivities or preferences.
[0015] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0016] The present invention relates to a directional audible signalling system for vehicles integrated with environmental monitoring, which allows a vehicle to dynamically control the tone, volume, and direction of horn sounds in real-time, thereby improving alert efficiency while minimizing unnecessary noise pollution in sensitive locations.
[0017] According to an embodiment of the present invention, a directional audible signalling system for vehicles that identifies and alerts the user to nearby obstructions comprises a monitoring arrangement mounted on an external surface of a vehicle. The monitoring arrangement comprises a rear sensing unit installed on the vehicle's rear surface, equipped with a thermal sensor and an ultrasonic sensor to detect both living and non-living elements and measure their distance from the vehicle, and a front sensing unit mounted on the front surface of the vehicle, comprising a camera to capture images of frontal elements and a microphone-based sound sensor to measure noise levels in the vicinity. A detection module receives and processes sensor data from the monitoring arrangement to determine the type of element (e.g., living, non-living) and its distance from the vehicle. An alert unit, implemented as an audio transducer and installed inside the vehicle, generates audible alerts that uniquely indicate the type and distance of the detected element using tones and intensities.
[0018] In another embodiment of the present invention, the alert unit is enhanced with a user profile module, configured with a control unit that supports personalized user profiles, each allowing the user to select their preferred tone and intensity settings. A horn direction control unit is mounted over the vehicle's horn to steer the direction of the sound according to the direction of the detected elements; the horn direction control unit includes an enclosure, with first and second motorized rollers installed on each lateral side of the enclosure, each containing a spool of acoustic foam, which is selectively deployed to redirect horn sound toward a specific direction. A location-based database is connected to the control unit, containing data on suitable horn magnitudes based on time and location to comply with regulations or noise sensitivity zones. A GPS unit determines the instant location of the vehicle in real-time, allowing adaptive control of horn magnitude depending on where the vehicle is operating. A magnitude control module, configured with the control unit, receives data on vehicle location and time of day to calculate and set an appropriate horn volume, ensuring regulatory and contextual compliance. A sound control module, operatively connected to the vehicle's control hub, utilizes input from the front sensing unit to further refine and control the amplitude of the vehicle's horn based on detected environmental noise.
[0019] According to another embodiment of the present invention, the system further includes a method for uniquely identifying an obstruction, involving the following steps: detecting the presence of an obstacle, determining its type (e.g., human, object), sensing its direction, calculating the distance from the vehicle, selecting a pre-defined audio tone corresponding to the obstacle type, playing the tone at a volume based on distance, and directionally limiting the horn’s sound toward the obstruction to provide targeted auditory alerts.
[0020] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 exemplarily illustrates a perspective view of a directional audible signalling system for vehicles;
Figure 2 exemplarily illustrates an isometric view of a horn direction control unit associated with the system;
Figure 3 exemplarily illustrates a schematic diagram of the system; and
Figure 4 exemplarily illustrates a flowchart depicting the workflow of the system.
DETAILED DESCRIPTION OF THE INVENTION
[0022] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.
[0023] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.
[0024] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0025] The present invention relates to a directional audible signalling system for vehicles that automatically identifies living or non-living obstructions, determines their direction and proximity, and emits a uniquely tailored audio tone toward the obstruction, while also adjusting volume based on location-specific noise regulations for assisting the user in alerting surrounding individuals or objects.
[0026] Figure 1 exemplarily illustrates a perspective view of a directional audible signalling system for vehicles, comprising a rear sensing unit 101 installed over a rearward surface of a vehicle, the rear sensing unit 101 comprising a thermal sensor 102 and an ultrasonic sensor 103; a front sensing unit 104 mounted over a front surface of the vehicle, the front sensing unit 104 comprising a camera 105 and a sound sensor 106; and an alert unit 107 installed inside the vehicle.
[0027] The present invention pertains to a directional audible signalling system configured for integration into a vehicle, particularly for enhancing the safety, awareness, and acoustic discipline of vehicles operating in dynamic environments. The system enables real-time detection, classification, and response to nearby entities, both animate (e.g., humans, animals) and inanimate (e.g., barriers, vehicles, poles), using a coordinated sensing and signalling approach. The core utility lies in its ability to modulate audible output not only in intensity (decibel level) but also in directionality, which allows targeted communication of risk to a specific zone around the vehicle, minimizing indiscriminate noise.
[0028] This provides a dual benefit: (i) improving user alertness to potential hazards through differentiated in-cabin alerts, and (ii) projecting horn output selectively in the direction of the perceived obstruction, thereby aligning with modern goals of urban noise regulation and intelligent traffic signalling.
[0029] The system includes a monitoring arrangement mounted over an external surface of a vehicle, functioning as the primary sensory interface for environmental detection, responsible for acquiring real-time environmental data from both front and rear regions of the vehicle.
[0030] In an embodiment of the present invention, the directional audible signalling system may be integrated into standard vehicles such as two-wheeler, three-wheeler, and four-wheeler vehicles. These vehicles operate frequently in urban and suburban environments where dense traffic, pedestrian movement, and noise regulation zones are common. The monitoring arrangement can assist users in preventing back-over incidents and alerting selectively during forward movement, particularly in parking lots, schools, hospitals, and residential neighbourhoods.
[0031] The monitoring arrangement comprises two distinct but functionally coordinated units, namely a rear sensing unit 101 and a front sensing unit 104, each integrated with dedicated sensors suited for multi-modal detection. The rear sensing unit 101 is affixed to the rearward-facing surface of the vehicle.
[0032] In an embodiment of the present invention, the rear sensing unit 101 may be affixed to a rear bumper of the vehicle.
[0033] In another embodiment of the present invention, the rear sensing unit 101 may be affixed to a tailgate of the vehicle.
[0034] The rear sensing unit 101 comprises a thermal sensor 102 and an ultrasonic sensor 103, each operating through separate physical principles yet working in tandem to perform classification and ranging of objects near the vehicle's rear.
[0035] The thermal sensor 102 functions by detecting infrared radiation emitted by nearby elements (e.g., humans, animals, barriers, vehicles, and poles). In an embodiment of the present invention, the thermal sensor 102 includes a microbolometer array that responds to changes in incident IR radiation. The array converts thermal energy into electrical signals that are processed by an on-board analog-to-digital converter (ADC) and sent to a control unit associated with the system, where temperature gradients are mapped. Living beings, having higher and distinct thermal signatures, are distinguishable from inanimate objects.
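The temperature-gradient mapping described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation: the threshold, minimum blob size, and function name are assumptions chosen for the example.

```python
# Illustrative sketch (not the patented implementation): classify a mapped
# thermal frame as "living" or "non-living" by looking for warm blobs.
LIVING_TEMP_C = 30.0   # assumed threshold for warm-body detection
MIN_BLOB_CELLS = 3     # assumed minimum blob size, to reject single-pixel noise

def classify_thermal_frame(frame):
    """frame: 2D list of per-pixel temperatures (deg C) after ADC mapping.
    Returns "living" if a sufficiently large warm blob is present."""
    hot = {(r, c) for r, row in enumerate(frame)
           for c, t in enumerate(row) if t >= LIVING_TEMP_C}
    seen = set()
    for start in hot:
        if start in seen:
            continue
        # flood-fill one 4-connected blob of hot pixels
        stack, blob = [start], 0
        while stack:
            cell = stack.pop()
            if cell in seen or cell not in hot:
                continue
            seen.add(cell)
            blob += 1
            r, c = cell
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        if blob >= MIN_BLOB_CELLS:
            return "living"
    return "non-living"
```

A frame containing a contiguous warm region larger than the assumed blob size is reported as a living element; a uniformly ambient frame is not.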
[0036] In an embodiment of the present invention, a millimeter-wave (mmWave) radar sensor might be employed to detect presence, motion, and some thermal characteristics (e.g., respiration, human body movement).
[0037] In another embodiment of the present invention, a passive infrared (PIR) sensor might be employed to detect movement of warm objects (like human bodies) through changes in infrared radiation levels.
[0038] In yet another embodiment of the present invention, a standard camera with an AI model (e.g., YOLO, MobileNet) might be employed to identify human, animal, and object classes from image data.
[0039] On the other hand, the ultrasonic sensor 103 is also mounted in the rear sensing unit 101. In a preferred embodiment of the present invention, the ultrasonic sensor 103 consists of a piezoelectric transducer that emits ultrasonic pulses. The time interval between pulse emission and echo reception is used to compute distance. The ultrasonic sensor 103 includes a transmitter and receiver pair, signal conditioning circuits (e.g., amplifiers, filters), and a timing module.
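The time-of-flight calculation mentioned above follows from distance = (speed of sound × round-trip time) / 2. The sketch below is illustrative; the temperature-dependent speed-of-sound approximation and the function name are assumptions, not part of the disclosure.

```python
# Illustrative time-of-flight ranging for the ultrasonic sensor 103:
# the echo delay is the round trip, so the one-way distance is half.
def ultrasonic_distance_m(echo_delay_s, air_temp_c=20.0):
    """Distance to an obstacle from the pulse-to-echo delay (seconds)."""
    speed_of_sound = 331.3 + 0.606 * air_temp_c  # m/s, varies with temperature
    return speed_of_sound * echo_delay_s / 2.0   # halve: sound travels out and back
```

For example, an echo received roughly 5.83 ms after the pulse at 20 °C corresponds to an obstacle about 1 m behind the vehicle.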
[0040] Other embodiments of the present invention include single-point ultrasonic modules for basic obstacle detection, or phased ultrasonic arrays that allow limited directional mapping.
[0041] In another embodiment of the present invention, other versions may utilize pulse-coded modulation (PCM) to filter cross-interference when multiple sensors are employed across the vehicle's rear fascia.
[0042] In another embodiment of the present invention, the mmWave radar sensor measures Doppler shift and phase delay to compute the distance and velocity of objects with high accuracy.
[0043] In another embodiment of the present invention, a LiDAR (Light Detection and Ranging) sensor might be employed to emit laser pulses and calculate the time-of-flight of each reflected beam to create high-resolution 3D point clouds.
[0044] In yet another embodiment of the present invention, a stereo camera (depth camera) might be employed, where a pair of cameras spaced apart captures stereo images and depth is calculated through triangulation.
[0045] On the other hand, the front sensing unit 104, positioned over a frontal region of the vehicle such as the front bumper, hood, grille, or windshield area, incorporates a camera 105 and a sound sensor 106 (specifically a microphone) for visual and auditory environmental awareness.
[0046] The camera 105 is configured to continuously capture forward-facing image frames that are sent to the control unit. In a preferred embodiment of the present invention, the camera 105 is an AI-based camera that captures multiple images of the area in front of the vehicle from different angles and perspectives simultaneously, providing comprehensive coverage of the front of the vehicle. Before analysis, each captured image goes through pre-processing steps to enhance image quality, including adjusting brightness and contrast and removing any distortions.
[0047] The processed images are then sent to a processor linked with the camera 105. The processor processes the captured images by means of an artificial intelligence protocol embedded within the control unit for detecting the elements in front of the vehicle. The control unit uses an artificial intelligence protocol such as a Convolutional Neural Network (CNN) for detecting distinctive patterns or characteristics in the image, such as human shapes, vehicle contours, lane markings, road signs, animal outlines, traffic cones, obstacles, or any anomalous moving object. Once potential features are detected, the control unit localizes them by identifying their positions within the image. This involves finding their coordinates or the regions of interest where these features are located.
[0048] The CNN architecture performs this by applying a series of convolutional, pooling, and activation layers to extract hierarchical spatial features, followed by bounding box regression or pixel-wise segmentation depending on the task. The localized regions are then passed through classification layers or additional decision-making protocols to determine the type of object, its relative size, movement vector (if any), and proximity to the vehicle.
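The post-detection localization step described above can be sketched as follows. This is an illustrative sketch under assumptions: the detection tuple format, confidence threshold, and coarse left/centre/right bearing scheme are hypothetical, not the disclosed CNN pipeline.

```python
# Illustrative post-processing of CNN detections (not the disclosed model):
# each detection is (label, confidence, (x, y, w, h)) in image pixels.
CONF_THRESHOLD = 0.5  # assumed minimum confidence to keep a detection

def localize(detections, frame_width):
    """Keep confident detections and tag each with a coarse bearing
    (left/centre/right) from its bounding-box centre."""
    results = []
    for label, conf, (x, y, w, h) in detections:
        if conf < CONF_THRESHOLD:
            continue  # discard low-confidence candidates
        cx = x + w / 2.0                 # horizontal centre of the box
        third = frame_width / 3.0
        bearing = "left" if cx < third else "right" if cx > 2 * third else "centre"
        results.append((label, bearing))
    return results
```

The bearing assigned here is the kind of per-element direction information the horn direction control unit can later act on.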
[0049] In an embodiment of the present invention, the camera 105 may be a monocular RGB camera, a stereo vision camera, or a depth camera using structured light or time-of-flight methods for three-dimensional mapping.
[0050] The sound sensor 106, implemented as a microphone, is designed to detect the magnitude of ambient acoustic signals surrounding the front of the vehicle. Internally, the microphone plays a crucial role by converting ambient sounds into electrical signals, which are then processed and analyzed to trigger specific actions. Surrounding sounds create sound waves, which travel through the air as variations in air pressure.
[0051] The microphone mentioned herein is a transducer that converts these variations in air pressure into electrical signals. The analog electrical signal is converted into digital form by an analog-to-digital converter (ADC). The digital signal is then subjected to various signal processing techniques to suppress noise and accurately detect the magnitude of ambient acoustic signals surrounding the front of the vehicle.
[0052] In an embodiment of the present invention, the microphone may be part of a microphone array enabling directional audio capture or beamforming, or may include a calibrated SPL meter for measuring exact sound pressure levels in decibels (dB). The audio data is processed to determine if the environment is noisy or quiet, which aids the system in deciding whether a horn signal needs to be emitted, and if so, how loud it should be.
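The noisy-versus-quiet decision described above ultimately rests on an RMS level estimate of the digitized microphone signal. The following sketch is illustrative: the full-scale-to-94 dB SPL mapping is an assumed calibration point (in practice supplied by the calibrated SPL meter mentioned above), and the function name is hypothetical.

```python
# Illustrative ambient-level estimate from digitized microphone samples.
# Assumption: a full-scale RMS signal corresponds to 94 dB SPL (a common
# calibrator level); real systems derive this from actual calibration.
import math

def ambient_level_db(samples, full_scale=1.0, calib_db=94.0):
    """RMS level of the sample window, in dB relative to the assumed
    full-scale calibration point. Returns -inf for silence."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")
    return calib_db + 20.0 * math.log10(rms / full_scale)
```

A full-scale sine window, whose RMS is 1/√2 of full scale, evaluates to roughly 3 dB below the calibration level.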
[0053] In another embodiment of the present invention, the sound sensor 106 may include an electret condenser microphone, which consists of a diaphragm and a permanently charged electret material placed near a back plate. Changes in air pressure (sound waves) cause the diaphragm to move, altering capacitance and producing an analog voltage signal.
[0054] In another embodiment of the present invention, the sound sensor 106 may include MEMS microphones, which are miniaturized sound sensors built using microfabrication technology. They typically include a capacitive diaphragm on a silicon substrate, integrated with on-chip amplification and analog or digital output stages.
[0055] In another embodiment of the present invention, the sound sensor 106 may be a piezoelectric material that generates a voltage when subjected to acoustic pressure, often used in rugged or waterproof applications.
[0056] In yet another embodiment of the present invention, the sound sensor 106 may be integrated with embedded AI capabilities that classify types of sounds (e.g., honking, sirens, speech) using pre-trained neural networks or sound pattern libraries.
[0057] A machine learning-based detection module, executable by a processor from an integrated memory, is configured to receive and process the sensed data acquired by the monitoring arrangement in order to accurately determine both the type of element (living or non-living) and its distance from the vehicle (as illustrated in Figure 4).
[0058] In an embodiment of the present invention, the detection module comprises a data interface unit, a signal conditioning and conversion circuit, a classification engine, and a distance computation engine. The data interface unit is responsible for establishing data communication protocols (e.g., UART, I²C, SPI, or CAN) to receive synchronized input from the rear sensing unit 101 and front sensing unit 104 of the monitoring arrangement, thereby ensuring accurate time-stamping and channel separation of thermal sensor 102, ultrasonic sensor 103, camera 105, and sound sensor 106 data.
[0059] The signal conditioning and conversion circuit is implemented using analog front-end components and analog-to-digital converters (ADC) where required. For instance, in the case of analog ultrasonic signals or thermal sensor 102 voltage outputs, the module first normalizes and digitizes the inputs to convert them into a usable format for subsequent digital analysis.
[0060] In an embodiment of the present invention, once the input is digitized, the data is passed to the classification engine, which applies feature extraction protocols to interpret sensor patterns and recognize the type of element detected.
[0061] In another embodiment of the present invention, the classification engine applies lightweight machine learning models to interpret sensor patterns and recognize the type of element detected.
[0062] For example:
• Thermal sensor 102 data is analyzed using thermal gradient thresholds and blob detection to identify living beings based on heat signatures.
• Camera 105 images from the front sensing unit 104 are optionally processed through AI-based models such as convolutional neural networks (CNNs) or Haar cascade classifiers to recognize common vehicle-side elements like pedestrians, animals, traffic barriers, or vehicles.
• Sound sensor 106 data is interpreted to identify auditory cues (e.g., approaching vehicle horns or emergency sirens) and categorize them accordingly.
[0063] In parallel, the distance computation engine determines the proximity of the detected elements using distance-estimation principles appropriate to each sensor type. For example:
• Ultrasonic sensor 103 data is processed using time-of-flight calculations.
• Camera 105 data is processed using disparity mapping or monocular depth cues to estimate range.
[0064] In another embodiment of the present invention, the system may employ an FPGA-based detection engine suitable for high-speed parallel processing of multiple sensor inputs.
[0065] In another embodiment of the present invention, the system may employ modular architecture with swappable sensor interfaces, which allows the detection module to adapt dynamically to different sensor sets.
[0066] In yet another embodiment of the present invention, the system may employ cloud-connected detection via an edge gateway. In a connected vehicle, the detection module may offload complex classification tasks to a cloud AI engine via secure communication protocols (e.g., MQTT or HTTPS), especially when handling large camera datasets.
[0067] A sound control module is operatively connected to the vehicle’s central control hub, functioning as a key component responsible for dynamically adjusting the amplitude (i.e., loudness) of the horn based on real-time environmental inputs. The sound control module primarily relies on data collected by the front sensing unit 104, which includes the camera 105 and the sound sensor 106 (specifically, a microphone). The purpose of the sound control module is to customize the horn's output volume based on both visual recognitions of surrounding elements and ambient noise levels in the vehicle’s vicinity.
[0068] The sound control module receives captured environmental data from the front sensing unit 104. The microphone in the front sensing unit 104 measures the magnitude of ambient noise, while the camera 105 collects visual information about obstacles or traffic conditions. This combined data is routed through the control hub of the vehicle and analyzed by the sound control module. In an embodiment of the present invention, the module may also receive auxiliary information such as vehicle speed, current operational mode (urban, rural, school zone, etc.), and user preferences from the control hub.
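The amplitude decision made by the sound control module can be sketched as: start from a base horn level, raise it so it remains audible above the measured ambient noise, then cap it by the current zone. All numeric limits, zone names, and the function name below are assumptions for illustration, not values from the disclosure.

```python
# Illustrative sketch of the sound control module's amplitude decision.
# Assumed zone caps (dB), headroom above ambient, and base horn level:
ZONE_CAPS_DB = {"hospital": 70, "residential": 80, "urban": 95, "rural": 105}
HEADROOM_DB = 10  # assumed margin the horn should sit above ambient noise
BASE_DB = 85      # assumed default horn level

def horn_level_db(ambient_db, zone):
    """Horn output level: audible above ambient, but within the zone cap."""
    wanted = max(BASE_DB, ambient_db + HEADROOM_DB)
    return min(wanted, ZONE_CAPS_DB.get(zone, ZONE_CAPS_DB["urban"]))
```

Under these assumptions, a quiet hospital zone clamps the horn to 70 dB, while a 90 dB urban soundscape pushes the requested level up before the urban cap limits it.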
[0069] An alert unit 107 is a critical output component of the system, strategically installed inside the vehicle cabin, and configured to generate audible alerts that inform the user of detected objects in the vehicle’s surroundings. The primary function of this alert unit 107 is to convey situational awareness by delivering distinct sound patterns that correspond to both the type of element detected (e.g., pedestrian, animal, vehicle, static object) and its distance from the vehicle. The alerts are produced in real-time to enable immediate user response.
[0070] In a practical use case, consider a vehicle equipped with the system operating in a residential neighborhood. As the driver attempts to reverse the vehicle, the rear sensing unit 101 detects the presence of a small child directly behind the vehicle at a distance of approximately 1 meter. This information is relayed to the detection module, which analyzes the input data to identify the detected entity as a living element and calculate its precise distance from the rear of the vehicle.
[0071] Based on this analysis, the alert unit 107 is triggered to generate an audible alert inside the vehicle. Given that the detected object is a living being at a close and potentially hazardous distance, the system selects a high-pitched tone with high intensity (loud volume). This alert is immediately recognizable by the driver as a warning of an urgent and high-risk situation, prompting them to halt the vehicle without delay.
[0072] In contrast, consider another scenario where the vehicle is moving forward in a parking lot. The front sensing unit 104, equipped with a camera 105 and sound sensor 106, identifies a stationary object such as a parked bicycle approximately 5 meters ahead of the vehicle. The detection module classifies the object as non-living and determines that it poses no immediate threat due to its distance. In response, the alert unit 107 generates a soft, low-pitched tone at a moderate volume, indicating to the driver that there is a non-living obstacle ahead at a safe distance.
[0073] In a preferred embodiment of the present invention, the alert unit 107 comprises an audio transducer. The audio transducer converts electrical energy into sound energy to produce audible alerts for the user. The transducer receives electrical signals from the control unit based on the data interpreted from the monitoring arrangement and detection module. These signals represent specific instructions about the tone (frequency) and intensity (amplitude) corresponding to the type and distance of the detected element.
[0074] This audio transducer is electrically driven by a signal generator controlled by the control unit. In an embodiment of the present invention, the audio transducer is driven by the pulse width modulation (PWM) circuit. The transducer converts the electrical signals into mechanical vibrations, thereby producing audible sound waves at predefined frequencies and amplitudes.
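The PWM drive described above maps tone frequency and intensity onto timer settings. The sketch below is illustrative only: the timer clock rate, the 50% duty cap, and the function name are assumptions, not disclosed circuit parameters.

```python
# Illustrative PWM drive parameters for the audio transducer: the tone
# frequency sets the timer period, and intensity scales the compare value.
TIMER_HZ = 1_000_000  # assumed 1 MHz timer clock feeding the PWM peripheral

def pwm_settings(tone_hz, intensity):
    """intensity in [0, 1] -> (timer period in ticks, compare value in ticks)."""
    period = TIMER_HZ // tone_hz          # ticks per tone cycle
    duty = max(0.0, min(1.0, intensity))  # clamp intensity to [0, 1]
    compare = int(period * duty * 0.5)    # cap at 50% duty for a square-ish tone
    return period, compare
```

For instance, a full-intensity 2 kHz tone becomes a 500-tick period with a 250-tick compare value under the assumed clock.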
[0075] For example, if a human or an animal is detected behind the vehicle, the audio transducer produces a specific type of sound, which may vary in tone or pattern to distinguish it from other alerts.
• Initially, when the detected object is at a greater distance from the vehicle, the audio transducer emits a normal or low-intensity sound alert to notify the user of the presence of someone or something behind the vehicle.
• As the object moves closer, approaching the threshold or safe zone, the intensity of the sound increases significantly. This escalation in sound intensity serves as an urgent warning, prompting the user to stop the vehicle immediately to prevent any collision or accident.
• When non-living objects, such as poles, manholes, or other stationary obstacles that may potentially cause damage to the vehicle upon collision, are detected, the audio transducer emits different types of sounds with varying intensities.
[0076] In an embodiment of the present invention, the audio transducer may be implemented as a piezoelectric buzzer. In another embodiment of the present invention, the audio transducer is implemented as an electromagnetic speaker.
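The tone-and-intensity selection described in paragraphs [0073] to [0075] can be sketched as a simple mapping. This is an illustrative sketch only, not the patented implementation; the element categories, frequency values, and distance thresholds are assumptions chosen for the example.

```python
# Illustrative sketch (assumed values): map a detected element type and its
# distance from the vehicle to the frequency and amplitude that the audio
# transducer would be driven at.

def select_alert(element_type: str, distance_m: float) -> tuple[int, float]:
    """Return (frequency_hz, amplitude_0_to_1) for the audio transducer."""
    living = element_type in ("pedestrian", "animal", "cyclist")
    # Living elements get a high-pitched tone; non-living a lower tone.
    frequency_hz = 2000 if living else 800
    # Amplitude rises as the element approaches the vehicle.
    if distance_m < 2.0:      # inside the hazard zone: loud, urgent alert
        amplitude = 1.0
    elif distance_m < 5.0:    # approaching the threshold zone
        amplitude = 0.6
    else:                     # distant: normal, low-intensity notification
        amplitude = 0.3
    return frequency_hz, amplitude
```

A child detected 1 metre behind the vehicle would thus yield a high-pitched, full-volume alert, while a parked bicycle 5 metres ahead yields a low, moderate tone, matching the two scenarios in paragraphs [0070] to [0072].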
[0077] A user profile module is integrated with the control unit to enable personalized configurations for individual users by allowing the creation, storage, and management of unique user profiles. Each profile contains user-defined preferences related to the tone types, sound frequencies, volume levels, and modulation patterns that the alert unit 107 generates in response to environmental detections.
[0078] Each user profile includes a custom mapping between specific obstacle types (e.g., pedestrian, animal, static barrier, cyclist) and their corresponding alert tones and intensities. For instance, one user may prefer a high-pitched, short beep for pedestrian alerts and a lower, longer tone for non-living obstacles. Another user may opt for a musical chime instead of a traditional beep. The module ensures these preferences are implemented in real-time by routing the selected tone and intensity values to the alert unit 107.
[0079] In an embodiment of the present invention, the user profile module may include machine learning protocols that track user behaviour over time, such as how quickly a user responds to certain tones, and automatically adjust the alert configurations for optimal performance. It may also support remote updates through a connected mobile application or web interface, allowing users to configure or modify their alert settings without direct physical interaction with the vehicle’s on-board system.
[0080] Figures 2(a) and 2(b) exemplarily illustrate an isometric view of a horn direction control unit 201 associated with the system, comprising an enclosure 202 installed over the horn of the vehicle, a first motorized roller 203 on a first inward side of the enclosure 202 and a second motorized roller 204 on a second inward side of the enclosure 202, and acoustic foam 205 rolled around each of the first and second motorized rollers 203, 204.
[0081] The system includes a horn direction control unit 201, which includes a specially designed enclosure 202 that is mounted directly over the horn of the vehicle. This enclosure 202 acts like a casing or shield, and its primary function is to shape and redirect the horn’s sound rather than letting it scatter in all directions. Without this enclosure 202, the horn sound typically spreads widely, which may not be ideal when the vehicle wants to alert someone in a particular direction like a pedestrian behind or beside the car.
[0082] On both the left and right sides of the enclosure 202, there are first and second motorized rollers 203, 204 (like small rotating rods). These first and second rollers 203, 204 are designed to hold and control the release of a special material called acoustic foam 205, which is wrapped around them like a roll of tape. Each of the first and second rollers 203, 204 contains a spool (a wound-up roll) of this foam 205, and it is built to extend or retract the foam 205 as needed.
[0083] The acoustic foam 205 is a material that absorbs and redirects sound waves. When deployed (i.e., rolled out), this foam 205 blocks or dampens the horn sound from going in certain directions, while allowing it to focus more strongly in the open direction. For example, if the system detects a person only on the left side of the vehicle, the foam 205 on the right side can be extended to block the horn sound from traveling that way, while the left side remains open. This results in the horn being heard more clearly in the required direction, and less noise pollution elsewhere.
[0084] In an embodiment of the present invention, the directional control of the horn is achieved by enclosing the horn within a conical-shaped enclosure 202. This conical enclosure 202 may be constructed from acoustic foam 205 material, which possesses sound-absorbing and directional sound projection properties. The conical structure is mounted on a ball-and-socket joint, enabling it to be manually or electronically tilted and rotated in various directions. By adjusting the orientation of the cone, the direction of the horn sound is precisely controlled, allowing it to be focused toward a particular side such as the front, rear, or either lateral side of the vehicle based on the position of the detected element.
[0085] In another embodiment of the present invention, the horn is surrounded by a set of movable acoustic baffles or panels that are motor-controlled. These baffles open or close selectively on different sides of the horn, redirecting and narrowing the sound beam toward a targeted direction.
[0086] In another embodiment of the present invention, the horn is mounted on a motor-controlled rotating base. The detection module sends directional data to the motor, which then physically rotates the horn to face the area where an object is detected. This embodiment eliminates the need for sound-redirecting materials, instead relying on the direct movement of the horn to adjust its output direction.
[0087] In another embodiment of the present invention, a flexible, inflatable acoustic channel may be attached to the horn. When a certain direction is selected for sound emission, only that part of the channel inflates to form a funnel-like guide, channelling the sound waves specifically toward the target direction. The rest of the structure remains deflated to avoid obstructing other paths.
[0088] In yet another embodiment of the present invention, a set of multiple smaller horns is arranged in a circular or square pattern around the vehicle’s front or rear. Based on the detected object’s direction, only the horn(s) facing that direction are activated, while the others remain silent. This directional array technique allows for highly localized signalling without any need for mechanical movement.
[0089] The direction in which the foam 205 should be deployed is controlled by the horn direction control unit 201, which takes data from the monitoring arrangement. The monitoring arrangement (front and rear sensing units 104, 101) detects where an object or person is located: front, rear, left, or right. The horn direction control unit 201 then decides which side’s foam 205 should be deployed to direct the horn sound toward that specific area. This process happens in real time, allowing fast horn direction adjustment.
[0090] For example, while driving, if the camera 105 detects that the vehicle needs to honk on the left side, perhaps to alert pedestrians or other vehicles, and the right side is clear, then the first roller 203 unrolls the foam 205 sheet on the right side to cover it.
[0091] In case the ambient noise level in the surrounding area is very high, making it difficult for the sound of the horn to be heard clearly by pedestrians or other users in front of the vehicle (for example, during high-intensity rain), the horn direction control unit 201 automatically responds by increasing the intensity of the horn sound. This ensures that the alert is loud and clear, effectively alerting others nearby that the vehicle is giving a warning or indicating its presence.
[0092] Conversely, if the surrounding environment is extremely quiet, with little to no ambient noise, the horn emits sound at a lower intensity, because even low-volume sounds may be audible from a distance in silent areas, and excessive noise in such environments may be disruptive or unnecessary.
[0093] The horn direction control unit 201 is also connected to a database, which contains a list of locations (e.g., residential areas, hospitals, schools, highways) and the recommended horn volume levels for different times of the day (e.g., quieter at night). When the vehicle enters one of these locations, the system automatically refers to this database and modulates the horn intensity accordingly. This helps reduce noise pollution in sensitive areas and ensures compliance with local sound regulations.
[0094] Furthermore, the system includes a GPS (Global Positioning System) unit integrated within the vehicle, configured to continuously determine the real-time geographical location of the vehicle with high accuracy. This GPS unit functions by communicating with a network of satellites orbiting the Earth, typically requiring signals from at least four satellites to compute the vehicle's exact position through a process called trilateration.
[0095] Once the location is determined, the GPS coordinates are transmitted to the control unit, which interprets this spatial data in context with the database. When the vehicle enters a zone covered in the database, the GPS unit communicates the current location data to the control unit, which then compares it with the database entries.
[0096] A magnitude control module is executable by the processor from the integrated memory and configured to regulate the output sound level (magnitude) of the vehicle's horn. This magnitude control module functions in coordination with the GPS unit and a real-time clock to determine both the current location of the vehicle and the current time of day, which are critical parameters in deciding how loud the horn should be in a given situation. The goal of the magnitude control module is to ensure the horn operates within permissible sound limits, reducing unnecessary noise pollution while still delivering effective alerts to nearby entities.
[0097] The magnitude control module receives two primary inputs:
• Instantaneous location data from the GPS unit, and
• Time and duration of the day from an internal real-time clock or a network-synchronized time module.
[0098] In an embodiment of the present invention, the magnitude control module processes these inputs using a predefined rule set or lookup table, which contains a catalog of locations (such as school zones, hospital areas, residential neighborhoods, highways, etc.) along with corresponding horn magnitude thresholds defined for different times of the day (e.g., daytime vs. nighttime restrictions).
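The lookup table of paragraph [0098] can be sketched as a zone-and-period mapping. The zone names, day/night boundary, and decibel caps below are assumptions for illustration; the specification does not fix these values.

```python
# Illustrative sketch (assumed values): a rule table mapping (zone type,
# time period) to a horn magnitude cap in dB, per paragraph [0098].

MAGNITUDE_TABLE = {
    ("school_zone", "day"):   70,
    ("school_zone", "night"): 60,
    ("residential", "day"):   80,
    ("residential", "night"): 65,
    ("hospital",    "day"):   60,
    ("hospital",    "night"): 55,
    ("highway",     "day"):   110,
    ("highway",     "night"): 110,
}

def horn_limit_db(zone: str, hour: int) -> int:
    """Return the permitted horn magnitude for a zone at a given hour."""
    period = "day" if 6 <= hour < 22 else "night"  # assumed boundary
    # Zones absent from the table default to the assumed full legal limit.
    return MAGNITUDE_TABLE.get((zone, period), 110)
```

A residential zone at 23:00 therefore yields a reduced cap, while a highway permits full output, matching the examples in paragraph [0099].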
[0099] In an embodiment of the present invention, upon receiving the real-time data, the magnitude control module uses rule-based decision protocols to determine whether the current horn magnitude is suitable or needs adjustment. For example, if the vehicle is currently located in a residential area during night-time hours, the horn direction control unit 201 automatically reduces the horn's output to a lower intensity level. Conversely, in high-traffic areas during peak hours or on highways, the horn may be allowed to operate at its full power, provided it complies with legal standards.
[00100] Once the magnitude control module computes the appropriate horn intensity, it communicates this directive to the horn actuation circuit. In an embodiment of the present invention, digital signal processors (DSPs) may be used to precisely tune the frequency and volume of the horn output based on the control module’s instructions.
[00101] In another embodiment of the present invention, the magnitude control module may include wireless communication capabilities (such as 4G, LTE, or Wi-Fi) to access live regulatory databases or traffic authority updates, ensuring that newly declared silence zones or temporary restrictions are honoured in real-time.
[00102] In another embodiment of the present invention, the module may also be user-configurable through a dashboard interface, allowing customization of horn behaviour based on user preferences or professional use cases (e.g., emergency vehicles).
[00103] For instance, if the vehicle enters a school zone during school hours, the horn direction control unit 201 reduces the horn’s sound intensity to a pre-defined level to comply with noise regulations and minimize disturbance to the environment. Similarly, in industrial or highway zones, the horn output may be maintained or amplified as per safety requirements.
[00104] Initially, whenever the vehicle enters or is detected within these zones, the control unit directs the audio transducer to produce an alert warning informing the user that honking is prohibited in the current area. Despite this warning, if the user still attempts to honk repeatedly within the restricted zone, the horn direction control unit 201 overrides the user's action and automatically disables honking entirely for a certain period. This precaution ensures compliance with local regulations and helps maintain a quiet environment in sensitive zones, thereby promoting safety and respect for areas such as hospitals and schools.
[00105] In an embodiment of the present invention, the GPS unit may also be configured to interface with real-time mapping services or local traffic regulation servers to receive live updates on temporary silence zones, diversions, or restricted areas. This enables the control unit to respond not only to static geographical boundaries but also to dynamic or time-sensitive conditions, ensuring maximum adaptability and environmental compliance.
[00106] When the GPS unit detects the real-time location of the vehicle, the sound control module retrieves this location data and cross-references it with the database that contains a list of sensitive or regulated zones. These may include hospital areas, school zones, residential neighborhoods, or officially designated no-honking zones. Upon confirming that the vehicle is within one of these regions, the sound control module automatically initiates a command to regulate the horn’s sound amplitude accordingly.
[00107] This regulation may include reducing the horn's volume or completely disabling the horn. The internal circuitry of the sound control module may achieve this modulation by digitally attenuating the signal sent to the horn’s transducer or by adjusting the electrical power supplied to the horn. For example, if the vehicle enters a hospital zone, the module may restrict the horn volume to a minimum level or prevent horn activation altogether, thereby complying with noise-restriction regulations.
[00108] In addition to location-based modulation, the sound control module also utilizes GPS-based time data to implement time-sensitive horn control. This is especially relevant in residential areas, where horn usage may be prohibited or discouraged during night-time hours. The GPS timestamp allows the module to determine the current time of day with high precision. If the system identifies that the vehicle is in a quiet zone during a restricted time window, such as late night or early morning, the horn's amplitude is automatically adjusted to a lower level, or the sound is redirected internally to notify the user without external disturbance.
[00109] Furthermore, the sound control module interacts seamlessly with other onboard subsystems, such as the magnitude control module and the horn direction control unit 201, to ensure that both the intensity and the direction of the sound are appropriate for the detected location. This coordination enables context-aware, multi-dimensional control over how the horn sound is produced and projected.
[00110] The present invention includes a method for identifying and alerting an obstruction in the vicinity of the vehicle, as illustrated in Figure 3.
[00111] STEP I: DETECTING A PRESENCE OF AN OBSTACLE 301
The method begins by detecting the presence of any obstacle or element in the vicinity of the vehicle using a monitoring arrangement mounted on the vehicle. This monitoring arrangement includes front and rear sensing units 104, 101, each equipped with a camera 105, sound sensor 106, thermal sensor 102, or ultrasonic sensor 103 depending on their positioning. These sensors 106, 102, 103 continuously scan the surrounding environment. If any deviation or reflection pattern is captured, whether thermal, ultrasonic, or visual, the system flags the presence of an object or entity as an obstacle.
[00112] STEP II: DETERMINING A TYPE OF THE OBSTACLE 302
Once the presence of an obstacle is confirmed, the system initiates classification using a detection module. The input image or thermal data is analysed to determine whether the obstacle is a pedestrian, animal, cyclist, or non-living object (e.g., road barrier, other vehicle).
[00113] STEP III: SENSING A DIRECTION OF THE OBSTACLE 303
After identifying the type, the system determines the direction from which the obstacle is detected, whether in front, behind, or to the left or right of the vehicle. This is achieved by referencing sensor positioning and coverage angles. For example, the rear thermal and ultrasonic sensors 102, 103 detect direction based on the angle of signal return, while the camera 105 and microphone triangulate direction based on the visual frame and sound origin. The system maps this directional data relative to the vehicle’s body coordinates.
[00114] STEP IV: CALCULATING A DISTANCE OF THE OBSTACLE FROM THE VEHICLE 304
Using the ultrasonic sensor 103 (in the rear) or image-based depth estimation techniques (in the front), the distance of the obstacle is calculated. For ultrasonic methods, the time delay between emission and reflection of sound waves is measured. For the camera 105, the system processes the visual data to estimate how far the object is. This distance metric is important in deciding the urgency and loudness of the subsequent alert.
[00115] STEP V: SELECTING A PRE-DEFINED AUDIO TONE PRE-DESIGNATED TO THE DETECTED TYPE OF THE OBSTACLE 305
A library of pre-defined audio tones is stored in a database within the horn direction control unit 201, each tone mapped to a specific type of obstacle. For instance, a high-pitched tone may signify a pedestrian, while a lower tone may indicate a barrier or a vehicle. Upon determining the type of obstacle, the system retrieves the associated tone from the database, ensuring that the alert is semantically meaningful to the user or on-board systems.
[00116] STEP VI: FETCHING THE INSTANT LOCATION OF THE VEHICLE 306
Parallel to the tone selection process, the system also fetches the real-time geographic location of the vehicle using an integrated GPS unit. This location data is essential for contextualizing the alert and may be used to modulate the horn's magnitude or apply location-specific behavioural rules, such as reduced horn volume in school zones or residential areas.
[00117] STEP VII: OBTAINING THE INSTANT TIME OF THE DAY 307
The system then obtains the current time of day from an internal clock or network-synchronized time module. This time data, combined with the location information, allows the system to adapt horn and alert behavior based on regulatory constraints or user-defined settings. For example, late-night alerts in quiet zones may trigger softer tones to avoid noise pollution, whereas daytime alerts may be louder for higher effectiveness in traffic.
[00118] STEP VIII: PLAYING THE SELECTED TONE AT A VOLUME IN ACCORDANCE WITH THE CALCULATED DISTANCE, FETCHED LOCATION AND THE OBTAINED TIME OF THE DAY 308
The alert unit 107 (such as an audio transducer) installed inside the vehicle plays the selected tone. However, the volume of the tone is adjusted based on how near or far the obstacle is: closer obstacles trigger louder alerts to indicate urgency. The sound control module or magnitude control module handles this amplitude regulation dynamically by interpreting the distance input from previous steps and modulating the output accordingly.
[00119] STEP IX: DIRECTIONALLY LIMITING A HORN OF THE VEHICLE TO GENERATE AN ALERT IN THE SENSED DIRECTION 309
In addition to internal alert tones, the horn direction control unit 201 comprises first and second motorized rollers 203, 204 that channel the sound output of the horn in a particular direction (left, right, front, or rear) corresponding to where the obstacle is detected. This directional limitation prevents unnecessary sound spread, reduces noise pollution, and effectively notifies only the relevant subject or location.
[00120] In a preferred embodiment of the present invention, the system operates in the following manner. The monitoring arrangement is mounted over an external surface of the vehicle and is responsible for enabling identification of elements located in proximity to the vehicle. The monitoring arrangement includes the rear sensing unit 101 that is installed over the rearwards surface of the vehicle. The rear sensing unit 101 comprises the thermal sensor 102 and an ultrasonic sensor 103, which together detect both living and non-living elements near the rear of the vehicle, as well as calculate their distance. Complementing this, the front sensing unit 104 is mounted over the front surface of the vehicle. The front sensing unit 104 comprises the camera 105 that captures images to identify elements in front of the vehicle, and the sound sensor 106, specifically the microphone, that detects the magnitude of noise in the surrounding environment. The data collected from the rear sensing unit 101 and front sensing unit 104 is transmitted to the machine learning-based detection module. The detection module receives the sensed data and processes it to determine the type of element detected and the distance of the element from the vehicle. This processed information is critical in the generation of appropriate audible responses and directional signaling.
[00121] The alert unit 107, which is installed inside the vehicle, is used to generate an audible alert. The alert unit 107 is an audio transducer that produces sounds indicating the type and proximity of detected elements. The alert unit 107 is capable of modifying both the tone and intensity of the alert to reflect the type of element detected and its distance from the vehicle. In support of user customization, the user profile module is configured with the control unit to enable the creation of personalized user profiles. Each profile is equipped with the user-selected set of tones and intensities for the alert unit 107, allowing users to experience the tailored audible alert. To direct the horn sound appropriately based on obstacle location, the horn direction control unit 201 is installed over the horn of the vehicle. This unit controls the direction of sound of the horn in accordance with the direction of elements detected by the monitoring arrangement. The horn direction control unit 201 includes an enclosure 202 designed for installation over the horn, and the first and second rollers 203, 204 mounted on each lateral portion of the enclosure 202. Each first and second roller 203, 204 contains the spool of acoustic foam 205, which is selectively deployed to direct the sound in the specific direction. Furthermore, the horn direction control unit 201 is connected with the database that stores the list of locations along with respective suitable magnitude of horn based on the time of day.
[00122] For determining the vehicle’s real-time location, the GPS (global positioning system) unit is integrated into the system. The GPS unit detects the instant location of the vehicle and transmits this data for horn control regulation. This data is processed by the magnitude control module, which uses the instant location and the duration of the day to determine the suitable magnitude of the horn. This ensures that the horn operates within the acceptable limits set for specific areas and times. In addition, the sound control module is provided in operative communication with the control hub of the vehicle. The sound control module receives captured data from the front sensing unit 104, including noise level data from the sound sensor 106, to dynamically control the amplitude of the horn of the vehicle. This adaptive control helps optimize horn sound based on current environmental noise. The complete method for uniquely identifying an obstruction in the vicinity of the vehicle follows these steps: First, the system detects the presence of an obstacle in the vicinity of the vehicle. Then, it determines the type of the obstacle, senses the direction of the obstacle, and calculates the distance of the obstacle from the vehicle. Based on this data, the pre-defined audio tone is selected which uniquely identifies the type of obstacle. The selected tone is then played by the alert unit 107 at the volume that corresponds to the calculated distance. Simultaneously, the horn of the vehicle is directionally limited by the horn direction control unit 201 to generate the alert in the sensed direction.
[00123] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.
Claims:
1) A directional audible signalling system for vehicles, comprising:
i) a monitoring arrangement mounted over an external surface of a vehicle to enable identification of elements located in proximity to the vehicle;
ii) a machine learning-based detection module, executable by a processor from an integrated memory, to receive sensed data from the monitoring arrangement to determine a type of element detected and a distance of the element from the vehicle;
iii) an alert unit 107 installed inside the vehicle to generate an audible alert to notify the user regarding the detected element;
iv) a horn direction control unit 201 installed over the horn of the vehicle, to control a direction of sound of the horn in accordance with direction of elements detected by the monitoring arrangement; and
v) a magnitude control module, executable by the processor from the integrated memory, configured to receive an instant geographic location and time duration of the day to determine a suitable magnitude of the horn of the vehicle, to accordingly cause the magnitude of the horn to be regulated.
2) The system as claimed in claim 1, wherein the monitoring arrangement comprises a rear sensing unit 101 installed over a rearwards surface of the vehicle to detect presence of elements in proximity to the rear of the vehicle.
3) The system as claimed in claim 2, wherein the rear sensing unit 101 comprises a thermal sensor 102 and an ultrasonic sensor 103 to detect living and non-living elements and a distance of the elements from the rear of the vehicle.
4) The system as claimed in claim 1, wherein the monitoring arrangement further comprise a front sensing unit 104 mounted over a front surface of the vehicle to detect presence of elements in proximity to the front of the vehicle.
5) The system as claimed in claim 4, wherein the front sensing unit 104 comprises a camera 105 to capture images in front of the vehicle to detect elements in front of the vehicle, and a sound sensor 106 to detect magnitude of noise in vicinity of the vehicle.
6) The system as claimed in claim 1, wherein the alert unit 107 is configured to generate the audible alert of a tone and intensity corresponding to the type and distance of the detected element.
7) The system as claimed in claim 1, wherein the alert unit 107 is an audio transducer.
8) The system as claimed in claim 1, further comprising a user profile module configured with a control unit to facilitate personalised user profiles, each configured with a user-selected set of tones and intensities for the alert unit 107.
9) The system as claimed in claim 5, wherein the sound sensor 106 is a microphone.
10) The system as claimed in claim 1, further comprising a sound control module provided in operative communication with a control hub of the vehicle, configured to receive captured data from the front sensing unit 104 to accordingly control an amplitude of a horn of the vehicle.
11) The system as claimed in claim 1, wherein the horn direction control unit 201 comprises an enclosure 202 adapted to be installed over the horn of the vehicle, a first motorized roller 203 on a first inward side of the enclosure 202 and a second motorized roller 204 on a second inward side of the enclosure 202, acoustic foam 205 rolled around each of the first and second motorized roller 203, 204, wherein a processing unit is configured to, based on data from the monitoring arrangement, selectively activate one of the motorized first and second rollers 203, 204 to unroll the acoustic foam 205 to cover at least one side of the enclosure 202, thereby directing sound from the horn predominantly towards an opposing side.
12) The system as claimed in claim 1, further comprising a GPS (global positioning system) unit provided to detect an instant location of the vehicle to regulate the magnitude of the horn.
13) The system as claimed in claim 1, further comprising a database connected with the horn direction control unit 201 that contains a list of locations and respective suitable magnitude of horn as per time of day.
14) A method for identifying and alerting an obstruction in vicinity of a vehicle, comprising steps of:
i) detecting a presence of an obstacle in vicinity of the vehicle 301;
ii) determining a type of the obstacle 302;
iii) sensing a direction of the obstacle 303;
iv) calculating a distance of an obstacle from the vehicle 304;
v) selecting a pre-defined audio tone pre-designated to the detected type of the obstacle 305;
vi) fetching an instant location of the vehicle 306;
vii) obtaining an instant time of the day 307;
viii) playing the selected tone at a volume in accordance with the calculated distance, fetched location and the obtained time of the day 308; and
ix) directionally limiting a horn of the vehicle to generate an alert in the sensed direction 309.