Abstract: The present disclosure provides a computer-implemented method (300) for delivering personalized in-cabin alerts in a vehicle. The method includes capturing (304) an image of a driver using a capturing device (115), analyzing (306) the captured image using a face analysis module (113) to identify the driver, retrieving (314) a language preference associated with the identified driver from a driver database (106), setting (316) an alert system language according to the retrieved language preference, and delivering in-cabin alerts in the set alert system language using an in-cabin alert sub-system (131). The method enables real-time adaptation of alert language based on driver identification, enhancing communication and safety in multi-driver vehicles. (Fig. 1)
Description:
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patents Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION: “METHOD AND SYSTEM FOR PROVIDING PERSONALIZED MULTILINGUAL IN-CABIN ALERTS”
2. APPLICANTS:
(A) NAME : NERVANIK AI LABS PVT. LTD.
(B) NATIONALITY : INDIAN
(C) ADDRESS : A – 1111, WORLD TRADE TOWER
OFF. S G ROAD
B/H SKODA SHOWROOM, MAKARBA
AHMEDABAD 380 051
GUJARAT INDIA
COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF INVENTION
[01] The present disclosure relates to vehicle communication systems, and more particularly to a personalized multilingual in-cabin alert system tailored to the driver's language preference for enhanced communication and safety. The invention integrates facial recognition technology for driver identification with a language-specific alert system to deliver in-cabin notifications in the driver's preferred language.
BACKGROUND
[02] The evolution of vehicle communication systems has been a significant aspect of automotive technology development over the past few decades. Initially, in-cabin alerts were limited to simple auditory cues such as beeps or chimes, often accompanied by basic visual indicators on the dashboard. As technology progressed, these systems became more sophisticated, incorporating voice alerts and digital displays to convey information to drivers.
[03] In recent years, the automotive industry has witnessed a rapid advancement in vehicle communication technologies. Modern vehicles are equipped with complex infotainment systems that integrate multiple functions, including navigation, entertainment, and safety alerts. These systems typically utilize a combination of visual displays, audio notifications, and haptic feedback to communicate with the driver. The integration of artificial intelligence and machine learning models has further enhanced the capabilities of these systems, allowing for more context-aware and personalized interactions.
[04] Current industry practices in vehicle communication systems focus on providing standardized alerts and notifications across a wide range of vehicles. Many manufacturers offer multilingual support in their infotainment systems, allowing users to select their preferred language from a predefined list. However, this approach often requires manual configuration by the driver or relies on the vehicle's regional settings, which may not always align with the driver's language preference.
[05] Statistical data from the automotive industry indicates that a significant portion of commercial fleet vehicles and rental cars are operated by drivers from diverse linguistic backgrounds. According to a 2022 report by the International Transport Forum, approximately 30% of commercial truck drivers in Europe work in countries where they are not native speakers of the local language. This linguistic diversity presents challenges in ensuring effective communication of critical safety information and operational instructions.
[06] Existing technologies in the field of in-cabin alerts typically rely on preset language configurations or manual selection by the driver. While these systems offer some level of customization, they often fall short in scenarios where multiple drivers share a vehicle or when a driver operates different vehicles within a fleet. The manual nature of language selection can lead to inconsistencies and potential safety risks if a driver forgets to adjust the settings or is unable to navigate the interface in an unfamiliar language.
[07] Prior art in this domain includes systems that utilize driver profiles stored in the vehicle's onboard computer. These profiles may contain language preferences along with other personalized settings. However, such systems are limited by their reliance on manual profile selection or key fob recognition, which may not always accurately identify the current driver, especially in shared vehicle environments.
[08] Another approach found in existing technologies involves the use of smartphone integration to import driver preferences into the vehicle's infotainment system. While this method offers some level of personalization, it is dependent on the driver's device being properly connected and configured, which may not always be feasible or reliable in all driving scenarios.
[09] Voice recognition technologies have also been employed in some vehicle communication systems to identify the driver and adjust settings accordingly. However, these systems can be prone to errors in noisy environments or when dealing with accents and dialects, potentially leading to misidentification and incorrect language selection.
[10] The limitations of current systems become particularly apparent in fleet management scenarios, where vehicles are operated by multiple drivers with diverse language backgrounds. The inability to quickly and accurately adapt to each driver's language preference can result in miscommunication, reduced efficiency, and potential safety hazards.
[11] Furthermore, existing solutions often lack real-time adaptation capabilities. They may not be able to seamlessly switch between language preferences when a new driver takes over the vehicle mid-journey, a common occurrence in long-haul trucking or car-sharing services. This gap in functionality can lead to situations where critical alerts or instructions are delivered in a language not understood by the current driver.
[12] Another challenge with conventional systems is their limited integration with advanced driver assistance systems (ADAS) and emerging autonomous vehicle technologies. As vehicles become more sophisticated and take on more driving tasks, the need for clear, personalized communication between the vehicle and the driver becomes increasingly critical.
[13] The technical limitations of current in-cabin alert systems also extend to their scalability and updateability. Many existing solutions are hardcoded into the vehicle's firmware, making it difficult to add new languages or update alert content without significant software overhauls or recalls.
[14] Hence, there is a need for a personalized multilingual in-cabin alert system tailored to the driver's language preference for enhanced communication and safety.
SUMMARY
[15] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description.
[16] The present disclosure provides a personalized multilingual in-cabin alert system for vehicles that delivers notifications in the driver's preferred language. The system integrates facial recognition technology with language mapping to identify drivers and customize alerts accordingly. Key components include a driver management system for storing driver profiles and language preferences, a facial recognition sub-system for driver identification, a language mapping module for retrieving language preferences, an in-cabin alert sub-system for delivering language-specific notifications, and a real-time integration module for monitoring driver changes. The present invention addresses limitations of conventional alert systems by automatically adapting to different drivers without manual intervention. The system enhances safety and communication effectiveness, particularly in multi-lingual environments and fleet operations with diverse driver populations. By ensuring alerts are delivered in a language the driver readily understands, the system minimizes miscommunication risks and improves response times to critical information, ultimately creating a more user-friendly driving experience across linguistic backgrounds.
[17] By combining these components into a cohesive system, the present disclosure offers a novel solution that enhances vehicle communication, improves safety, and provides a more user-friendly driving experience across diverse linguistic backgrounds.
OBJECTIVE OF THE INVENTION
[18] The main objective of the present disclosure is to provide a personalized multilingual in-cabin alert system for vehicles that delivers notifications in the driver's preferred language, enhancing communication and safety.
[19] Another objective of the present disclosure is to integrate facial recognition technology for accurate and efficient driver identification within the vehicle environment.
[20] Another objective of the present disclosure is to implement a real-time language mapping system that retrieves and applies the identified driver's language preferences for in-cabin alerts.
[21] Another objective of the present disclosure is to develop a dynamic alert system capable of adapting to driver changes without manual intervention or reconfiguration.
[22] Another objective of the present disclosure is to create a centralized driver management system that facilitates easy updates and modifications to driver profiles, including language preferences.
[23] Yet another objective of the present disclosure is to improve road safety by minimizing miscommunication due to language barriers in multi-lingual or fleet operation environments.
BRIEF DESCRIPTION OF FIGURES
[24] Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
[25] Fig. 1 provides a block diagram of a personalized multilingual in-cabin alert system, illustrating the interconnected components and data flow between subsystems.
[26] Fig. 2 presents a flowchart detailing the driver detail registration and management process.
[27] Fig. 3 illustrates a flowchart of a method for implementing the personalized multilingual in-cabin alert system.
[28] Common reference numerals are used throughout the figures to indicate similar features.
DETAILED DESCRIPTION
[29] The systems and methods described herein may be implemented in any form of computing or electronic device. The term "computer," as used herein, encompasses any device with processing capabilities sufficient to execute instructions. This includes, but is not limited to, personal computers, servers, mobile devices, personal digital assistants, and similar devices.
[30] Such devices may include one or more processors, such as microprocessors, controllers, or other suitable types of processors, capable of executing instructions to control the device's operation. For example, in some implementations using a system-on-a-chip architecture, the processors may include fixed-function blocks (hardware accelerators) that perform parts of the method in hardware rather than software or firmware. Platform software, such as an operating system or similar, may be installed to support the execution of application software.
[31] The described functionality may be implemented in hardware, software, or any combination thereof. When implemented in software, the instructions or code can be stored on or transmitted via a computer-readable medium. Such media include computer-readable storage media, which may be volatile or non-volatile, removable or non-removable, and implemented using any technology for storing information such as program code, data structures, or other data. Examples include, but are not limited to, ROM, EEPROM, RAM, magnetic or optical storage, flash memory, or any other storage medium accessible by a computer. Communication media that facilitate the transfer of software, such as via coaxial cables, fiber optics, DSL, or wireless signals, may also be considered part of computer-readable media.
[32] The computing device may operate as a standalone system or as part of a distributed system, where tasks are performed collectively by multiple devices connected via a network. Such devices may communicate over a network connection to perform the described functionality. For instance, software may be stored on a remote computer and accessed by a local device, which may download and execute portions of the software as needed. Similarly, some instructions may be processed locally, while others may execute on remote systems or networks. In some cases, the computing device may be remote and accessible via a communication interface. Storage of program instructions may also be distributed across a network or stored in a combination of local and remote locations. For example, software may reside on a remote computer and be accessed by a local terminal, or the system may execute some software locally while other components operate on remote servers.
[33] As used herein, the term "system" may refer to a collection of interconnected components, devices, or modules that work together to perform specific functions or achieve a common goal. A system may include hardware, software, or a combination thereof.
[34] As used herein, the term "module" may refer to a functional unit or component of a larger system, which may be implemented in hardware, software, firmware, or a combination thereof. A module may perform specific tasks or operations within the overall system.
[35] As used herein, the term "sub-system" may refer to a smaller, self-contained part of a larger system that performs a specific function or set of functions. A sub-system may comprise multiple modules working together to achieve its designated purpose within the overall system architecture.
[36] As used herein, the term "network" may refer to a system of interconnected devices, computers, or nodes that can communicate and exchange data with each other. Networks may include various types of connections, such as wired or wireless, and may operate on different scales, from local to global.
[37] As used herein, the term "interface" may refer to a point of interaction between two or more components, systems, or entities. An interface may be implemented in hardware or software and may facilitate communication, data exchange, or user interaction with a system.
[38] As used herein, the term "database" may refer to an organized collection of data stored and accessed electronically. A database may be structured to facilitate efficient retrieval, updating, and management of the stored information.
[39] As used herein, the term "data" may refer to information in a form that can be processed, stored, or transmitted by a computer system. Data may include various types of information, such as text, numbers, images, or other forms of digital content.
[40] The Personalized multilingual in-cabin alert system (100) is designed to deliver personalized in-cabin alerts in a driver's preferred language. As illustrated in fig. 1, the system comprises several interconnected components that work together to provide a tailored communication experience. The system utilizes facial recognition technology combined with language mapping to ensure that vehicle alerts, warnings, and notifications are delivered in a language that the driver can readily understand, enhancing safety and improving the overall driving experience.
[41] A driver management unit (101) serves as the central hub for storing and managing driver information, including language preferences. The driver management unit (101) interfaces with a driver identification unit (111), which is responsible for recognizing and identifying the driver upon entering the vehicle. The unit maintains comprehensive driver profiles that include language preferences and other personalization parameters, employing encryption protocols to protect sensitive driver information.
[42] Once the driver is identified, a language mapping module (121) retrieves the driver's preferred language from the stored data. This information is then utilized by an in-cabin alert sub-system (131), which generates and delivers alerts, notifications, and instructions in the driver's chosen language through the vehicle's audio system. The language mapping module (121) incorporates a linguistic model capable of handling multiple language variants and maintains a mapping table that correlates driver IDs with specific language codes.
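The mapping-table lookup described above may be sketched as follows. The table layout, the driver IDs, and the English fallback are illustrative assumptions for demonstration only; the specification does not prescribe a concrete format.

```python
# Illustrative sketch of the language mapping module (121).
# Driver IDs and language codes below are hypothetical examples.

# Mapping table correlating driver IDs with ISO 639-1 language codes.
LANGUAGE_MAP = {
    "DRV001": "hi",  # Hindi
    "DRV002": "gu",  # Gujarati
    "DRV003": "de",  # German
}

def get_language_preference(driver_id, default="en"):
    """Return the stored language code for an identified driver,
    falling back to a default when no preference is on record."""
    return LANGUAGE_MAP.get(driver_id, default)
```

A lookup for a registered driver returns the stored code, while an unrecognized ID falls back to the assumed default.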
[43] To ensure continuous accuracy and adaptability, a real-time integration sub-system (141) monitors the vehicle's status and detects any changes in the driver. This sub-system allows the personalized multilingual in-cabin alert system (100) to dynamically adjust its language settings based on real-time information. The real-time integration sub-system (141) operates at a high polling frequency, sampling driver presence and engine status multiple times per second.
[44] The Personalized multilingual in-cabin alert system (100) addresses the technical challenge of providing clear communication to drivers of various linguistic backgrounds. By automatically identifying the driver and delivering alerts in their preferred language, the system enhances safety, improves driver comprehension, and reduces the risk of miscommunication due to language barriers.
[45] The driver management unit (101) of the personalized multilingual in-cabin alert system (100) comprises a web interface (103) and a driver database (106), as illustrated in fig. 1. The driver management unit (101) facilitates the registration and management of driver information, including language preferences. The unit is designed with a distributed architecture that allows for both local and remote data access, making it suitable for individual vehicle implementations as well as fleet management scenarios.
[46] A web interface (103) provides a user-friendly platform for inputting and updating driver details. The web interface (103) allows authorized personnel to access and modify driver information through a web browser, enabling efficient management of driver profiles from various locations. The interface supports multiple authentication methods and includes batch processing capabilities for efficiently handling multiple driver records simultaneously.
[47] A driver database (106) stores the driver information entered through the web interface (103). The driver database (106) may be implemented as a cloud-based or on-premise solution, depending on the specific requirements of the system deployment. The database employs a relational structure optimized for quick retrieval operations, with indexed fields for driver identification parameters to minimize query response times.
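A relational layout of the kind described for the driver database (106) might be sketched with an in-memory SQLite table. The column names are assumptions for illustration; the primary key on the driver identification parameter gives the indexed fast-retrieval path mentioned above.

```python
# Illustrative sketch of the driver database (106).  Column names are
# hypothetical; any relational store (cloud-based or on-premise) would do.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE drivers (
        driver_id      TEXT PRIMARY KEY,  -- indexed identification parameter
        name           TEXT NOT NULL,
        language_code  TEXT NOT NULL,     -- driver's language preference
        face_signature BLOB               -- stored facial signature
    )
""")
conn.execute(
    "INSERT INTO drivers (driver_id, name, language_code) VALUES (?, ?, ?)",
    ("DRV001", "A. Patel", "gu"),
)
# Retrieval of the language preference keyed on the identified driver.
row = conn.execute(
    "SELECT language_code FROM drivers WHERE driver_id = ?", ("DRV001",)
).fetchone()
```

Because `driver_id` is the primary key, SQLite maintains an index on it automatically, so the language lookup performed after identification is a single indexed read.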
[48] The driver management unit (101) interacts with other components of the personalized multilingual in-cabin alert system (100), including the driver identification unit (111), the language mapping module (121), and the in-cabin alert sub-system (131). The driver information stored in the driver database (106) is utilized by these components to provide personalized in-cabin alerts based on the identified driver's language preferences.
[49] The web interface (103) and driver database (106) work in conjunction to maintain up-to-date driver profiles, ensuring that the personalized multilingual in-cabin alert system (100) has access to accurate and current information for each registered driver. This integration enables the real-time integration sub-system (141) to adapt the alert language dynamically based on the most recent driver data.
[50] The driver identification unit (111) of the personalized multilingual in-cabin alert system (100) comprises a face analysis module (113) and a capturing device (115), as illustrated in fig. 1. The driver identification unit (111) is responsible for recognizing and identifying the driver upon entering the vehicle. This unit represents the system's primary interface with the physical world, converting visual information into digital data that can be processed by subsequent system components.
[51] A capturing device (115) is utilized to capture images of the driver's face. The capturing device (115) may be a camera positioned within the vehicle cabin to obtain clear and consistent images of the driver. The camera may be strategically placed to capture frontal views of the driver's face, ensuring optimal image quality for subsequent analysis. The capturing device (115) typically consists of a high-resolution digital camera with wide dynamic range technology to handle challenging lighting conditions and infrared capabilities for reliable operation in low-light environments.
[52] A face analysis module (113) processes the images captured by the capturing device (115). The face analysis module (113) employs a facial recognition model to analyze the driver's facial features and compare them against stored driver profiles in the driver database (106). The facial recognition process involves extracting key facial landmarks, measuring distances between features, and generating a unique facial signature for each driver. The module utilizes deep convolutional neural networks trained on diverse facial datasets to ensure recognition accuracy across different ethnicities, ages, and genders.
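The matching step performed by the face analysis module (113) can be sketched as a nearest-neighbour comparison of facial signatures. The short feature vectors and the distance threshold below are illustrative assumptions; a production system would use high-dimensional embeddings from the neural network described above.

```python
# Illustrative sketch of signature matching in the face analysis
# module (113).  Vector length and threshold are hypothetical.
import math

def match_driver(captured, profiles, threshold=0.6):
    """Return the driver ID whose stored signature is closest to the
    captured signature, or None when no profile is within the threshold."""
    best_id, best_dist = None, threshold
    for driver_id, stored in profiles.items():
        dist = math.dist(captured, stored)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = driver_id, dist
    return best_id

# Hypothetical stored profiles keyed by driver ID.
profiles = {"DRV001": [0.1, 0.9, 0.3], "DRV002": [0.8, 0.2, 0.5]}
```

A captured signature close to a stored profile resolves to that driver; a signature far from every profile yields no match, which a real deployment might route to a fallback such as manual selection.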
[53] The driver identification unit (111) works in conjunction with other components of the personalized multilingual in-cabin alert system (100). Once the driver is identified, the driver identification unit (111) communicates this information to the language mapping module (121), which retrieves the corresponding language preference from the driver database (106). This information is then utilized by the in-cabin alert sub-system (131) to deliver alerts in the driver's preferred language.
[54] The real-time integration sub-system (141) interacts with the driver identification unit (111) to monitor for any changes in the driver during vehicle operation. If a new driver is detected, the face analysis module (113) and capturing device (115) work together to identify the new driver, allowing the system to update the language settings accordingly.
[55] The integration of the face analysis module (113) and capturing device (115) within the driver identification unit (111) enables the personalized multilingual in-cabin alert system (100) to provide a seamless and automated driver identification process. This automation enhances the system's ability to deliver personalized in-cabin alerts without requiring manual input from the driver, thereby improving both convenience and safety.
[56] The language mapping module (121) and in-cabin alert sub-system (131) of the personalized multilingual in-cabin alert system (100) work together to deliver personalized alerts to drivers in their preferred language. As illustrated in fig. 1, these components interact with other modules to provide a seamless and efficient communication system within the vehicle.
[57] The language mapping module (121) retrieves the language preferences of the identified driver from the driver database (106). Once the driver identification unit (111) has successfully recognized the driver, the language mapping module (121) queries the driver database (106) to obtain the corresponding language setting. This process ensures that the system has access to the most up-to-date language preference for each driver.
[58] The in-cabin alert sub-system (131) utilizes the language preference information provided by the language mapping module (121) to generate and deliver alerts in the driver's chosen language. The in-cabin alert sub-system (131) comprises a text-to-speech module (133) and an in-vehicle audio playback device (135), which work in tandem to produce audible alerts.
[59] A text-to-speech module (133) converts textual alert messages into spoken language. The text-to-speech module (133) supports multiple languages and accents, allowing for accurate pronunciation and natural-sounding speech output. Said module employs advanced linguistic models to ensure proper intonation and emphasis in the generated speech.
[60] The in-vehicle audio playback device (135) is responsible for playing the audio alerts generated by the text-to-speech module (133). The in-vehicle audio playback device (135) may include, but is not limited to, the vehicle's own audio system, an external or separate audio device, and additional speakers strategically placed within the vehicle cabin to ensure clear and audible delivery of alerts to the driver. The system implements intelligent audio mixing that temporarily reduces the volume of entertainment content when delivering critical alerts.
[61] The in-cabin alert sub-system (131) may utilize pre-recorded audio messages in addition to the text-to-speech module (133). These pre-recorded messages may be stored in various languages and accessed based on the driver's language preference. The system may select between text-to-speech generation and pre-recorded audio playback depending on factors such as alert type, complexity, or system configuration.
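The selection between pre-recorded playback and text-to-speech generation described above can be sketched as a simple dispatch. The alert categories, file paths, and return-value format are assumptions for illustration; the specification leaves the selection criteria (alert type, complexity, configuration) open.

```python
# Illustrative sketch of the in-cabin alert sub-system (131) choosing
# between pre-recorded audio and text-to-speech.  Alert names and file
# paths below are hypothetical.

PRERECORDED = {
    ("seatbelt", "hi"): "audio/seatbelt_hi.wav",
    ("seatbelt", "en"): "audio/seatbelt_en.wav",
}

def deliver_alert(alert_type, message, language):
    """Prefer a pre-recorded clip when one exists for this alert type
    and language; otherwise fall back to text-to-speech generation."""
    clip = PRERECORDED.get((alert_type, language))
    if clip is not None:
        return ("playback", clip)            # in-vehicle audio playback (135)
    return ("tts", message, language)        # text-to-speech module (133)
```

Under this sketch, a common safety alert with a recorded clip in the driver's language plays directly, while any uncovered combination is synthesized on the fly.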
[62] The real-time integration sub-system (141) continuously monitors for changes in the driver or vehicle status. If a new driver is detected or the engine is restarted, the real-time integration sub-system (141) triggers the driver identification unit (111) to re-identify the driver. This process ensures that the language mapping module (121) and in-cabin alert sub-system (131) always have access to the correct language preferences for the current driver.
[63] By integrating the language mapping module (121) with the in-cabin alert sub-system (131), the personalized multilingual in-cabin alert system (100) provides a robust solution for delivering personalized, language-specific alerts to drivers. This integration enhances communication effectiveness and improves overall safety by ensuring that critical information is conveyed in a language that the driver can readily understand.
[64] The real-time integration sub-system (141) of the personalized multilingual in-cabin alert system (100) ensures continuous monitoring and adaptation of the system to changes in drivers or vehicle status. As illustrated in fig. 1, the real-time integration sub-system (141) comprises an engine status module (143) and a driver change detection module (145).
[65] An engine status module (143) monitors the operational state of the vehicle's engine. The engine status module (143) detects when the engine is started, stopped, or restarted. This information is crucial for determining when to initiate or reinitiate the driver identification process. The engine status module (143) interfaces with the vehicle's engine control unit (ECU) or onboard diagnostic system (OBD) to obtain accurate and immediate status information.
[66] A driver change detection module (145) works in conjunction with the driver identification unit (111) to detect any changes in the driver during vehicle operation. The driver change detection module (145) utilizes data from the capturing device (115) and face analysis module (113) to continuously analyze the driver's facial features and compare them to the initially identified driver. The module employs a change detection model that can identify potential driver switches even under challenging conditions.
[67] The real-time integration sub-system (141) communicates bidirectionally with the driver identification unit (111), as shown in fig. 1. When the engine status module (143) detects that the engine has been started or restarted, the real-time integration sub-system (141) signals the driver identification unit (111) to initiate the driver identification process. Similarly, if the driver change detection module (145) detects a potential change in the driver, the real-time integration sub-system (141) triggers a re-identification process through the driver identification unit (111).
[68] Upon detection of a new driver or engine restart, the real-time integration sub-system (141) initiates a cascade of actions within the personalized multilingual in-cabin alert system (100). The driver identification unit (111) captures and analyzes the driver's face, the language mapping module (121) retrieves the corresponding language preference from the driver database (106), and the in-cabin alert sub-system (131) adjusts the alert language accordingly.
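The cascade of actions described in the preceding paragraph can be sketched as an event handler. The event names and callback parameters are hypothetical; the specification describes the sequence of sub-system interactions, not an API.

```python
# Illustrative sketch of the re-identification cascade triggered by the
# real-time integration sub-system (141).  Event names and callbacks
# are hypothetical stand-ins for the units shown in fig. 1.

def on_trigger(event, identify_driver, get_language, set_alert_language):
    """Run the cascade for an engine (re)start or a driver change."""
    if event not in ("engine_start", "engine_restart", "driver_change"):
        return None
    driver_id = identify_driver()       # driver identification unit (111)
    language = get_language(driver_id)  # language mapping module (121)
    set_alert_language(language)        # in-cabin alert sub-system (131)
    return driver_id, language
```

Any other vehicle event is ignored, so the alert language changes only when the engine status module (143) or driver change detection module (145) raises one of the triggering events.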
[69] The real-time integration sub-system (141) integrates with vehicle telematics and GPS systems for dynamic alert generation. This integration allows the personalized multilingual in-cabin alert system (100) to generate context-aware alerts based on the vehicle's location, speed, and other relevant parameters. For example, the system may generate speed limit warnings or navigation instructions in the driver's preferred language based on real-time GPS data.
[70] By continuously monitoring engine status and driver changes, and integrating with vehicle telematics and GPS systems, the real-time integration sub-system (141) ensures that the personalized multilingual in-cabin alert system (100) remains responsive and adaptive to changing conditions throughout the vehicle's operation. This real-time adaptation enhances the system's ability to provide relevant and timely alerts in the appropriate language, thereby improving driver communication and overall safety.
[71] The Personalized multilingual in-cabin alert system (100) incorporates a driver management process, as illustrated in fig. 2. This process facilitates the registration and management of driver information, including language preferences, which are essential for delivering personalized in-cabin alerts.
[72] The process begins at a step (200), where the system initiates the driver management procedure. From step (200), the process advances to a step (202), where a user accesses a web-based driver management module. This module may be implemented through the web interface (103) of the driver management unit (101), providing a user-friendly platform for inputting and updating driver details.
[73] Following step (202), the process flows to a decision (204), which determines whether the driver being registered is a new driver or an existing driver whose information needs to be updated through the web interface (103). If the decision (204) determines that the driver is new, the process moves to a step (208), where a new driver profile is created in the driver database (106). Alternatively, if the driver's profile already exists in the database, the process proceeds to a step (206), where the existing driver profile is retrieved for updating. The web interface (103) of the driver management unit (101) is configured for capturing basic details of the new driver including, but not limited to, name, employee ID, driving experience, legally valid driving license, preferred vehicle type, language preference, and so on. The profile creation is finalized by face registration using the capturing device (115) and face analysis module (113). The captured data is stored in the driver database (106).
[74] After either creating a new profile or retrieving an existing one, the process converges at a step (210). In step (210), the user enters or updates driver details, including but not limited to the driver's name, identification number, and preferred language for in-cabin alerts. This information forms the basis for the personalized alert system's functionality.
[75] Once the driver details have been entered, the process moves to a decision (212). The decision (212) checks whether all required driver information has been provided and whether the entered data is valid. If the decision (212) determines that the information is incomplete or invalid, the process loops back to step (210), prompting the user to complete or correct the driver details.
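The completeness check performed at decision (212) may be sketched as a required-field validation. The field list below is an assumption drawn from the details named at step (210); a deployment would extend it with its own mandatory parameters.

```python
# Illustrative sketch of the validation at decision (212).  The
# required-field list is a hypothetical subset of the details captured
# at step (210).

REQUIRED_FIELDS = ("name", "identification_number", "language_preference")

def validate_driver_details(details):
    """Return the required fields that are missing or empty.  An empty
    result means the record may proceed to storage at step (214);
    otherwise the process loops back to step (210) for correction."""
    return [field for field in REQUIRED_FIELDS
            if not str(details.get(field, "")).strip()]
```

A fully populated record passes, while an incomplete one returns the fields the user must still supply, mirroring the loop back to step (210).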
[76] If the decision (212) confirms that all required information is complete and valid, the process advances to a step (214). In step (214), the system stores the driver data in the driver database (106), which may be implemented as a cloud-based or on-premise database solution.
[77] The driver management process concludes with a step (216), where the system confirms the successful storage of the driver data. This confirmation may be displayed to the user through the web interface (103), providing assurance that the driver information has been properly recorded and is ready for use by the Personalized multilingual in-cabin alert system (100).
[78] By implementing this comprehensive driver management process, the Personalized multilingual in-cabin alert system (100) ensures that accurate and up-to-date driver information, including language preferences, is available for use by the driver identification unit (111), language mapping module (121), and in-cabin alert sub-system (131). This process forms the foundation for delivering personalized, language-specific alerts to drivers, enhancing communication effectiveness and improving overall safety in the vehicle.
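The completeness check at decision (212), the loop back to step (210), and the storage at step (214) can be sketched as follows. This is a minimal illustration only: an in-memory dictionary stands in for the driver database (106), and the field names (`employee_id`, `language_preference`, and so on) are hypothetical, not prescribed by the specification.

```python
# Sketch of decision (212), step (214), and step (216).
# The in-memory dict and field names are illustrative assumptions;
# the driver database (106) may be cloud-based or on-premise.

REQUIRED_FIELDS = ("name", "employee_id", "license_number", "language_preference")

driver_database = {}  # stand-in for the driver database (106)

def validate_profile(profile: dict) -> bool:
    """Decision (212): all required fields present and non-empty."""
    return all(profile.get(field) for field in REQUIRED_FIELDS)

def store_profile(profile: dict) -> bool:
    """Steps (210)-(216): reject invalid input, else persist and confirm."""
    if not validate_profile(profile):
        return False  # loop back to step (210) for correction
    driver_database[profile["employee_id"]] = profile
    return True  # step (216): confirm successful storage
```

An invalid submission simply returns `False`, mirroring the loop back to step (210) until the user supplies complete, valid details.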
[79] The Personalized multilingual in-cabin alert system (100) operates according to a method (300), as illustrated in fig. 3. The method (300) encompasses a series of steps that enable the system to provide personalized in-cabin alerts based on driver identification and language preferences.
[80] A step (302) initiates the method (300) when a driver enters the vehicle and starts the engine. This action triggers the subsequent steps in the process. The system detects the engine start event through the engine status module (143), which monitors the vehicle's electrical system for ignition signals.
[81] Following step (302), a step (304) activates the capturing device (115) to capture an image of the driver's face. The captured image serves as input for the facial recognition process. The capturing device (115) automatically adjusts exposure settings based on ambient lighting conditions to ensure optimal image quality.
[82] In a step (306), the face analysis module (113) analyzes the facial features of the captured image. This analysis involves extracting key facial landmarks and generating a unique facial signature for comparison against stored driver profiles. The facial analysis process employs a multi-stage pipeline that begins with face detection to locate the driver's face within the captured image.
[83] A step (308) determines whether the driver has been successfully identified based on the facial analysis. If the driver is not identified, the method (300) loops back to step (304) for another capture attempt. This loop ensures that the system continues to attempt driver identification until successful.
[84] Upon successful driver identification, a step (310) activates the driver change detection module (145). The driver change detection module (145) begins monitoring for any changes in the driver during vehicle operation. This continuous monitoring process establishes a baseline from the initially identified driver and implements various detection strategies to identify potential driver changes.
[85] A step (312) determines if there has been a change in the driver. If no change is detected, the personalized multilingual in-cabin alert system (100) continues to use the existing language preference for the identified driver. This decision point represents a continuous evaluation process rather than a one-time check, with the system repeatedly assessing potential driver changes throughout the vehicle's operation.
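One possible detection strategy for steps (310)–(312) is to retain the baseline signature from initial identification and flag a change only after several consecutive low-similarity samples. The class below is a structural sketch under that assumption; the threshold and the consecutive-sample count (`patience`) are hypothetical parameters, not values given in the specification.

```python
# Minimal sketch of the driver change detection module (145),
# steps (310)-(312). Threshold and patience are assumed parameters.

class DriverChangeDetector:
    def __init__(self, baseline, similarity_fn, threshold=0.8, patience=3):
        self.baseline = baseline          # signature from step (306)
        self.similarity = similarity_fn   # e.g. cosine similarity
        self.threshold = threshold
        self.patience = patience          # consecutive misses before flagging
        self.misses = 0

    def update(self, current_signature) -> bool:
        """Return True when a driver change is detected (decision 312)."""
        if self.similarity(current_signature, self.baseline) >= self.threshold:
            self.misses = 0               # baseline driver still present
            return False
        self.misses += 1
        return self.misses >= self.patience
```

Requiring several consecutive misses keeps a single occluded or poorly lit frame from triggering a spurious re-identification during normal driving.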
[86] If a change in driver is detected, a step (314) retrieves the new driver's language preference from the driver database (106). This step ensures that the system adapts to the language needs of the new driver. The language preference retrieval process queries the driver database (106) using the newly identified driver's unique identifier.
[87] In a step (316), the in-cabin alert sub-system (131) sets the alert system language according to the retrieved preference. This adjustment allows for the delivery of alerts in the driver's preferred language. The language setting process configures multiple system components to ensure consistent language presentation across all alert types.
[88] A step (318) activates the engine status module (143) to monitor changes in the engine status. This continuous monitoring enables the system to detect when the vehicle is turned off or restarted. The engine status module (143) interfaces directly with the vehicle's engine control unit (ECU) or onboard diagnostic system (OBD) to obtain accurate and immediate status information.
[89] A step (320) checks if the engine has been restarted. If a restart is detected, the method (300) returns to step (304) to reinitiate the driver identification process. This loop ensures that the system adapts to potential driver changes during vehicle stops.
[90] If no engine restart is detected, a step (322) continues the monitoring process using the driver change detection module (145). This step maintains ongoing surveillance for any changes in the driver, allowing for real-time adaptation of the alert system.
[91] The method (300) demonstrates the adaptive nature of the Personalized multilingual in-cabin alert system (100), continuously monitoring for changes in drivers and engine status to provide appropriate language-specific alerts throughout the vehicle's operation. This operational method integrates the various system components into a cohesive workflow that maintains accurate driver identification and language settings under diverse driving scenarios.
[92] To illustrate the practical application of the personalized multilingual in-cabin alert system (100), consider the following scenario involving a commercial delivery vehicle operated by multiple drivers throughout the day. The vehicle belongs to an interstate logistics company in India with a diverse workforce speaking different native languages.
[93] The fleet manager has previously registered all authorized drivers through the web interface (103), creating profiles in the driver database (106) that include their facial data and language preferences. Driver 1 has Hindi set as her preferred language, while driver 2 has English configured as his preference.
[94] At 6:00 am, driver 1 begins her shift by entering the vehicle and starting the engine. The engine status module (143) detects the engine start event and signals the driver identification unit (111) to initiate the identification process. The capturing device (115) activates and captures driver 1's face, which is then analyzed by the face analysis module (113). The system successfully identifies driver 1 by matching her facial features against the stored profile in the driver database (106).
[95] Upon identification, the language mapping module (121) retrieves driver 1's language preference (Hindi) from the driver database (106). The in-cabin alert sub-system (131) then configures the text-to-speech module (133) to use Hindi for all subsequent alerts. The system confirms the language selection by playing a brief welcome message in Hindi through the in-vehicle audio playback device (135).
[96] As driver 1 drives her delivery route, the system provides various alerts in Hindi. When she exceeds the speed limit on a residential street, the system, integrating with the vehicle's GPS, announces a warning in Hindi.
[97] Throughout her shift, the driver change detection module (145) continuously monitors for any changes in the driver, while the engine status module (143) tracks the engine's operational state. When driver 1 makes brief stops for deliveries without turning off the engine, the system maintains her language settings without requiring re-identification.
[98] At 2:00 pm, driver 1 completes her shift and returns to the distribution center. She turns off the engine and exits the vehicle. Shortly afterward, driver 2 arrives for his afternoon shift. When he starts the engine, the engine status module (143) detects this event and triggers the identification process. The capturing device (115) captures driver 2's face, and the face analysis module (113) identifies him as a different driver from the previous session.
[99] The language mapping module (121) retrieves driver 2's language preference (English) from the driver database (106), and the in-cabin alert sub-system (131) reconfigures the text-to-speech module (133) accordingly. The system plays a welcome message in English: "Welcome, driver 2. The alert system is set to English."
[100] During driver 2's shift, all system alerts are delivered in English. When the vehicle approaches a school zone, the system announces: "Attention: school zone ahead. Speed limit is 25 kilometers per hour." When the vehicle requires maintenance based on its mileage, the system notifies: "Maintenance reminder: vehicle requires scheduled service within 500 kilometers."
[101] Later in the shift, driver 2 makes a brief stop at a distribution point and asks his colleague, driver 3, to take over driving while he completes paperwork. When driver 3 sits in the driver's seat, the driver change detection module (145) detects the change in driver appearance and triggers a new identification process. The system identifies driver 3 and retrieves her language preference (Hindi) from the driver database (106). The in-cabin alert sub-system (131) switches to Hindi for all subsequent alerts, playing a confirmation message in Hindi: "Welcome, driver 3. The alert system is set to Hindi."
[102] This scenario demonstrates how the personalized multilingual in-cabin alert system (100) seamlessly adapts to different drivers throughout the day, providing each with alerts in their preferred language. The system enhances safety and efficiency by ensuring that critical information is communicated in a language that each driver can readily understand, regardless of who operated the vehicle previously. The automatic identification and language switching occur without requiring manual intervention from the drivers, allowing them to focus on their primary task of operating the vehicle safely and efficiently.
[103] The Personalized multilingual in-cabin alert system provides several advantages and benefits that address limitations of conventional alert systems in vehicles. By delivering in-cabin alerts in the driver's preferred language, the system enhances communication effectiveness and improves overall safety.
[104] One advantage of the personalized multilingual in-cabin alert system is its ability to automatically identify drivers and retrieve their language preferences. This automation eliminates the need for manual language selection, reducing driver distraction and ensuring that alerts are consistently delivered in the appropriate language. The system's real-time adaptation to driver changes and engine status allows for seamless transitions between different drivers. This feature is particularly beneficial for fleet vehicles or shared cars, where multiple drivers may operate the vehicle in succession.
[105] By integrating facial recognition technology with language mapping, the personalized multilingual in-cabin alert system offers a more personalized and user-friendly experience compared to conventional alert systems. Drivers receive important information in a language they can readily understand, reducing the risk of miscommunication or misinterpretation of critical alerts. The system's ability to deliver alerts in multiple languages addresses the needs of diverse driver populations. This multilingual support is especially valuable in regions with high linguistic diversity or for international transportation services.
[106] Another benefit of the personalized multilingual in-cabin alert system is its potential to improve driver responsiveness to alerts. When drivers receive information in their preferred language, they may be more likely to comprehend and act upon the alerts promptly, potentially reducing reaction times in critical situations.
[107] The integration with vehicle telematics and GPS systems allows for context-aware alerts. This feature enables the system to provide relevant information based on the vehicle's location, speed, and other parameters, enhancing the overall driving experience and safety.
[108] The web-based driver management interface facilitates easy updates to driver profiles and language preferences. This flexibility allows for efficient management of driver information, ensuring that the system remains up-to-date with any changes in language preferences or the addition of new drivers.
[109] The technical effect of the personalized multilingual in-cabin alert system is the provision of real-time, language-specific alerts tailored to individual drivers. This personalization enhances communication clarity, reduces language barriers, and improves overall safety in vehicle operations.
[110] Features of any of the examples or embodiments outlined above may be combined to create additional examples or embodiments without losing the intended effect. It should be understood that the description of an embodiment or example provided above is by way of example only, and various modifications could be made by one skilled in the art. Furthermore, one skilled in the art will recognize that numerous further modifications and combinations of various aspects are possible. Accordingly, the described aspects are intended to encompass all such alterations, modifications, and variations that fall within the scope of the appended claims.
LIST OF REFERENCE NUMERALS
100 Personalized Multilingual In-Cabin Alert System
101 Driver Management Unit
103 Web Interface
106 Driver Database
111 Driver Identification Unit
113 Face Analysis Module
115 Capturing Device
121 Language Mapping Module
131 In-Cabin Alert Sub-System
133 Text-To-Speech Module
135 In-Vehicle Audio Playback Device
141 Real-Time Integration Sub-System
143 Engine Status Module
145 Driver Change Detection Module
200-202 Steps
204 Decision
206-210 Steps
212 Decision
214-216 Steps
300 Method
302-322 Steps
CLAIMS
We Claim,
1. A method (300) for providing personalized multilingual in-cabin alerts in a vehicle, the method comprising:
(a) capturing (304), by a capturing device (115), an image of a driver of the vehicle after the driver turns on the engine;
(b) analyzing (306), by a face analysis module (113), the captured image to identify the driver;
(c) retrieving (314), by a language mapping module (121), a language preference associated with the identified driver from a driver database (106);
(d) setting (316), by an in-cabin alert sub-system (131), an alert system language according to the retrieved language preference;
(e) delivering, by the in-cabin alert sub-system (131), in-cabin alerts in the set alert system language;
(f) monitoring (318), by an engine status module (143), an engine status of the vehicle.
2. The method as claimed in claim 1, wherein the method further comprises:
monitoring (310), by a driver change detection module (145), for changes in the driver of the vehicle;
detecting (312), by the driver change detection module (145), a change in the driver; and
repeating steps (a) to (f) for the newly detected driver.
3. The method as claimed in claim 1, wherein the method further comprises:
detecting (320), by the engine status module (143), a restart of the engine; and
repeating steps (a) to (e) upon detection of the engine restart.
4. The method as claimed in claim 1, wherein delivering the in-cabin alerts comprises:
generating, by a text-to-speech module (133), audio alerts based on the set alert system language,
playing, by an in-vehicle audio playback device (135), the generated audio alerts.
5. The method as claimed in claim 1, wherein the method further comprises:
registering (202), via a web interface (103), driver information including the language preference in the driver database (106).
6. A personalized multilingual in-cabin alert system (100) for vehicles, the system comprising:
(a) a driver identification unit (111) configured to identify a driver of the vehicle, the driver identification unit (111) comprising:
(i) a capturing device (115) configured to capture an image of the driver, and
(ii) a face analysis module (113) configured to analyze the captured image to identify the driver;
(b) a language mapping module (121) configured to retrieve a language preference associated with the identified driver from a driver database (106);
(c) an in-cabin alert sub-system (131) configured to set an alert system language according to the retrieved language preference, the in-cabin alert sub-system (131) comprising:
(i) a text-to-speech module (133) configured to generate audio alerts based on the set alert system language, and
(ii) an in-vehicle audio playback device (135) configured to play the generated audio alerts; and
(d) a real-time integration sub-system (141) configured to monitor for changes in the driver and the engine status of the vehicle, the real-time integration sub-system (141) comprising:
(i) a driver change detection module (145) configured to detect changes in the driver of the vehicle; and
(ii) an engine status module (143) configured to monitor the engine status of the vehicle.
7. The system as claimed in claim 6, further comprising:
a driver management unit (101) comprising a web interface (103) configured to register driver information including the language preference in the driver database (106).