Abstract: A REAL-TIME AUTOMOTIVE ASSISTANCE SYSTEM AND METHOD FOR LANE ROI DETECTION IN VEHICLE
The present disclosure provides a real-time automotive assistance system for lane region of interest detection in a vehicle. The system includes a capturing module (102) having a road-facing camera (104) and a driver-facing camera (106) for capturing video data of a driving environment and storing existing and recent video data in a storage medium (108). A frame generation module (112) processes the video data into individual frames, and a lane identification segmentation module (114) identifies lane markers within the frames. An evaluation module (116) evaluates the frames based on predetermined criteria, including lane visibility, lane orientation, vehicle positioning, lane straightness, lighting conditions, and traffic consistency, to determine valid frames for region of interest calculation. The evaluation module (116) computes a consistent lane region of interest from the valid frames and generates a lane region of interest output (118). An alert generation component (120) receives the lane region of interest output (118) and generates real-time alerts for the vehicle based on the computed lane region of interest.
Description:
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patents Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION: A REAL-TIME AUTOMOTIVE ASSISTANCE SYSTEM AND METHOD FOR LANE ROI DETECTION IN VEHICLE
2. APPLICANT:
(a) NAME : Nervanik AI Labs Pvt. Ltd.
(b) NATIONALITY : Indian
(c) ADDRESS : A – 1111, World Trade Tower,
Off. S G Road, B/H Skoda Showroom,
Makarba, Ahmedabad 380 051
Gujarat INDIA
3. PREAMBLE TO THE DESCRIPTION
☐ PROVISIONAL: The following specification describes the invention.
☑ COMPLETE: The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF INVENTION
[1] The present invention relates to automotive assistance safety systems, and more particularly to real-time lane detection and region of interest (ROI) alert systems for vehicles. The invention may encompass systems and methods for automatically calculating and validating lane boundaries to provide accurate drivable area identification and alert generation for enhanced vehicle safety across different vehicle types and configurations.
BACKGROUND OF INVENTION
[2] Advanced Driver Assistance Systems (ADAS) have become integral components of modern vehicular technology, designed to enhance driver safety and improve the overall driving experience. These systems rely on various sensors, cameras, and computational modules to monitor the vehicle's surroundings and provide assistance to drivers in real-time. A fundamental aspect of ADAS functionality involves the accurate detection and monitoring of driving lanes to enable features such as lane departure warnings, collision avoidance, and automated steering assistance. Lane detection systems typically employ computer vision techniques and automotive assistance modules to process video data captured by vehicle-mounted cameras, identifying lane boundaries and calculating the drivable area within which the vehicle operates.
[3] Traditional lane detection systems face challenges in accurately identifying the Region of Interest (ROI) that defines the drivable area for a vehicle. These systems often generate false alerts due to imprecise lane boundary detection, particularly in varying environmental conditions such as poor lighting, weather interference, or complex road geometries. The accuracy of lane detection can be compromised by factors including blurred video frames, curved road sections, multiple vehicles obscuring lane markers, and inconsistent lane visibility. Additionally, existing systems may not adequately account for different vehicle types and their varying dimensional requirements, leading to inappropriate ROI calculations that do not reflect the actual operational needs of passenger cars, commercial trucks, or buses.
[4] Another problem with conventional ADAS systems relates to the validation and configuration processes. Many systems require manual validation procedures that may not reflect current environmental conditions or device capabilities. The lack of automatic validation mechanisms can result in outdated ROI calculations that do not correspond to the vehicle's actual operating environment. Furthermore, the synchronization of configuration parameters across different devices and the integration of updated validation data into existing systems presents technical challenges that can affect system reliability and performance.
[5] Current lane detection and ROI alert systems in the automotive industry suffer from several technical limitations that compromise their effectiveness and reliability. Existing systems frequently produce inaccurate lane boundary calculations due to their inability to properly process degraded video input, such as frames affected by poor weather conditions, insufficient lighting, or motion blur. These systems may also fail to maintain consistent ROI calculations when encountering complex road geometries, including curved sections, merging lanes, or construction zones where lane markings may be temporarily obscured or modified. Furthermore, conventional systems typically employ static ROI parameters that do not adapt to different vehicle types, resulting in inappropriate alert thresholds for commercial trucks, buses, or passenger vehicles with varying dimensional characteristics. The lack of real-time validation mechanisms in existing systems means that erroneous lane detection data can propagate through the system, leading to false alerts that may reduce driver confidence in the technology or cause unnecessary interventions that could potentially create safety hazards.
[6] The Korean patent KR101738425B1 discloses a lane detection system that relies on static image processing techniques and predetermined threshold values for lane boundary identification, which may limit its adaptability to varying environmental conditions and vehicle specifications. Unlike the present invention, KR101738425B1 does not provide dynamic ROI calculation capabilities that can adjust in real-time based on vehicle type, environmental factors, or frame quality validation. The Korean patent's approach may be constrained by its dependency on fixed computational parameters that cannot accommodate the dimensional differences between passenger cars, commercial trucks, and buses, potentially resulting in suboptimal ROI determinations for different vehicle categories. Additionally, KR101738425B1 may lack the comprehensive validation mechanisms described in the present invention, which include real-time frame quality assessment, data deviation analysis, and automatic parameter adjustment based on current operating conditions. The present invention's multi-module architecture with dedicated components for capturing, validation, frame generation, lane identification segmentation, and evaluation provides enhanced flexibility and accuracy compared to the more rigid processing framework that may be employed in KR101738425B1.
[7] The United States patent application US20200117921A1 discloses a lane detection system that may be limited by its reliance on conventional image processing techniques that do not incorporate the advanced validation and vehicle-specific adaptation capabilities of the present invention. Unlike the present invention, US20200117921A1 may not provide comprehensive real-time frame quality assessment mechanisms that can dynamically evaluate video input degradation and automatically adjust processing parameters based on environmental conditions such as lighting variations, weather interference, or motion blur. The system described in US20200117921A1 may lack the multi-module architecture with dedicated components for data deviation analysis, frame generation validation, and vehicle-type-specific ROI calculations that characterize the present invention. Additionally, US20200117921A1 may not offer the sophisticated evaluation capabilities that enable automatic differentiation between passenger cars, commercial trucks, and buses, potentially resulting in generic ROI calculations that do not account for the dimensional and operational requirements of different vehicle categories. The present invention's integrated approach combining capturing modules, validation systems, lane identification segmentation, and evaluation components with real-time storage medium updates may provide enhanced accuracy and reliability compared to the more conventional processing framework that may be employed in US20200117921A1.
[8] There is a pressing need for an advanced lane ROI alert system that can dynamically adapt to varying environmental conditions and vehicle specifications while maintaining high accuracy in lane boundary detection and ROI calculation. Such a system may provide real-time validation of video frame quality, implement vehicle-specific ROI criteria, and offer robust performance across different lighting conditions, weather scenarios, and road configurations. The automotive industry requires a solution that can minimize false alerts while ensuring reliable detection of actual lane departure situations, thereby enhancing driver safety and maintaining user trust in ADAS technology. Additionally, there is a need for automated validation capabilities that can adjust system parameters based on current operating conditions without requiring manual intervention, ensuring optimal performance throughout the vehicle's operational lifecycle.
[9] It has been appreciated that a system is needed that overcomes one or more of these problems.
SUMMARY OF INVENTION
[10] The present invention provides a real-time automotive assistance system and method for lane Region of Interest (ROI) detection. The system incorporates a multi-module architecture featuring capturing modules with road-facing and driver-facing cameras, data deviation and validation modules, frame generation components, lane identification segmentation modules, and evaluation systems that work together to provide accurate lane boundary detection and ROI calculations. The invention may dynamically adapt to varying environmental conditions, including lighting variations, weather interference, and motion blur, and to vehicle specifications, implementing vehicle-specific ROI criteria for passenger cars, commercial trucks, and buses while providing real-time validation of video frame quality to minimize false alerts and enhance driver safety. Automated validation capabilities adjust system parameters based on current operating conditions without manual intervention. The system integrates road-facing and driver-facing cameras for comprehensive data capture, with storage medium updates containing recent ROI data for continuous system optimization. An alert generation component provides driver notification based on lane ROI analysis, enabling enhanced accuracy in lane boundary detection across different road geometries and conditions.
OBJECT OF THE INVENTION
[11] The primary object of the present invention is to provide a real-time automotive assistance system that accurately calculates lane Region of Interest (ROI) parameters through automated frame validation and vehicle-specific criteria to minimize false alerts and enhance driver safety.
[12] Another object of the present invention is to provide a multi-module architecture that systematically processes video data through capturing, validation, frame generation, lane identification segmentation, and evaluation components to ensure reliable lane boundary detection across varying environmental conditions including poor lighting, weather interference, and complex road geometries.
[13] A further object of the present invention is to provide vehicle-specific ROI calculation capabilities that automatically adapt alert boundaries based on identified vehicle type, applying precision-focused parameters for passenger cars, stability-oriented configurations for commercial trucks, and passenger safety-prioritized settings for buses to eliminate inappropriate alert generation.
[14] Yet another object of the present invention is to provide automated validation and configuration synchronization processes that activate upon device reboot and initial vehicle operation to maintain current environmental adaptation and ensure optimal system performance throughout the vehicle's operational lifecycle.
[15] An additional object of the present invention is to provide comprehensive frame validation mechanisms that systematically exclude unsuitable imagery based on lane visibility, orientation, vehicle positioning, straightness, lighting conditions, and traffic density to ensure that only appropriate frames contribute to accurate ROI calculations.
[16] Still another object of the present invention is to provide a dual-camera system with road-facing and driver-facing cameras integrated with dual storage architecture including onboard memory and cloud storage to ensure data redundancy and immediate access to recent video data for continuous lane ROI calculation and alert generation.
[17] A still further object of the present invention is to provide future compatibility with Level 2 and Level 3 autonomous driving implementations and Autonomous Emergency Braking systems that can utilize the validated lane region data for enhanced vehicle control, collision avoidance functionality, and advanced path planning in complex traffic scenarios.
BRIEF DESCRIPTION OF FIGURES
Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
[18] FIG. 1 illustrates a block diagram of a Lane ROI Alert System, according to aspects of the present disclosure.
[19] FIG. 2 illustrates a flowchart of a method for processing lane Region of Interest calculations in the Lane ROI Alert System of FIG. 1.
[20] FIG. 3 illustrates a flowchart of a method for determining vehicle-specific Region of Interest criteria, according to aspects of the present disclosure.
[21] Common reference numerals are used throughout the figures to indicate similar features.
DETAILED DESCRIPTION OF INVENTION
[22] Before explaining the present invention in detail, it is to be understood that the invention is not limited in its application. The nature of invention and the manner in which it is performed is clearly described in the specification. The invention has various components and they are clearly described in the following pages of the complete specification. It is to be understood that the phraseology and terminology employed herein is for the purpose of description and not of limitation.
[23] A Lane ROI Alert System refers to an automated vehicular safety system that processes video data to identify drivable lane areas and generate real-time alerts for automotive assistance systems. The Lane ROI Alert System operates by analyzing captured video footage to determine precise boundaries of vehicle travel lanes and provides alert notifications when objects or conditions are detected within these defined regions.
[24] A Capturing Module refers to a video acquisition component that records visual data from multiple camera perspectives within a vehicle environment. The Capturing Module includes camera hardware configured to capture both forward-facing road imagery and driver-monitoring footage for comprehensive situational awareness.
[25] A Data Deviation and Validation Module refers to a processing component that analyzes incoming video data to identify and filter unsuitable frames for lane region analysis. The Data Deviation and Validation Module applies predetermined criteria to exclude frames containing blurred imagery, night conditions, curved lanes, horizontal lane orientations, multi-lane vehicle positioning, or excessive traffic density that may compromise accurate lane boundary detection.
[26] A Frame Generation Module refers to a video processing component that converts continuous video streams into individual frame sequences for analysis. The Frame Generation Module extracts discrete image frames from stored video footage and prepares the frames for subsequent lane detection processing.
[27] A Lane Identification Segmentation Module refers to an image analysis component that applies computer vision techniques to detect and segment lane markers within processed video frames. The Lane Identification Segmentation Module utilizes machine learning models to identify lane boundaries, road markings, and drivable surface areas within each analyzed frame.
[28] An Evaluation Module refers to a computational component that aggregates lane detection results from validated frames to calculate a consistent Region of Interest (ROI) for vehicle operation. The Evaluation Module processes segmented lane data to determine optimal drivable area coordinates and generates standardized ROI parameters for integration with vehicle alert systems.
[29] An Alert Generation Component refers to a notification system that produces driver warnings based on processed lane ROI data and detected objects or conditions within the defined drivable areas. The Alert Generation Component interfaces with vehicle display systems or audio systems to deliver real-time safety notifications to vehicle operators.
[30] FIG. 1 illustrates a Lane ROI Alert System (100) that provides automated lane region identification and alert generation for vehicular safety applications. The Lane ROI Alert System (100) comprises multiple interconnected components that process video data to determine drivable lane areas and generate real-time notifications.
[31] A Capturing Module (102) forms the data acquisition component of the Lane ROI Alert System (100). The Capturing Module (102) includes a Road-facing Camera (104) and a Driver-facing Camera (106) that capture visual information from different perspectives within the vehicle environment. The Road-facing Camera (104) records forward-facing imagery of the driving environment, while the Driver-facing Camera (106) monitors driver behavior within the vehicle cabin. The dual-camera configuration enables comprehensive monitoring by combining external environmental data with driver behavior analysis.
[32] The Capturing Module (102) connects to a Storage Medium (108) that receives and stores captured video data from both cameras. The Storage Medium (108) includes both onboard memory and cloud storage for redundancy, ensuring immediate access to recent video data while maintaining backup copies in remote storage locations. The dual storage approach provides data availability during device maintenance, rebooting, or initial vehicle operation.
[33] A Data Deviation and Validation Module (110) receives recent data from the Storage Medium (108) and analyzes incoming video information to identify frames suitable for lane region analysis. The Data Deviation and Validation Module (110) applies filtering criteria to exclude frames that may compromise accurate lane boundary detection.
[34] A Frame Generation Module (112) processes video data from the Storage Medium (108) and converts continuous video streams into individual frame sequences for analysis. The Frame Generation Module (112) provides recent frame information to both the Data Deviation and Validation Module (110) and subsequent processing components.
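By way of non-limiting illustration, a frame generation step of this kind may be realized as in the following Python sketch using OpenCV; the function name `extract_frames` and the sampling parameter are assumptions made for illustration and do not form part of the disclosure.

```python
# Illustrative sketch of frame generation with OpenCV (cv2).
# extract_frames and sample_every are hypothetical names/parameters.
import cv2

def extract_frames(video_path: str, sample_every: int = 2):
    """Yield (index, frame) pairs from a stored video file.

    sample_every=2 keeps every second frame, so a 30 fps stream
    yields roughly 15 frames per second for downstream analysis.
    """
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream or unreadable frame
            break
        if index % sample_every == 0:
            yield index, frame
        index += 1
    capture.release()
```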
[35] A Lane Identification Segmentation Module (114) receives processed frame data from the Frame Generation Module (112) and applies computer vision techniques to detect lane markers within the analyzed frames. The Lane Identification Segmentation Module (114) utilizes machine learning models to identify lane boundaries and drivable surface areas.
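The disclosure contemplates machine learning models for this segmentation step; purely as a hedged stand-in, the following sketch obtains lane-marker candidates with classical computer vision (Canny edges followed by a probabilistic Hough transform), which is not the disclosed method but illustrates the shape of the module's output.

```python
# Classical-CV stand-in for lane-marker detection. The disclosed
# system uses machine learning segmentation; this sketch only
# illustrates producing line-segment candidates from a frame.
import cv2
import numpy as np

def detect_lane_segments(frame: np.ndarray):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Each detected segment is an (x1, y1, x2, y2) lane-marker candidate.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(line[0]) for line in lines]
```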
[36] An Evaluation Module (116) receives lane identification data from the Lane Identification Segmentation Module (114) and aggregates lane detection results from validated frames to calculate consistent region of interest parameters. The Evaluation Module (116) contains a Lane ROI Output (118) that generates standardized coordinates defining the drivable area for vehicle operation.
[37] An Alert Generation Component (120) receives processed data from the Evaluation Module (116) and produces driver notifications based on the calculated lane ROI data and detected conditions within the defined drivable areas. The Alert Generation Component (120) interfaces with vehicle notification systems to deliver real-time safety alerts to vehicle operators.
[38] The Lane ROI Alert System (100) includes configuration synchronization to update device settings and parameters automatically based on the latest validation data. The configuration synchronization process ensures that each device operates with current parameters, environmental settings, and vehicle-specific adjustments retrieved from the Storage Medium (108).
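A minimal sketch of such configuration synchronization, assuming parameters are persisted as JSON by the storage medium; the file layout, key names, and function name are hypothetical.

```python
# Hedged sketch of configuration synchronization on device reboot.
# synchronize_configuration and the key names are hypothetical.
import json

DEFAULTS = {"vehicle_type": "passenger_car", "roi": None}

def synchronize_configuration(storage_path: str) -> dict:
    """Load the latest validated parameters (environmental settings,
    vehicle-specific adjustments) persisted by the storage medium."""
    try:
        with open(storage_path, "r", encoding="utf-8") as f:
            stored = json.load(f)
    except FileNotFoundError:
        stored = {}  # first run: nothing persisted yet
    config = dict(DEFAULTS)
    config.update(stored)  # stored values override the defaults
    return config
```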
[39] FIG. 2 illustrates a flowchart method for processing lane Region of Interest calculations within the Lane ROI Alert System (100). The method demonstrates a systematic approach for analyzing video data to generate accurate lane ROI parameters and driver alerts.
[40] The method begins at a step (200) where the process initiates lane ROI calculation. The method proceeds to a step (202) where video is captured by the Capturing Module (102) and lane ROI calculation commences. The captured video data includes forward-facing road imagery from the Road-facing Camera (104) and driver monitoring footage from the Driver-facing Camera (106).
[41] The method continues to a step (204) where the latest video footage is downloaded from the Storage Medium (108). The Storage Medium (108) provides access to both recently captured video data and archived footage stored in onboard memory and cloud storage locations.
[42] At a step (206), the Frame Generation Module (112) processes the downloaded video into individual frames for analysis. The Frame Generation Module (112) converts continuous video streams into discrete image sequences that enable frame-by-frame evaluation of lane conditions.
[43] The method then proceeds to a decision point (208) where the Data Deviation and Validation Module (110) evaluates whether each frame is valid for ROI calculation. The decision point (208) applies filtering criteria to determine frame suitability based on image quality and lane visibility conditions.
[44] When the frame is determined to be valid at the decision point (208), the method advances to a step (210) where the Lane Identification Segmentation Module (114) identifies frame characteristics. The step (210) includes analysis of lane markers, vehicle positioning, night view conditions, blurred vision detection, and traffic consistency evaluation. The Lane Identification Segmentation Module (114) applies computer vision techniques and machine learning models to detect lane boundaries and road markings within the validated frames.
[45] When the frame is determined to be invalid at the decision point (208), the method follows an alternative path to a step (212) where the frame is discarded. After the step (212), the method returns to the step (206) to process the next available frame, creating a feedback loop mechanism that ensures only suitable frames proceed through the analysis pipeline.
[46] Following the step (210), the method advances to a step (214) where the Evaluation Module (116) evaluates ROIs from the validated frames. The Evaluation Module (116) includes specific checks for lane visibility to ensure lane markers are visible and not blurred, lane orientation to filter out frames with horizontal lanes, vehicle positioning to confirm vehicles are within a single lane, lane straightness to avoid curved or turning lanes, night and blurred frame detection to exclude unsuitable lighting conditions, traffic consistency evaluation to avoid frames with multiple moving vehicles, and lane marker quality assessment to select frames with clear and well-defined lane markers.
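The following Python sketch illustrates how a subset of these checks might be implemented; the numeric thresholds are illustrative assumptions, since the disclosure does not fix specific values.

```python
# Hedged sketch of a subset of the step (214) validity checks.
# All thresholds below are assumptions for illustration only.
import math
import cv2
import numpy as np

BLUR_MIN = 100.0       # variance of Laplacian below this => blurred
NIGHT_MIN = 60.0       # mean gray level below this => night frame
HORIZONTAL_DEG = 20.0  # segments flatter than this => horizontal lane

def frame_is_valid(frame: np.ndarray, segments) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_MIN:
        return False  # lane visibility: frame is blurred
    if gray.mean() < NIGHT_MIN:
        return False  # lighting conditions: night frame
    if not segments:
        return False  # no lane markers detected at all
    for x1, y1, x2, y2 in segments:
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        if angle < HORIZONTAL_DEG or angle > 180 - HORIZONTAL_DEG:
            return False  # lane orientation: near-horizontal lane
    return True
```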
[47] The method proceeds to a step (216) where the Evaluation Module (116) computes a consistent lane ROI from the aggregated data. The step (216) processes the segmented lane information from multiple validated frames to determine optimal drivable area parameters. The Lane ROI Output (118) generates the computed lane ROI, which is represented by four coordinates that define the drivable area boundaries for vehicle operation.
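As a hedged illustration of step (216), per-frame ROI estimates may be aggregated into a single consistent four-coordinate ROI; taking the per-coordinate median is an assumption of this sketch, not a requirement of the disclosure.

```python
# Sketch of aggregating per-frame ROIs into one consistent ROI.
# Each ROI is four (x, y) image coordinates ordered bottom-left,
# bottom-right, top-right, top-left; the median is illustrative.
import numpy as np

def compute_consistent_roi(frame_rois):
    stacked = np.asarray(frame_rois, dtype=float)  # (n_frames, 4, 2)
    return np.median(stacked, axis=0).tolist()

# Example: two validated frames yield one drivable-area boundary.
roi = compute_consistent_roi([
    [(100, 720), (1180, 720), (760, 430), (520, 430)],
    [(104, 720), (1176, 720), (764, 432), (516, 432)],
])
```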
[48] At a step (218), the Storage Medium (108) is updated with the recent ROI data. The step (218) ensures that the calculated lane ROI parameters are stored for integration with the vehicle alert systems and future reference during system operation.
[49] The method continues to a step (220) where the computed ROI is integrated into the Lane ROI Alert System (100). The step (220) incorporates the lane ROI data into the overall system architecture for real-time alert generation and driver assistance functionality.
[50] The method concludes at a step (222) where the Alert Generation Component (120) generates an alert and provides notification to the driver. The step (222) utilizes the processed lane ROI data to produce driver warnings when objects or conditions are detected within the defined drivable areas, interfacing with vehicle display systems or audio systems to deliver real-time safety notifications.
[51] FIG. 3 illustrates a method (300) for determining and applying vehicle-specific Region of Interest criteria within the Lane ROI Alert System (100). The method (300) provides a systematic approach for identifying vehicle types and applying corresponding ROI parameters to optimize alert generation accuracy for different vehicle categories.
[52] The method (300) begins at a step (302) where the vehicle type is identified during initial system setup or device update procedures. The step (302) includes analysis of vehicle dimensions, weight specifications, and typical driving patterns to categorize the vehicle for appropriate ROI calculation parameters.
[53] The method (300) proceeds to a Decision 1 at a step (304) where the system determines whether the identified vehicle is a passenger car. The step (304) evaluates vehicle characteristics against passenger car specifications including size parameters, weight ranges, and operational requirements.
[54] When the vehicle is determined to be a passenger car at the step (304), the method (300) follows a first branch to a Step 1 at a step (306) where passenger car ROI criteria are applied. The step (306) configures ROI parameters with emphasis on precision and responsiveness, selecting ROI boundaries that accommodate tight maneuverability requirements and enable rapid alert generation for lane changes and obstacle detection scenarios.
[55] When the vehicle is determined not to be a passenger car at the step (304), the method (300) continues to a Decision 2 at a step (308) where the system evaluates whether the vehicle is a commercial truck. The step (308) analyzes vehicle specifications against commercial truck parameters including payload capacity, dimensional requirements, and operational characteristics.
[56] When the vehicle is identified as a commercial truck at the step (308), the method (300) proceeds to a Step 2 at a step (310) where commercial truck ROI criteria are applied. The step (310) configures ROI parameters with focus on stability and larger turning radii, selecting ROI boundaries that accommodate the vehicle size and weight for safe lane changes and turning maneuvers.
[57] When the vehicle is determined not to be a commercial truck at the step (308), the method (300) advances to a Step 3 at a step (312) where bus ROI criteria are applied. The step (312) configures ROI parameters prioritizing passenger safety and smooth driving experience, selecting ROI boundaries that provide adequate space for safe maneuvering in dense traffic conditions.
[58] Following the application of the vehicle-specific criteria at the step (306), the step (310), or the step (312), the method (300) continues to a Step 4 at a step (314) where the optimal ROI size is calculated. The step (314) processes the applied vehicle-specific criteria to determine appropriate ROI dimensions that correspond to the identified vehicle type and operational requirements.
[59] The method (300) concludes at a Step 5 at a step (316) where the vehicle-specific ROI is updated in the Storage Medium (108). The step (316) stores the calculated ROI parameters for integration with the Lane ROI Alert System (100) and enables the system to generate alerts based on the vehicle-appropriate drivable area boundaries.
[60] The Evaluation Module (116) applies the vehicle-specific criteria determined through the method (300) when processing lane detection results. The Evaluation Module (116) utilizes the stored vehicle-specific ROI parameters to ensure that alert generation corresponds to the appropriate drivable area dimensions for passenger cars, commercial trucks, or buses, thereby reducing false alerts and enhancing driver confidence through vehicle-appropriate notification systems.
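A minimal sketch of the decision flow of the method (300) follows; the forward monitoring distances echo the worked example later in this specification (100 feet for passenger vehicles, 150 feet for commercial trucks), while the bus value and the lateral margins are illustrative assumptions.

```python
# Hedged sketch of the vehicle-type decision flow (steps 304-316).
# RoiCriteria and all numeric margins are illustrative assumptions;
# only the 100 ft / 150 ft distances come from the worked example.
from dataclasses import dataclass

@dataclass
class RoiCriteria:
    lateral_margin_ft: float    # extra clearance beyond lane edges
    forward_distance_ft: float  # forward monitoring distance

def vehicle_specific_criteria(vehicle_type: str) -> RoiCriteria:
    if vehicle_type == "passenger_car":     # Decision 1 -> Step 1
        return RoiCriteria(lateral_margin_ft=0.5, forward_distance_ft=100)
    if vehicle_type == "commercial_truck":  # Decision 2 -> Step 2
        return RoiCriteria(lateral_margin_ft=1.5, forward_distance_ft=150)
    # Step 3: the remaining category is treated as a bus in this sketch.
    return RoiCriteria(lateral_margin_ft=1.0, forward_distance_ft=130)
```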
[61] In another embodiment of the present invention, Level 2 autonomous driving implementations may integrate the lane ROI data with vehicle control systems to provide steering assistance and lane keeping functionality. The system may calculate steering corrections based on vehicle position relative to the computed ROI boundaries and provide input to electronic power steering systems. Adaptive cruise control integration may utilize ROI data to identify safe following distances and adjust vehicle speed based on detected objects within the drivable area. Lane change assistance may analyze adjacent lane ROI calculations to determine safe opportunities for automated lane transitions.
[62] In another embodiment of the present invention, Level 3 autonomous driving configurations may extend the system capabilities to include decision-making processes for complex driving scenarios. Traffic light recognition may combine ROI data with intersection analysis to determine appropriate stopping positions and proceed decisions. Merge assistance functionality may evaluate ROI calculations from multiple lanes to identify safe gaps for highway merging maneuvers. Construction zone navigation may adapt ROI parameters to accommodate temporary lane configurations and modified traffic patterns.
[63] Emergency braking systems may utilize ROI data to distinguish between collision threats within the drivable area and objects outside the vehicle path. A collision prediction module may analyze object trajectories relative to the computed ROI to determine intervention necessity and timing. Graduated braking responses may apply different braking intensities based on threat proximity to ROI boundaries and collision probability calculations. False positive reduction may prevent unnecessary emergency braking events by filtering alerts for objects detected outside the validated drivable area.
[64] System architecture modifications may accommodate different vehicle platforms and operational environments. Motorcycle implementations may utilize compact camera arrangements and modified ROI calculations that account for narrower vehicle profiles and different lane positioning requirements. Heavy equipment configurations may incorporate additional sensors and extended ROI boundaries to accommodate larger vehicle dimensions and specialized operational patterns. Marine vessel adaptations may modify the system for waterway navigation using similar lane detection principles applied to shipping channels and navigational corridors.
[65] Environmental adaptation capabilities may adjust system parameters for different geographical regions and road infrastructure variations. Rural road configurations may modify lane detection modules to accommodate unpaved surfaces and less defined lane markings. Urban environment settings may enhance traffic density analysis and pedestrian detection within ROI calculations. Highway-specific configurations may optimize the system for high-speed operations and extended forward visibility requirements.
EXAMPLES OF THE INVENTION
[66] A real-time implementation of the Lane ROI Alert System demonstrates its practical functionality in actual driving conditions. During a highway commute in a commercial truck, the system actively processes environmental data through multiple stages while adapting to specific vehicle characteristics and road conditions.
[67] As the truck travels on Interstate 95 during moderate rainfall, the dual-camera system continuously captures video at 30 frames per second. The Road-facing Camera (104) records the wet highway surface with partially obscured lane markings, while the Driver-facing Camera (106) simultaneously monitors the driver's attention level. Both data streams are instantly stored in the onboard Storage Medium (108).
[68] The Frame Generation Module (112) extracts individual frames from the video feed in real-time, processing approximately 15-20 frames per second for analysis. The Data Deviation and Validation Module (110) immediately evaluates each frame, rejecting those affected by windshield wiper movement, water spray from passing vehicles, and momentary glare from oncoming headlights.
[69] When the truck approaches a construction zone with temporary lane shifts, the system detects the altered road configuration. Several frames are automatically discarded due to the presence of orange construction barrels partially obscuring lane markings. The validation process identifies frames with sufficient lane visibility despite the challenging conditions, allowing continued ROI calculation.
[70] As the driver initiates a lane change to pass a slower vehicle, the system temporarily suspends ROI alerts during the maneuver. The Lane Identification Segmentation Module (114) detects the truck's position spanning two lanes and flags these frames as invalid for ROI calculation. Once the lane change completes and the truck stabilizes in the new lane, valid frame processing resumes within 0.8 seconds.
[71] The Lane Identification Segmentation Module (114) processes the validated frames using its machine learning models, accurately identifying the faded lane markings despite the wet road surface. The system distinguishes between actual lane boundaries and water tracks left by preceding vehicles, preventing false boundary detection that could trigger inappropriate alerts.
[72] The Evaluation Module (116) combines data from multiple validated frames, calculating a consistent ROI that accounts for the 8.5-foot width and 53-foot length of the commercial truck. The system applies commercial truck-specific parameters established during initial configuration, creating wider ROI boundaries than would be used for a passenger vehicle to accommodate the truck's larger turning radius and dimensional requirements.
[73] When the truck encounters a curved section of highway, the system maintains accurate ROI calculations by analyzing the visible portion of the curve and projecting expected lane boundaries. The Lane ROI Output (118) generates coordinates that follow the curve's trajectory, allowing the system to maintain alert functionality despite the changing road geometry.
[74] As traffic density increases approaching an urban area, the system automatically adjusts its sensitivity thresholds. The commercial truck configuration applies an extended forward monitoring distance of 150 feet, compared to the 100-foot distance used for passenger vehicles, accounting for the truck's increased stopping distance at highway speeds.
[75] When a vehicle suddenly cuts in front of the truck with minimal clearance, the Alert Generation Component (120) immediately activates. The system detects the vehicle intrusion within the calculated ROI boundaries and triggers a three-stage alert: first an audible warning tone, followed by a visual alert on the dashboard display showing the vehicle position relative to lane boundaries, and finally haptic feedback through the steering wheel.
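As a hedged sketch of the intrusion check that precedes such an alert, a standard ray-casting point-in-polygon test can decide whether a detected vehicle position lies within the computed four-coordinate ROI; the helper name `point_in_roi` is hypothetical.

```python
# Ray-casting point-in-polygon test against the four-corner ROI.
# point_in_roi is a hypothetical helper name for illustration.
def point_in_roi(point, roi):
    x, y = point
    inside = False
    n = len(roi)
    for i in range(n):
        x1, y1 = roi[i]
        x2, y2 = roi[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

roi = [(100, 720), (1180, 720), (760, 430), (520, 430)]
assert point_in_roi((640, 600), roi)      # intrusion: trigger alert
assert not point_in_roi((50, 600), roi)   # outside ROI: no alert
```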
[76] As daylight conditions deteriorate during sunset, the system automatically compensates for changing light levels. The validation process adjusts its brightness thresholds to maintain effective lane detection despite reduced contrast between lane markings and road surfaces. When street lighting activates, the system seamlessly transitions to its low-light processing mode without interruption to ROI calculation.
[77] Throughout the journey, the Storage Medium (108) continuously updates with recent ROI calculations, maintaining a rolling 30-second history of validated lane boundaries. This data enables the system to reference recent valid ROI parameters during brief periods when current frame validation fails, such as when passing under bridges with momentary shadows or encountering road sections with worn lane markings.
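A minimal sketch of such a rolling history, assuming a 30-second window held in a deque; the class name `RoiHistory` is hypothetical.

```python
# Sketch of a 30-second rolling ROI history used as a fallback when
# current-frame validation briefly fails. RoiHistory is hypothetical.
import time
from collections import deque
from typing import Optional

class RoiHistory:
    def __init__(self, window_s: float = 30.0):
        self.window_s = window_s
        self._entries = deque()  # (timestamp, roi) pairs, oldest first

    def add(self, roi, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        self._entries.append((now, roi))
        # Drop entries that have aged out of the rolling window.
        while self._entries and now - self._entries[0][0] > self.window_s:
            self._entries.popleft()

    def latest_valid(self):
        """Most recent validated ROI, or None if the window is empty."""
        return self._entries[-1][1] if self._entries else None
```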
[78] When the driver exits the highway onto a narrower urban street, the system detects the changed road width and automatically recalculates appropriate ROI boundaries. The truck-specific parameters adjust to the new environment, maintaining appropriate clearance monitoring while accounting for tighter turning requirements in the urban setting.
[79] This real-time example demonstrates how the Lane ROI Alert System continuously adapts to changing environmental conditions, vehicle-specific requirements, and varying road geometries while maintaining reliable alert generation throughout an actual journey. The system's ability to process complex driving scenarios in real-time with vehicle-appropriate parameters ensures accurate lane boundary detection and timely driver notifications across diverse operational conditions.
[80] Existing lane detection systems in Advanced Driver Assistance Systems face substantial technical limitations that compromise their effectiveness in real-world driving scenarios. Traditional lane detection approaches generate excessive false alerts due to inadequate frame validation processes that fail to filter unsuitable imagery containing blurred lane markers, curved road sections, night conditions, or multi-vehicle traffic scenarios. These systems lack vehicle-specific adaptation capabilities, applying uniform region of interest parameters across different vehicle types without accounting for dimensional differences between passenger cars, commercial trucks, and buses, resulting in inappropriate alert boundaries that either miss relevant hazards or trigger unnecessary warnings. Environmental variability further degrades system performance as conventional approaches cannot maintain consistent lane boundary detection across changing weather conditions, varying road surface types, or different lighting scenarios, leading to unreliable alert generation when drivers depend on system accuracy.
[81] The multi-module architecture addresses these technical challenges through integrated real-time validation, vehicle-specific region of interest criteria, and automated calibration capabilities that provide comprehensive lane detection accuracy. The system incorporates frame-by-frame validation processes that systematically exclude unsuitable imagery based on lane visibility, orientation, vehicle positioning, straightness, lighting conditions, and traffic density, ensuring that only appropriate frames contribute to region of interest calculations. Vehicle-specific region of interest criteria automatically adjust alert boundaries based on identified vehicle type, applying precision-focused parameters for passenger cars, stability-oriented configurations for commercial trucks, and passenger safety-prioritized settings for buses to eliminate inappropriate alert generation. An automatic calibration process activates upon every device reboot and during initial vehicle operation to maintain current environmental adaptation and device configuration synchronization, while the system architecture supports future capabilities including Level 2 and Level 3 autonomous driving implementations and Autonomous Emergency Braking systems that utilize the validated lane region data for enhanced vehicle control and collision avoidance functionality.
[82] The systems and methods described herein may be implemented in any form of computing or electronic device. The term "computer," as used herein, encompasses any device with processing capabilities sufficient to execute instructions. This includes, but is not limited to, personal computers, servers, mobile devices, personal digital assistants, and similar devices.
[83] Alternatively, or in addition, some or all of the described functionality may be implemented using hardware logic components. Examples include, but are not limited to, application-specific integrated circuits, system-on-a-chip systems, field-programmable gate arrays, application-specific standard products, and complex programmable logic devices. In some cases, software instructions may also be implemented in dedicated circuits, such as programmable logic arrays or digital signal processors.
[84] The computing device may operate as a standalone system or as part of a distributed system, where tasks are performed collectively by multiple devices connected via a network. Such devices may communicate over a network connection to perform the described functionality. For instance, software may be stored on a remote computer and accessed by a local device, which may download and execute portions of the software as needed. Similarly, some instructions may be processed locally, while others may execute on remote systems or networks. In some cases, the computing device may be remote and accessible via a communication interface. Storage of program instructions may also be distributed across a network or stored in a combination of local and remote locations. For example, software may reside on a remote computer and be accessed by a local terminal, or the system may execute some software locally while other components operate on remote servers.
[85] Features of any of the examples or embodiments outlined above may be combined to create additional examples or embodiments without losing the intended effect. It should be understood that the description of an embodiment or example provided above is by way of example only, and various modifications could be made by one skilled in the art. Furthermore, one skilled in the art will recognize that numerous further modifications and combinations of various aspects are possible. Accordingly, the described aspects are intended to encompass all such alterations, modifications, and variations that fall within the scope of the appended claims.
Claims:
We Claim:
1. A real-time automotive assistance system for lane region of interest detection in a vehicle, the lane ROI alert system (100) comprising:
a. a capturing module (102) comprising at least one camera for capturing video data of a driving environment;
b. a storage medium (108) for storing the captured video data;
c. a frame generation module (112) for processing the video data into individual frames;
d. a data validation module (110) for evaluating the frames based on predetermined criteria to determine valid frames suitable for region of interest calculation;
e. a lane identification segmentation module (114) for identifying lane markers within the valid frames;
f. an evaluation module (116) for computing a lane region of interest from the valid frames; and
g. an alert generation component (120) for generating alerts based on the computed lane region of interest.
2. The system (100) as claimed in claim 1, wherein the capturing module (102) comprises a road-facing camera (104) and a driver-facing camera (106), wherein the road-facing camera (104) captures forward-facing imagery of the driving environment and the driver-facing camera (106) monitors driver behavior within a vehicle cabin.
3. The system (100) as claimed in claim 1, wherein the predetermined criteria comprise lane visibility criteria for ensuring lane markers are visible and not blurred, lane orientation criteria for filtering frames with horizontal lanes, vehicle positioning criteria for confirming vehicles are within a single lane, lane straightness criteria for avoiding curved lanes, lighting condition criteria for excluding night and blurred frames, and traffic consistency criteria for avoiding frames with multiple moving vehicles.
4. The system (100) as claimed in claim 1, wherein the storage medium (108) comprises onboard memory and cloud storage for providing data redundancy and immediate access to recent video data.
5. The system (100) as claimed in claim 1, wherein the evaluation module (116) computes the lane region of interest as four coordinates that define drivable area boundaries for vehicle operation.
6. The system (100) as claimed in claim 1, wherein the evaluation module (116) executes vehicle-specific region of interest criteria based on an identified vehicle type.
7. The system (100) as claimed in claim 6, wherein the vehicle-specific region of interest criteria for passenger cars involve precision-focused parameters that accommodate tight maneuverability requirements and enable rapid alert generation.
8. The system (100) as claimed in claim 6, wherein the vehicle-specific region of interest criteria for commercial trucks comprise stability-oriented parameters that accommodate vehicle size and weight for safe lane changes and larger turning radii.
9. A method (200) for lane region of interest detection in a vehicle, the method comprising:
a. capturing video data of a driving environment by a capturing module (102);
b. processing the video data into individual frames through a frame generation module (112);
c. evaluating the frames through a data deviation and validation module (110) based on predetermined criteria to determine valid frames suitable for region of interest calculation;
d. implementing the predetermined criteria, including lane visibility criteria for ensuring lane markers are visible and not blurred, and lane orientation criteria for filtering frames with horizontal lanes, by a lane identification segmentation module (114);
e. identifying lane markers within the valid frames;
f. computing a lane region of interest from the valid frames and identifying a vehicle type through an evaluation module (116);
g. executing vehicle-specific region of interest criteria based on the identified vehicle type through a lane ROI output (118); and
h. generating alerts through an alert generation component (120) based on the computed lane region of interest.
10. The method as claimed in claim 9, wherein computing the lane region of interest includes aggregating lane detection results from the valid frames to determine a consistent lane region of interest.
Dated this 9th day of September 2025