Abstract: Disclosed is a vehicular system (100) for adaptive analysis of vehicular environment through edge computing. The system includes an imaging device (102) for capturing visual data, an edge device (104) with a processing unit (110), and an edge server (106) with processing circuitry (114). The processing unit (110) is configured to receive image frames, perform preprocessing, execute a pipelined architecture for lane detection and obstacle avoidance, and generate real-time control signals. The processing circuitry (114) is configured to receive data from the edge device (104), perform digital twin rendering, synchronize digital twin data across multiple edge servers, execute distributed task scheduling, adaptively trigger retraining of a quantized neural network for lane detection and a MobileNet-based model for obstacle detection, and perform adaptive model training based on environmental conditions.
Description:
FIELD OF DISCLOSURE
The present disclosure relates to vehicular vision systems and edge computing, and more particularly, to a vehicular system and method for adaptive vision processing and edge computing.
BACKGROUND
The field of vehicular systems and intelligent transportation has seen significant advancements in recent years, driven by the increasing demand for safer, more efficient, and autonomous driving experiences. These systems incorporate various technologies including computer vision, edge computing, and artificial intelligence to enhance vehicle performance and decision-making capabilities.
Conventional vehicular systems often rely on centralized processing architectures, where data from multiple sensors is transmitted to a central unit for analysis and decision-making. This approach can lead to increased latency and reduced real-time responsiveness, particularly in scenarios requiring rapid reactions to changing road conditions or potential hazards. Additionally, many existing systems struggle to maintain consistent performance across diverse environmental conditions, such as low-light situations or adverse weather.
Current technologies for lane detection and obstacle avoidance frequently utilize separate processing pipelines, potentially leading to inefficiencies in computational resource utilization and increased power consumption. Furthermore, the integration of edge computing capabilities in vehicular systems has been limited, often resulting in suboptimal distribution of computational tasks between on-board devices and external servers. This can impact the system's ability to adapt to varying computational demands and network conditions.
Therefore, there exists a need for a technical solution that addresses the aforementioned challenges of conventional systems and methods for vehicular vision processing.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In an aspect of the present disclosure, a vehicular system for adaptive vision processing and edge computing is disclosed. The system includes an imaging device configured to capture visual data in low-light conditions. The system further includes an edge device that includes a processing unit configured to receive image frames from the imaging device, perform multi-modal preprocessing on the received image frames, execute a pipelined architecture for simultaneous lane detection and obstacle avoidance, adaptively trigger retraining of a deep neural network based on detected accuracy drops below a predetermined threshold, and generate real-time control signals for vehicle maneuvering based on the detected lanes and obstacles. The system further includes an edge server that includes processing circuitry configured to receive data from the edge device, perform digital twin rendering of real-time vehicle movement using a physics-based simulation model, synchronize the digital twin data across multiple edge servers, execute distributed task scheduling for incoming compute-intensive tasks using a priority-based load balancing algorithm, and perform adaptive model training based on environmental conditions.
In some aspects of the present disclosure, the quantized neural network for lane detection utilizes 8-bit weight precision, and the MobileNet-based model for obstacle detection is a MobileNet-V2 model.
In some aspects of the present disclosure, the noise reduction is performed using a bilateral filter, and the contrast enhancement is performed using adaptive histogram equalization.
In some aspects of the present disclosure, the predetermined threshold for triggering retraining of the deep neural network is a 95% confidence threshold.
In some aspects of the present disclosure, the edge device includes a Jetson Nano for on-board processing and the edge server includes a Jetson Orin for distributed computing.
In an aspect of the present disclosure, a method for adaptive vision processing and edge computing in a vehicular system is disclosed. The method includes capturing visual data in low-light conditions using an imaging device. At an edge device, the method includes receiving image frames from the imaging device, performing multi-modal preprocessing on the received image frames, executing a pipelined architecture for simultaneous lane detection and obstacle avoidance, adaptively triggering retraining of a deep neural network based on detected accuracy drops below a 95% confidence threshold, and generating real-time control signals for vehicle maneuvering using a model predictive control algorithm. At an edge server, the method includes receiving data from the edge device via a communication network using a low-latency protocol, performing digital twin rendering of real-time vehicle movement using a physics-based simulation model with sub-centimeter accuracy, synchronizing the digital twin data across multiple edge servers, executing distributed task scheduling for incoming compute-intensive tasks using a priority-based load balancing algorithm with dynamic resource allocation, and performing adaptive model training based on environmental conditions.
In some aspects of the present disclosure, the low-latency protocol for receiving data from the edge device is a User Datagram Protocol (UDP) with forward error correction.
In some aspects of the present disclosure, the method further includes managing dynamic handover of computational tasks between edge servers based on vehicle mobility and network conditions.
In some aspects of the present disclosure, the model predictive control algorithm for generating real-time control signals incorporates constraints on vehicle dynamics and road conditions.
In some aspects of the present disclosure, the imaging device includes a Waveshare IMX219 8MP 160° FOV Camera without night vision support, and the method further includes applying software-based low-light enhancement techniques to the captured visual data.
The foregoing general description of the illustrative aspects and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF FIGURES
The following detailed description of the preferred aspects of the present disclosure will be better understood when read in conjunction with the appended drawings. The present disclosure is illustrated by way of example, and not limited by the accompanying figures, in which like references indicate similar elements.
FIG. 1 illustrates a block diagram of a vehicular system for processing image data, according to aspects of the present disclosure;
FIG. 2 illustrates a block diagram of a processing unit of the vehicular system, according to aspects of the present disclosure;
FIG. 3 illustrates a block diagram of an edge server of the vehicular system, according to aspects of the present disclosure; and
FIG. 4 illustrates a flowchart of a method for adaptive vision processing and edge computing in a vehicular system, according to aspects of the present disclosure.
DETAILED DESCRIPTION
The following description sets forth exemplary aspects of the present disclosure. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure. Rather, the description also encompasses combinations and modifications to those exemplary aspects described herein.
The present disclosure relates to vehicular vision systems and edge computing, specifically to a system and method for adaptive vision processing in low-light conditions. The disclosure provides an innovative approach to real-time lane detection, obstacle avoidance, and vehicle control using a combination of edge devices and edge servers. By leveraging advanced preprocessing techniques, pipelined architectures, and adaptive neural network training, the system enhances vehicular safety and performance in challenging environmental conditions.
The system incorporates multi-modal preprocessing of image frames, utilizing techniques such as noise reduction and contrast enhancement to improve visual data quality. A pipelined architecture enables simultaneous lane detection and obstacle avoidance, employing quantized neural networks and MobileNet-based models for efficient processing. The system's adaptive nature allows for real-time retraining of deep neural networks based on performance metrics, ensuring optimal accuracy in varying conditions.
Furthermore, the disclosure integrates edge computing capabilities, distributing computational tasks between on-board devices and edge servers. This approach enables advanced features such as digital twin rendering, distributed task scheduling, and adaptive model training based on environmental conditions. The use of distributed ledger technology for synchronizing digital twin data across multiple edge servers enhances system reliability and data integrity.
Key advantages of the disclosure include improved low-light performance, reduced latency in decision-making processes, and enhanced adaptability to diverse driving conditions. The system's modular architecture allows for easy integration with existing vehicular systems and provides a scalable solution for future autonomous driving technologies.
FIG. 1 illustrates a block diagram of a vehicular system for processing image data, according to aspects of the present disclosure. The vehicular system 100 may include an imaging device 102, an edge device 104, an edge server 106, and a communication network 108.
The imaging device 102 may be configured to capture visual data in low-light conditions. The imaging device 102 may be coupled to the edge device 104 and may be configured to transmit captured image frames to the edge device 104.
The edge device 104 may include a processing unit 110 and a communication interface 112. The processing unit 110 may be configured to receive image frames from the imaging device 102, perform multi-modal preprocessing on the received image frames, execute a pipelined architecture for simultaneous lane detection and obstacle avoidance, adaptively trigger retraining of a deep neural network, and generate real-time control signals for vehicle maneuvering.
The edge server 106 may include processing circuitry 114 and an edge database 116. The processing circuitry 114 may be configured to receive data from the edge device 104 via the communication network 108, perform digital twin rendering of real-time vehicle movement, synchronize digital twin data across multiple edge servers, execute distributed task scheduling for incoming compute-intensive tasks, and perform adaptive model training based on environmental conditions.
The communication network 108 may facilitate data exchange between the edge device 104 and the edge server 106. The communication network 108 may include suitable logic, circuitry, and interfaces that may be configured to provide a plurality of network ports and a plurality of communication channels for transmission and reception of data related to operations of various entities in the system 100. In some aspects, the communication network 108 may include technologies like IEEE 802.11ac (WiFi) 5GHz with RTSP server on the edge device for streaming the received image frames.
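By way of a non-limiting illustrative sketch, the following Python snippet shows how the RTSP stream published by the edge device 104 could be consumed over the 5GHz WiFi link using OpenCV. The stream address and path are hypothetical placeholders, and OpenCV built with FFmpeg/GStreamer support is assumed; the disclosure does not mandate this particular implementation.

```python
# Minimal sketch: reading the RTSP stream published by the edge device (104)
# over the 5 GHz WiFi link. The URL and stream path are hypothetical
# placeholders, not values defined by the disclosure.
import cv2

RTSP_URL = "rtsp://192.168.1.50:8554/frontcam"  # hypothetical edge-device address

def read_stream(url: str) -> None:
    cap = cv2.VideoCapture(url)          # requires OpenCV built with FFmpeg/GStreamer
    if not cap.isOpened():
        raise RuntimeError(f"Cannot open RTSP stream at {url}")
    try:
        while True:
            ok, frame = cap.read()       # one BGR image frame per iteration
            if not ok:
                break
            # hand the frame to downstream preprocessing / digital twin rendering
            print("frame", frame.shape)
    finally:
        cap.release()

if __name__ == "__main__":
    read_stream(RTSP_URL)
```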
Examples of the imaging device 102 may include, but are not limited to, a camera, an infrared camera, a thermal imaging camera, a night vision camera, or the like. Aspects of the present disclosure are intended to include and/or otherwise cover any type of the imaging device 102 including known, related art, and/or later developed technologies.
Examples of the edge device 104 and the edge server 106 may include, but are not limited to, a Jetson Nano for on-board processing and a Jetson Orin for distributed computing, respectively. Aspects of the present disclosure are intended to include and/or otherwise cover any type of the edge device 104 and the edge server 106 including known, related art, and/or later developed technologies.
Although FIG. 1 illustrates that the system 100 includes a single edge device 104, it will be apparent to a person skilled in the art that the scope of the present disclosure is not limited to it. In various other aspects, the system 100 may include multiple edge devices without deviating from the scope of the present disclosure. In such a scenario, each edge device is configured to perform one or more operations in a manner similar to the operations of the edge device 104 as described herein.
In operation, the vehicular system 100 captures visual data in low-light conditions using the imaging device 102, processes the captured data at the edge device 104 for lane detection and obstacle avoidance, generates real-time control signals for vehicle maneuvering, and utilizes the edge server 106 for advanced computations and data synchronization across multiple vehicles.
FIG. 2 illustrates a block diagram of a processing unit of the vehicular system, according to aspects of the present disclosure. The processing unit 110 may include a frame capture engine 200, a preprocessing engine 202, a lane detection engine 204, an obstacle detection engine 206, a decision-making engine 208, a control interface engine 210, and a communication bus 212.
The frame capture engine 200 may be configured to obtain image frames from the imaging device 102. The frame capture engine 200 may be coupled to the preprocessing engine 202 via the communication bus 212 and may be configured to transmit the obtained image frames to the preprocessing engine 202.
The preprocessing engine 202 may be configured to perform the preprocessing on the received image frames. In some aspects of the present disclosure, during preprocessing, the preprocessing engine 202 may be configured to filter the received image frames so as to retain frames that are non-blurred and/or that are not degraded. In some aspects of the present disclosure, the preprocessing engine 202 may be a CNN-based fusion framework that is capable of detecting and eliminating hidden degradations in the received image frames. In some aspects of the present disclosure, the preprocessing engine 202 may have an architecture that includes a denoising module (not shown) and/or an enhancement module (not shown). The denoising module may be configured to suppress noise artifacts in the received image frames and may enhance perceptual quality of the received image frames. The enhancement module may be configured to adjust and refine illumination conditions in the received image frames.
In some other aspects of the present disclosure, the preprocessing engine 202 may have any other architecture, without deviating from the scope of the present disclosure.
The preprocessing engine 202 may be coupled to the lane detection engine 204 and the obstacle detection engine 206 via the communication bus 212. The preprocessing engine 202 may be configured to transmit the preprocessed image frames to the lane detection engine 204 and the obstacle detection engine 206.
The lane detection engine 204 may be configured to analyze the preprocessed frames to detect lane markings and boundaries. The lane detection engine 204 may utilize a quantized neural network for lane detection. The lane detection engine 204 may be coupled to the decision making engine 208 via the communication bus 212 and may be configured to transmit the detected lane information to the decision making engine 208.
The obstacle detection engine 206 may be configured to identify potential obstacles in the vehicle's path. The obstacle detection engine 206 may utilize a MobileNet-based model for obstacle detection. The obstacle detection engine 206 may be coupled to the decision making engine 208 via the communication bus 212 and may be configured to transmit the identified obstacle information to the decision making engine 208.
In some aspects, the lane detection engine 204 and the obstacle detection engine 206 may include a fusion module (not shown) that may include an image attention unit. The image attention unit may be configured to prioritize critical image regions of the image frames for fusion.
In another aspect, the lane detection engine 204 and the obstacle detection engine 206 may execute a pipelined architecture for simultaneous lane detection and obstacle avoidance such that the pipelined architecture may include the quantized neural network for lane detection and the MobileNet-based model for obstacle detection.
In some aspects of the present disclosure, the quantized neural network for lane detection may utilize 8-bit weight precision. In other aspects, the MobileNet-based model for obstacle detection may be a MobileNet-V2 model.
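A non-limiting illustrative sketch of these two models, assuming a PyTorch/torchvision implementation (not mandated by the disclosure), is given below; dynamic quantization is used here merely as one way to obtain 8-bit weights, and the lane-detection topology is a hypothetical stand-in.

```python
# Illustrative sketch (framework assumed: PyTorch/torchvision) of the two
# pipeline models: a MobileNet-V2 backbone for obstacle detection and a
# lane-detection network reduced to 8-bit weights via dynamic quantization.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

# Obstacle-detection backbone (the classification head would be replaced in practice).
obstacle_model = mobilenet_v2(num_classes=10).eval()

# Stand-in lane-detection network; the disclosure does not fix its topology.
lane_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 8),           # e.g. coefficients of left/right lane polynomials
).eval()

# 8-bit weight precision via dynamic quantization of the linear layers.
lane_model_int8 = torch.quantization.quantize_dynamic(
    lane_model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    frame = torch.rand(1, 3, 224, 224)      # one preprocessed image frame
    obstacles = obstacle_model(frame)
    lanes = lane_model_int8(frame)
print(obstacles.shape, lanes.shape)
```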
The decision making engine 208 may be configured to process inputs from the lane detection engine 204 and the obstacle detection engine 206 to determine appropriate vehicle responses. The decision making engine 208 may be coupled to the control interface engine 210 via the communication bus 212 and may be configured to transmit the determined vehicle responses to the control interface engine 210.
The control interface engine 210 may be configured to interface with vehicle control systems to implement the decisions made by the decision making engine 208. The control interface engine 210 may generate real-time control signals for vehicle maneuvering based on the determined vehicle responses.
The communication bus 212 may enable data exchange between all components within the processing unit 110, allowing coordinated operation of the various engines.
Although FIG. 2 illustrates that the processing unit 110 includes a single lane detection engine 204 and a single obstacle detection engine 206, it will be apparent to a person skilled in the art that the scope of the present disclosure is not limited to it. In various other aspects, the processing unit 110 may include multiple lane detection engines and multiple obstacle detection engines without deviating from the scope of the present disclosure.
FIG. 3 illustrates a block diagram of an edge server, according to aspects of the present disclosure. The edge server 106 may include a network interface 300, an input output interface 302, a data connection 304, an edge database 116, a validation engine 306, a training engine 308, a digital twin engine 310, a first synchronization engine 312, a second synchronization engine 314, a third synchronization engine 316, and a communication bus 318.
The network interface 300 may be configured to establish and enable communication between the edge server 106 and different elements of the system 100, via the communication network 108. The network interface 300 may be implemented by use of various known technologies to support wired or wireless communication of the edge server 106 with the communication network 108.
The input output interface 302 may include suitable logic, circuitry, interfaces, and/or code that may be configured to receive inputs and transmit the edge server's outputs via a plurality of data ports in the edge server 106. The input output interface 302 may include various input and output data ports for different I/O devices.
The data connection 304 may connect the components to the edge database 116. The data connection 304 may facilitate data transfer between the processing circuitry 114 and the edge database 116.
The processing circuitry 114 may be configured to execute various operations associated with the edge server 106. The processing circuitry 114 may include the validation engine 306, the training engine 308, the digital twin engine 310, the first synchronization engine 312, the second synchronization engine 314, and the third synchronization engine 316.
The validation engine 306 may be configured to communicate with the preprocessing engine 202, the lane detection engine 204, the obstacle detection engine 206, and the training engine 308. The validation engine 306 may be configured to monitor the output from the preprocessing engine 202 and the outputs from the lane detection engine 204 and the obstacle detection engine 206. Specifically, the validation engine 306 may be configured to check whether the lane markings and the boundaries, as well as the potential obstacles, in the preprocessed image frames received from the preprocessing engine 202 are accurately identified or detected by the lane detection engine 204 and the obstacle detection engine 206. The validation engine 306 may detect an accuracy drop in the lane detection engine 204 and the obstacle detection engine 206 by comparing the detected accuracy of the lane detection engine 204 and the obstacle detection engine 206 with a predetermined threshold. In a preferred aspect, the predetermined threshold may be a confidence threshold. Upon detecting the accuracy drop in the lane detection engine 204 and the obstacle detection engine 206, preferably when the accuracy falls below a 95% confidence threshold, the validation engine 306 may be configured to trigger a signal to the training engine 308 for retraining of the lane detection engine 204 and the obstacle detection engine 206. The validation engine 306 may be coupled to the training engine 308 and the digital twin engine 310 via the communication bus 318.
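A non-limiting illustrative sketch of the accuracy-drop check performed by the validation engine 306 is given below; the moving-average window and the helper names are assumptions made only for illustration.

```python
# Minimal sketch of the validation check: a running confidence estimate for each
# detector is compared against the 95% threshold, and a retraining signal is
# raised when it drops below that threshold. Names and the moving-average
# window are illustrative assumptions, not part of the disclosure.
from collections import deque

CONFIDENCE_THRESHOLD = 0.95

class ValidationEngine:
    def __init__(self, window: int = 100):
        self.scores = {"lane": deque(maxlen=window), "obstacle": deque(maxlen=window)}

    def report(self, detector: str, confidence: float) -> bool:
        """Record one detection confidence; return True if retraining is needed."""
        buf = self.scores[detector]
        buf.append(confidence)
        mean_conf = sum(buf) / len(buf)
        return mean_conf < CONFIDENCE_THRESHOLD

engine = ValidationEngine()
if engine.report("lane", 0.82):
    print("trigger retraining of the lane detection model")
```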
The training engine 308 may be configured to perform adaptive model training of the lane detection engine 204 and the obstacle detection engine 206 based on the signal from the validation engine 306 and the environmental conditions, including dynamic adjustment of learning rates using cyclic learning rate scheduling and adaptive batch sizes based on available computational resources. The training engine 308 may be coupled to the digital twin engine 310 via the communication bus 318.
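The following Python sketch illustrates, under the assumption of a PyTorch-based implementation (not mandated by the disclosure), how cyclic learning rate scheduling and a resource-dependent batch size could be realized; the numeric bounds and memory figures are illustrative only.

```python
# Sketch (PyTorch assumed) of the training-engine behaviour described above:
# a cyclic learning-rate schedule and a batch size chosen from the memory
# currently free. The byte figures and bounds are illustrative assumptions.
import torch

def pick_batch_size(free_bytes: int, bytes_per_sample: int = 4_000_000,
                    lo: int = 8, hi: int = 128) -> int:
    """Adapt the batch size to the computational resources available."""
    return max(lo, min(hi, free_bytes // bytes_per_sample))

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=200  # cyclic schedule
)

batch_size = pick_batch_size(free_bytes=512_000_000)
for step in range(5):                      # stand-in training loop
    x, y = torch.rand(batch_size, 16), torch.rand(batch_size, 2)
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```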
The digital twin engine 310 may be configured to perform digital twin rendering of real-time vehicle movement using a physics-based simulation model with sub-centimeter accuracy. In some aspects of the present disclosure, the physics-based simulation model may be based on parameters associated with the vehicle such as velocity, acceleration, turning angles, time logs, or the like. Aspects of the present disclosure are intended to include or otherwise may cover any other parameters known to a person skilled in the art, without deviating from the scope of the present disclosure.
In some aspects, while performing digital twin rendering, the digital twin engine 310 may be configured to receive image frames from the imaging device 102 in real time and render the real-time feed, i.e., the image frames, in real time to a user such as a traffic administrator. In some aspects, upon receiving the rendered real-time feed, the user may be able to perform advanced analytics with the help of a relevant model (not shown) that is executed by the digital twin engine 310. Examples of the relevant model may include, but are not limited to, a vehicle counting model, a no-license-plate detection model, and a malicious vehicle detection model. Aspects of the present disclosure are intended to include or otherwise cover any relevant model including known, related art, and/or later developed models, without deviating from the scope of the present disclosure.
In some aspects of the present disclosure, the digital twin engine 310 may use techniques such as Simulation of Urban Mobility (SUMO) and MATLAB to perform the digital twin rendering. Aspects of the present disclosure are intended to include or otherwise cover any other technique for performing the digital twin rendering, known to a person skilled in the art without deviating from the scope of the present disclosure.
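By way of a non-limiting illustration, the following sketch drives a SUMO simulation through the TraCI Python API; the configuration file name is a hypothetical placeholder, and a deployed digital twin engine 310 would replay the vehicle parameters received from the edge device 104 rather than print them.

```python
# Hedged sketch of driving the SUMO side of the digital twin through the TraCI
# Python API. The scenario file name is hypothetical; real deployments would
# feed back the parameters (velocity, acceleration, turning angle) received
# from the edge device.
import traci

traci.start(["sumo", "-c", "twin_scenario.sumocfg"])  # hypothetical scenario file
try:
    for _ in range(100):
        traci.simulationStep()                       # advance the simulation
        for veh_id in traci.vehicle.getIDList():
            pos = traci.vehicle.getPosition(veh_id)  # (x, y) in network coordinates
            speed = traci.vehicle.getSpeed(veh_id)   # m/s
            print(veh_id, pos, speed)
finally:
    traci.close()
```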
The digital twin engine 310 may be coupled to the first synchronization engine 312, the second synchronization engine 314, and the third synchronization engine 316 via the communication bus 318.
The first synchronization engine 312, the second synchronization engine 314, and the third synchronization engine 316 may be configured to synchronize the digital twin data across multiple edge servers using distributed ledger technology. These synchronization engines may coordinate data flow between the various components of the vehicular system 100.
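As a non-limiting illustration of the distributed-ledger-style synchronization referenced above, the following simplified sketch appends each digital twin update as a hash-chained record so that replicas on other edge servers can verify that they hold the same history; it is a stand-in for a full ledger or consensus protocol, and its class and field names are assumptions.

```python
# Simplified stand-in for distributed-ledger synchronization of digital twin
# data: each update is appended as a hash-chained record.
import hashlib, json, time

class TwinLedger:
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

    def append(self, twin_state: dict) -> dict:
        prev = self.blocks[-1]
        prev_hash = hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
        block = {"index": prev["index"] + 1, "prev": prev_hash,
                 "ts": time.time(), "data": twin_state}
        self.blocks.append(block)
        return block

ledger = TwinLedger()
ledger.append({"vehicle": "v1", "x": 12.4, "y": 3.1, "speed": 8.2})
```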
The communication bus 318 may facilitate data transfer between the validation engine 306, training engine 308, digital twin engine 310, and the three synchronization engines 312, 314, and 316.
The edge database 116 may be configured to store logic, instructions, circuitry, interfaces, and/or codes of the processing circuitry 114 to enable the processing circuitry 114 to execute the one or more operations associated with the edge server 106. The edge database 116 may be further configured to store therein, data associated with the edge server 106, and the like.
Examples of the edge database 116 may include but are not limited to, a Relational database, a NoSQL database, a Cloud database, an Object oriented database, and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the edge database 116 including known, related art, and/or later developed technologies.
Although FIG. 3 illustrates that the edge server 106 includes three synchronization engines, it will be apparent to a person skilled in the art that the scope of the present disclosure is not limited to it. In various other aspects, the edge server 106 may include a different number of synchronization engines without deviating from the scope of the present disclosure.
FIG. 4 illustrates a flowchart of a method for adaptive vision processing and edge computing in a vehicular system, according to aspects of the present disclosure. The method 400 includes steps 402 to 420.
At step 402, the system 100 captures visual data in low-light conditions using the imaging device 102. The imaging device 102 may be a camera specifically designed for low-light performance.
At step 404, the edge device 104 receives image frames from the imaging device 102. The frame capture engine 200 of the processing unit 110 may be responsible for obtaining these image frames.
At step 406, the edge device 104 performs preprocessing on the received image frames by way of the preprocessing engine 202 of the processing unit 110. In some aspects, the preprocessing may include noise reduction using a bilateral filter and contrast enhancement using adaptive histogram equalization.
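A non-limiting illustrative sketch of this preprocessing step using OpenCV is given below; the filter and tile parameters are assumptions rather than values fixed by the disclosure.

```python
# Minimal sketch of the preprocessing named in this step: bilateral filtering
# for noise reduction followed by CLAHE (a form of adaptive histogram
# equalization) for contrast enhancement. Parameters are illustrative only.
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    # Edge-preserving noise reduction.
    denoised = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    # Contrast enhancement on the luminance channel only.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in image frame
enhanced = preprocess(frame)
```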
At step 408, the edge device 104 executes a pipelined architecture for simultaneous lane detection and obstacle avoidance by way of the lane detection engine 204 and the obstacle detection engine 206 of the processing unit 110. This step may involve using a quantized neural network with 8-bit weight precision for lane detection and a MobileNet-V2 model for obstacle detection.
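One possible, non-limiting realization of such a pipelined architecture is sketched below, in which the same preprocessed frame is dispatched concurrently to a lane-detection worker and an obstacle-detection worker; the model objects are placeholders for the quantized and MobileNet-based networks.

```python
# Sketch of one way to realise the pipelined architecture of this step: each
# detector runs in its own worker thread and both receive the same frame.
import queue, threading

def worker(name: str, model, in_q: queue.Queue, out_q: queue.Queue) -> None:
    while True:
        frame = in_q.get()
        if frame is None:                     # poison pill terminates the stage
            break
        out_q.put((name, model(frame)))

lane_q, obst_q, results = queue.Queue(), queue.Queue(), queue.Queue()
lane_model = lambda f: "lane output"          # placeholder for the quantized lane network
obstacle_model = lambda f: "obstacle output"  # placeholder for the MobileNet-based model

threads = [
    threading.Thread(target=worker, args=("lane", lane_model, lane_q, results)),
    threading.Thread(target=worker, args=("obstacle", obstacle_model, obst_q, results)),
]
for t in threads:
    t.start()

frame = "preprocessed_frame"                  # stand-in for a real image array
lane_q.put(frame)
obst_q.put(frame)                             # the same frame enters both branches
print(results.get())
print(results.get())                          # outputs of both branches

lane_q.put(None)
obst_q.put(None)                              # shut the pipeline down
for t in threads:
    t.join()
```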
At step 410, the edge device 104 generates real-time control signals for vehicle maneuvering based on the detected lanes and obstacles by way of the control interface engine 210 of the processing unit 110. In some aspects, the control signals may be generated using a model predictive control algorithm that incorporates constraints on vehicle dynamics and road conditions.
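A heavily simplified, non-limiting sketch of one model predictive control step is given below, using a kinematic bicycle model and a bound on the steering angle as a stand-in for the vehicle-dynamics and road-condition constraints; the horizon length, wheelbase, and cost weights are assumptions.

```python
# Simplified MPC step: a short horizon of steering angles is optimised against
# a kinematic bicycle model, with a bound constraint standing in for the
# vehicle-dynamics constraints. All numeric parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

DT, WHEELBASE, HORIZON = 0.1, 2.5, 10      # s, m, steps

def rollout(steer_seq, state, v=8.0):
    """Propagate (x, y, heading) under the steering sequence; return tracking cost."""
    x, y, yaw = state
    cost = 0.0
    for delta in steer_seq:
        x += v * np.cos(yaw) * DT
        y += v * np.sin(yaw) * DT
        yaw += v / WHEELBASE * np.tan(delta) * DT
        cost += y**2 + 0.1 * delta**2      # track the lane centre at y = 0
    return cost

state = (0.0, 0.8, 0.05)                   # 0.8 m lateral offset, slight heading error
res = minimize(rollout, x0=np.zeros(HORIZON), args=(state,),
               bounds=[(-0.5, 0.5)] * HORIZON)   # steering limited to +/- 0.5 rad
steering_command = res.x[0]                # only the first move is applied
print(f"steering command: {steering_command:.3f} rad")
```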
At step 412, the edge server 106 receives data from the edge device 104 and the imaging device 102 by way of the processing circuitry 114. In some aspects, the data transfer may occur via a low-latency protocol such as User Datagram Protocol (UDP) with forward error correction.
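By way of a non-limiting illustration, the following sketch sends data over UDP with a simple XOR-parity forward error correction scheme, in which one parity packet follows every group of data packets; the endpoint address and group size are hypothetical.

```python
# Sketch of UDP transfer with a very simple forward-error-correction scheme:
# every group of k data packets is followed by one XOR parity packet, so a
# single lost packet per group can be reconstructed at the edge server.
import socket
from functools import reduce

EDGE_SERVER = ("192.168.1.100", 5005)      # hypothetical edge-server endpoint
GROUP = 4                                  # k data packets per parity packet

def send_with_parity(payloads: list[bytes]) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(0, len(payloads), GROUP):
        group = [p.ljust(1024, b"\0") for p in payloads[i:i + GROUP]]
        for pkt in group:
            sock.sendto(pkt, EDGE_SERVER)
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), group)
        sock.sendto(parity, EDGE_SERVER)   # receiver XORs survivors to recover one loss
    sock.close()

send_with_parity([b"frame-metadata-%d" % n for n in range(8)])
```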
At step 414, the edge server 106 performs digital twin rendering of real-time vehicle movement by way of the processing circuitry 114. This rendering may use a physics-based simulation model with sub-centimeter accuracy.
At step 416, the edge server 106 synchronizes the digital twin data across multiple edge servers by way of the processing circuitry 114.
At step 418, the edge server 106 may adaptively trigger retraining of the quantized neural network and the MobileNet-based model based on detected accuracy drops below the predetermined threshold by way of the processing circuitry 114.
At step 420, the edge server 106, by way of the processing circuitry 114, may perform adaptive model training of the quantized neural network and the MobileNet-based model based on environmental conditions, including dynamic adjustment of learning rates using cyclic learning rate scheduling and adaptive batch sizes based on available computational resources.
In some aspects of the present disclosure, the method 400 may further include managing dynamic handover of computational tasks between edge servers based on vehicle mobility and network conditions. This ensures continuous and efficient processing as the vehicle moves through different network coverage areas.
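A non-limiting illustrative sketch of such a handover decision is given below, scoring candidate edge servers by their distance to the vehicle and their current load; the scoring weights and server records are hypothetical.

```python
# Illustrative handover decision: a task moves to another edge server when that
# server is both closer to the vehicle and less loaded. Weights are assumptions.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    position_km: float      # position along the route
    load: float             # 0.0 (idle) .. 1.0 (saturated)

def choose_server(vehicle_km: float, servers: list[EdgeServer]) -> EdgeServer:
    # Lower score = better: weighted distance to the vehicle plus current load.
    return min(servers, key=lambda s: abs(s.position_km - vehicle_km) + 5.0 * s.load)

servers = [EdgeServer("edge-A", 2.0, 0.7), EdgeServer("edge-B", 6.5, 0.2)]
print(choose_server(vehicle_km=5.8, servers=servers).name)   # hands over to edge-B
```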
In some aspects, the predetermined threshold for triggering retraining of the quantized neural network and the MobileNet-based model may be a 95% confidence threshold.
In some aspects, the edge device 104 may include a Jetson Nano for on-board processing and the edge server 106 may include a Jetson Orin for distributed computing.
Although the method 400 is described sequentially, it will be apparent to a person skilled in the art that some steps may be performed in parallel or in a different order without deviating from the scope of the present disclosure.
Thus, the system 100 and the method 400 provide several significant technical advantages. The multi-modal preprocessing with noise reduction and contrast enhancement improves image quality in low-light conditions, enhancing the system's ability to detect lanes and obstacles accurately. The pipelined architecture, utilizing a quantized neural network and MobileNet-based model, enables simultaneous lane detection and obstacle avoidance with improved efficiency and reduced computational overhead. The adaptive retraining mechanism ensures the system maintains high accuracy across varying environmental conditions. The integration of edge computing with digital twin rendering and distributed task scheduling allows for real-time processing and decision-making, reducing latency and improving overall system responsiveness. Furthermore, the use of distributed ledger technology for synchronizing digital twin data across multiple edge servers enhances data integrity and system reliability. These technical advancements collectively result in a more robust, efficient, and adaptable vehicular vision system capable of improved performance in diverse driving scenarios.
Aspects of the present disclosure are discussed here with reference to flowchart illustrations and block diagrams that depict methods, systems, and apparatus in accordance with various aspects of the present disclosure. Each block within these flowcharts and diagrams, as well as combinations of these blocks, can be executed by computer-readable program instructions. The various logical blocks, modules, circuits, and algorithm steps described in connection with the disclosed aspects may be implemented through electronic hardware, software, or a combination of both. To emphasize the interchangeability of hardware and software, the various components, blocks, modules, circuits, and steps are described generally in terms of their functionality. The decision to implement such functionality in hardware or software is dependent on the specific application and design constraints imposed on the overall system. A person having ordinary skill in the art can implement the described functionality in different ways depending on the particular application, without deviating from the scope of the present disclosure.
The flowcharts and block diagrams presented in the figures depict the architecture, functionality, and operation of potential implementations of systems, methods, and apparatus according to different aspects of the present disclosure. Each block in the flowcharts or diagrams may represent an engine, segment, or portion of instructions including one or more executable instructions to perform the specified logical function(s). In some alternative implementations, the order of functions within the blocks may differ from what is depicted. For instance, two blocks shown in sequence may be executed concurrently or in reverse order, depending on the required functionality. Each block, and combinations of blocks, can also be implemented using special-purpose hardware-based systems that perform the specified functions or tasks, or through a combination of specialized hardware and software instructions.
Although the preferred aspects have been detailed here, it should be apparent to those skilled in the relevant field that various modifications, additions, and substitutions can be made without departing from the scope of the disclosure. These variations are thus considered to be within the scope of the disclosure as defined in the following claims.
Features or functionalities described in certain example aspects may be combined and re-combined in or with other example aspects. Additionally, different aspects and elements of the disclosed example aspects may be similarly combined and re-combined. Further, some example aspects, individually or collectively, may form components of a larger system where other processes may take precedence or modify their application. Moreover, certain steps may be required before, after, or concurrently with the example aspects disclosed herein. It should be noted that any and all methods and processes disclosed herein can be performed in whole or in part by one or more entities or actors in any manner.
Although terms like "first," "second," etc., are used to describe various elements, components, regions, layers, and sections, these terms should not necessarily be interpreted as limiting. They are used solely to distinguish one element, component, region, layer, or section from another. For example, a "first" element discussed here could be referred to as a "second" element without departing from the teachings of the present disclosure.
The terminology used here is intended to describe specific example aspects and should not be considered as limiting the disclosure. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "includes," "comprising," and "including," as used herein, indicate the presence of stated features, steps, elements, or components, but do not exclude the presence or addition of other features, steps, elements, or components.
As used herein, the term "or" is intended to be inclusive, meaning that "X employs A or B" would be satisfied by X employing A, B, or both A and B. Unless specified otherwise or clearly understood from the context, this inclusive meaning applies to the term "or."
Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the relevant art. Terms should be interpreted consistently with their common usage in the context of the relevant art and should not be construed in an idealized or overly formal sense unless expressly defined here.
The terms "about" and "substantially," as used herein, refer to a variation of plus or minus 10% from the nominal value. This variation is always included in any given measure.
In cases where other disclosures are incorporated by reference and there is a conflict with the present disclosure, the present disclosure takes precedence to the extent of the conflict, or to provide a broader disclosure or definition of terms. If two disclosures conflict, the later-dated disclosure will take precedence.
The use of examples or exemplary language (such as "for example") is intended to illustrate aspects of the disclosure and should not be seen as limiting the scope unless otherwise claimed. No language in the specification should be interpreted as implying that any non-claimed element is essential to the practice of the disclosure.
While many alterations and modifications of the present disclosure will likely become apparent to those skilled in the art after reading this description, the specific aspects shown and described by way of illustration are not intended to be limiting in any way.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
Claims:
1. A vehicular system (100) for adaptive analysis of vehicular environment through edge computing, the vehicular system (100) comprising:
an imaging device (102) configured to capture visual data;
an edge device (104) coupled to the imaging device (102), the edge device (104) comprising a processing unit (110), wherein the processing unit (110) is configured to:
receive image frames from the imaging device (102);
perform preprocessing on the received image frames, wherein the preprocessing comprises noise reduction and contrast enhancement;
execute a pipelined architecture for simultaneous lane detection and obstacle avoidance, wherein the pipelined architecture comprises a quantized neural network for lane detection and a MobileNet-based model for obstacle detection;
generate real-time control signals for vehicle maneuvering based on the detected lanes and obstacles;
an edge server (106) coupled to the edge device (104) and the imaging device (102), the edge server (106) comprising processing circuitry (114), wherein the processing circuitry (114) is configured to:
receive data from the edge device (104) and the imaging device (102);
perform digital twin rendering of real-time vehicle movement using a physics-based simulation model;
synchronize the digital twin data across multiple edge servers;
adaptively trigger retraining of the quantized neural network and the MobileNet-based model based on detected accuracy drops below a predetermined threshold; and
perform adaptive model training of the quantized neural network and the MobileNet-based model based on environmental conditions, including dynamic adjustment of learning rates and batch sizes.
2. The vehicular system (100) as claimed in claim 1, wherein the quantized neural network for lane detection utilizes 8-bit weight precision, and the MobileNet-based model for obstacle detection is a MobileNet-V2 model.
3. The vehicular system (100) as claimed in claim 1, wherein the noise reduction is performed using a bilateral filter, and the contrast enhancement is performed using adaptive histogram equalization.
4. The vehicular system (100) as claimed in claim 1, wherein the predetermined threshold for triggering retraining of the quantized neural network and the MobileNet-based model is a 95% confidence threshold.
5. The vehicular system (100) as claimed in claim 1, wherein the edge device (104) comprises a Jetson Nano for on-board processing and the edge server (106) comprises a Jetson Orin for distributed computing.
6. A method for analysis of vehicular environment through edge computing in a vehicular system (100), comprising:
capturing visual data by way of an imaging device (102);
at an edge device (104):
receiving image frames from the imaging device (102) by way of a processing unit (110);
performing, by way of the processing unit (110), preprocessing on the received image frames, wherein the preprocessing comprises noise reduction and contrast enhancement;
executing, by way of the processing unit (110), a pipelined architecture for simultaneous lane detection and obstacle avoidance, wherein the pipelined architecture comprises a quantized neural network for lane detection and a MobileNet-based model for obstacle detection;
generating, by way of the processing unit (110), real-time control signals for vehicle maneuvering based on the detected lanes and obstacles;
at an edge server (106) that is coupled to the edge device (104) and the imaging device (102):
receiving data from the edge device (104) and the imaging device (102) by way of processing circuitry (114);
performing digital twin rendering of real-time vehicle movement using a physics-based simulation model by way of the processing circuitry (114);
synchronizing the digital twin data across multiple edge servers by way of the processing circuitry (114);
adaptively triggering retraining of the quantized neural network and the MobileNet-based model based on detected accuracy drops below a predetermined threshold by way of the processing circuitry (114);
performing adaptive model training of the quantized neural network and the MobileNet-based model based on environmental conditions, including dynamic adjustment of learning rates and batch sizes by way of the processing circuitry (114).
7. The method as claimed in claim 6, wherein the quantized neural network for lane detection utilizes 8-bit weight precision, and the MobileNet-based model for obstacle detection is a MobileNet-V2 model.
8. The method as claimed in claim 6, wherein the noise reduction is performed using a bilateral filter, and the contrast enhancement is performed using adaptive histogram equalization.
9. The method as claimed in claim 6, wherein the predetermined threshold for triggering retraining of the quantized neural network and the MobileNet-based model is a 95% confidence threshold.
10. The method as claimed in claim 6, wherein the edge device (104) comprises a Jetson Nano for on-board processing and the edge server (106) comprises a Jetson Orin for distributed computing.