System And Method For Vehicle Navigation

Abstract: A method for navigation of a host vehicle (104) includes acquiring sensor signals including information related to a dynamic object (602D) located in the surroundings of the host vehicle (104). The acquired sensor signals are processed to determine state variables of the dynamic object (602D) and associated mean values. The mean values are compared with mean values of historic state variables to identify a designated cluster (402A). Further, future state variables of the dynamic object (602D) are determined based on a main explicit controller associated with the identified cluster (402A). The dynamic object (602D) is classified into a designated category, and a trajectory prediction system (110) configures allocation of a preferred portion of an associated computing power for faster prediction of the future trajectories of the dynamic object (602D). The host vehicle (104) is navigated in near real-time based on the future trajectories of the dynamic object (602D). [Figure 1].

Patent Information

Application #: 201841044149
Filing Date: 23 November 2018
Publication Number: 22/2020
Publication Type: INA
Invention Field: MECHANICAL ENGINEERING
Status:
Email: shery.nair@tataelxsi.co.in
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-08-02
Renewal Date:

Applicants

TATA ELXSI LIMITED
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore, Karnataka, India, Pin Code–560 048.

Inventors

1. RAJESH KODURI
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore, Karnataka, India, Pin Code-560 048.
2. SIVAPRASAD NANDYALA
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore, Karnataka, India, Pin Code-560 048.
3. MITHUN BHASKAR MANALIKANDY
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore, Karnataka, India, Pin Code-560 048.

Specification

BACKGROUND

[0001] Embodiments of the present specification relate generally to a vehicle navigation system. More particularly, the present specification relates to a navigation system and method that predicts future trajectories of dynamic objects using model predictive control systems.

[0002] Autonomous vehicles are capable of sensing the surrounding environment using a perception system. The perception system of an autonomous host vehicle allows the host vehicle to perceive and identify stationary and dynamic objects in the surroundings using a wide variety of on-board sensors such as cameras, RADARs, LIDARs, etc. Examples of stationary objects include parked cars, trees, and buildings. Examples of dynamic objects include pedestrians, bicycles, cars, and other vehicles on the road.

[0003] In addition to identifying stationary and dynamic objects in the surroundings, the perception system of the host vehicle needs to perform various functionalities essential for safe navigation of the host vehicle. For example, the perception system needs to keep tracking the dynamic objects in the surroundings, estimating current positions of the dynamic objects with respect to the host vehicle, and predicting future motion behaviors or trajectories of the dynamic objects continuously or at designated intervals in order to plan a navigation path for the host vehicle.

[0004] Existing perception systems used in autonomous vehicles may have sufficient computing power to perform all of the previously noted functionalities in real-time in a test environment. However, in real-world scenarios, as the surrounding environment becomes complex, the number of dynamic objects to be tracked increases significantly. In such scenarios, existing perception systems may not have sufficient computing power to appropriately perform all perception-related functionalities, such as identifying, tracking, estimating current positions, and predicting future trajectories for multiple moving objects, simultaneously. As a result, the predicted dynamic behavior of objects may not match the real-world dynamic behavior of the objects, thereby leading to unsafe navigation of the autonomous vehicles.

[0005] To mitigate this issue, the perception systems of the autonomous vehicles may be provided with high computing power capabilities, leading to higher associated costs. Alternatively, some of the existing perception systems model and predict future dynamics of moving objects in real-time by considering only a minimal set of sensor information and by ignoring certain other types of sensor information. This is because consideration and processing of all sensor information in real-time may need more computing power, which may not be available with the existing perception systems that are configured to have limited computing power capabilities in order to save costs.

[0006] For example, the perception systems may consider only past and present sensor information associated with the dynamic objects and may not consider corresponding future state information while predicting future trajectories of the dynamic objects. Hence, the perception systems that predict future trajectories of the dynamic objects without having key dynamic object information may not provide appropriate outputs. Control actions taken by the autonomous vehicles based on such outputs may result in undesirable consequences such as rough and aggressive navigation of the autonomous vehicles, collisions, etc.
[0007] Hence, there is a need for a cost-effective yet improved navigation system and method for intelligently detecting, tracking, and predicting future trajectories of dynamic objects with limited computing power capabilities.

BRIEF DESCRIPTION

[0008] It is an objective of the present disclosure to provide a method for navigation of a host vehicle. The method includes acquiring sensor signals including information related to a dynamic object located in a surrounding environment of the host vehicle using one or more on-board sensors of a navigation system deployed in the host vehicle. The navigation system includes a database that stores information generated during off-line training of the navigation system. The information generated during the off-line training includes a plurality of generated clusters including corresponding data points and main explicit controllers associated with the generated clusters. The data points include mean values of historic state variables collected during generation of training data that are used for the off-line training of the navigation system.

[0009] The acquired sensor signals are processed to determine state variables associated with the dynamic object, and mean values of the state variables are computed from the sensor signals acquired over a designated period of time. The computed mean values of the state variables associated with the dynamic object are compared with the mean values of the historic state variables associated with the data points that are pre-stored in the database. A designated cluster is identified from the database including a data point whose mean values of the historic state variables correspond to the computed mean values of the state variables associated with the dynamic object.

[0010] Future state variables associated with the dynamic object are determined based on a main explicit controller associated with the identified cluster. The determined future state variables provide future trajectories of the dynamic object. The dynamic object is classified into a designated category according to a threat level posed by the dynamic object with respect to navigation of the host vehicle in near real-time based on the future state variables provided by the main explicit controller and one or more designated rules. A trajectory prediction system in the navigation system is configured to allocate a preferred portion of an associated computing power for faster prediction of the future trajectories of the dynamic object. The host vehicle is navigated in near real-time based on the future trajectories of the dynamic object.

[0011] The determined state variables may include a relative distance, a relative speed, and a relative orientation between the host vehicle and the dynamic object. The designated category associated with the dynamic object may include a high threat object, a medium threat object, or a low threat object. The computing power allocated for predicting future trajectories of the high threat object may be greater than a respective computing power allocated for the medium threat object and the low threat object. The off-line training of the navigation system includes obtaining the training data including motion patterns of a plurality of dynamic objects captured using on-board sensors of a data-capturing vehicle.
The obtained training data may be used for the off-line training of the trajectory prediction system in the navigation system.

[0012] Ordinary differential equations that model the motion patterns of the dynamic objects are generated. The motion patterns of the dynamic objects include corresponding historic state variables determined based on sensor signals acquired over a selected period of time using the on-board sensors associated with the data-capturing vehicle. Explicit controllers are generated for the generated ordinary differential equation models. Mean values of the historic state variables determined based on the sensor signals acquired over the selected period of time are determined. Random clusters of data points are generated in a mean 3-dimensional space based on the determined mean values of the historic state variables, the generated ordinary differential equation models, and the generated explicit controllers. Each of the randomly generated clusters includes a plurality of data points.

[0013] Each data point in the randomly generated clusters corresponds to a dynamic object selected from the dynamic objects and includes a determined mean relative distance, a determined mean relative speed, and a determined mean relative orientation between the data-capturing vehicle and the dynamic object, an associated ODE model, and an associated explicit controller. Robustness gain margins of each of the explicit controllers in a designated cluster are determined against a subset of ODE models in the designated cluster. An explicit controller that has the largest set of robustness gain margins and stabilizes all ODE models in the designated cluster is identified from the explicit controllers. The identified explicit controller and an ODE model associated with the identified explicit controller are assigned as a main explicit controller and a main ODE model, respectively, of the designated cluster.

[0014] A respective threat level is assigned to each of the data points in the mean 3-dimensional space based on one or more designated rules and associated mean values of the state variables. The information generated off-line, including the generated clusters as polyhedral sets, the main explicit controllers associated with the generated clusters, and threat levels associated with the data points, is stored in the database.

[0015] It is another objective of the present disclosure to provide a system for navigation of a host vehicle including one or more on-board sensors. The system includes a trajectory prediction system in operative communication with a control system. The system further includes a database that is associated with the trajectory prediction system and that stores information generated during off-line training of the trajectory prediction system. The information generated during the off-line training includes a plurality of generated clusters including corresponding data points and main explicit controllers associated with the generated clusters. The data points include mean values of historic state variables collected during generation of training data that are used for the off-line training of the system.

[0016] The trajectory prediction system is configured to process sensor signals acquired using the on-board sensors to determine state variables associated with a dynamic object present in the surrounding environment of the host vehicle.
Moreover, the trajectory prediction system computes mean values of the state variables determined based on the sensor signals acquired over a designated period of time. Further, the trajectory prediction system compares the computed mean values of the state variables associated with the dynamic object with the mean values of the historic state variables associated with the data points that are pre-stored in the database. Further, the trajectory prediction system identifies a designated cluster from the database including a data point whose mean values of the historic state variables correspond to the computed mean values of the state variables associated with the dynamic object.

[0017] Furthermore, the trajectory prediction system determines future state variables associated with the dynamic object based on a main explicit controller associated with the identified cluster. The determined future state variables provide future trajectories of the dynamic object. Additionally, the trajectory prediction system classifies the dynamic object into a designated category according to a threat level posed by the dynamic object with respect to navigation of the host vehicle in near real-time based on the future state variables provided by the main explicit controller and one or more designated rules. The trajectory prediction system configures allocation of a preferred portion of an associated computing power for faster prediction of the future trajectories of the dynamic object and navigates the host vehicle in near real-time based on the future trajectories of the dynamic object.

[0018] The determined state variables include a relative distance, a relative speed, and a relative orientation between the host vehicle and the dynamic object. The designated category associated with the dynamic object includes a high threat object, a medium threat object, or a low threat object. The computing power allocated for predicting future trajectories of the high threat object is greater than a respective computing power allocated for the medium threat object and the low threat object. The system further includes a training system that is deployed in a data-capturing vehicle including the system and that is configured to capture the training data required to train the trajectory prediction system off-line.

[0019] The trajectory prediction system is configured to obtain training data including motion patterns of a plurality of dynamic objects captured using on-board sensors of the data-capturing vehicle. Ordinary differential equations that model the motion patterns of the dynamic objects are generated. The motion patterns of the dynamic objects include corresponding historic state variables determined based on sensor signals acquired over a selected period of time using the on-board sensors associated with the data-capturing vehicle. Explicit controllers are generated for the generated ordinary differential equation models. Mean values of the historic state variables determined based on the sensor signals acquired over the selected period of time are determined. Random clusters of data points are generated in a mean 3-dimensional space based on the determined mean values of the historic state variables, the generated ordinary differential equation models, and the generated explicit controllers. Each of the randomly generated clusters includes a plurality of data points.
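As an editorial illustration of the on-line use of this pre-stored database (paragraphs [0009], [0010], and [0016] to [0018]), the following Python sketch shows one way the lookup and compute allocation could be organized. It is a minimal sketch under stated assumptions: the ClusterEntry layout, the nearest-neighbour match in the mean 3-dimensional space, the helper names, and the 60/30/10 compute split are not taken from the specification.

```python
# Illustrative only; names and the compute split below are assumptions, not the patented implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class ClusterEntry:
    """One pre-stored data point: mean historic state variables plus its cluster's main explicit controller."""
    mean_state: np.ndarray      # [mean relative distance, mean relative speed, mean relative orientation]
    cluster_id: int
    main_controller: object     # pre-computed explicit MPC law assigned to the cluster off-line
    threat_level: str           # 'high', 'medium', or 'low', assigned off-line by designated rules

def mean_state_variables(samples: np.ndarray) -> np.ndarray:
    """Mean of state variables [dst, spd, ang] acquired over a designated period of time."""
    return samples.mean(axis=0)

def find_designated_cluster(mean_state: np.ndarray, database: list[ClusterEntry]) -> ClusterEntry:
    """Return the pre-stored data point whose mean historic state variables are closest."""
    return min(database, key=lambda e: np.linalg.norm(e.mean_state - mean_state))

# Assumed compute-allocation rule: high-threat objects receive the largest share.
COMPUTE_SHARE = {"high": 0.6, "medium": 0.3, "low": 0.1}

def predict_and_allocate(samples: np.ndarray, database: list[ClusterEntry]):
    entry = find_designated_cluster(mean_state_variables(samples), database)
    # The cluster's main explicit controller supplies the future state variables / trajectory.
    return entry.main_controller, COMPUTE_SHARE[entry.threat_level]
```

The key design point mirrored here is that no optimization is solved on-line; the on-board work reduces to averaging sensed state variables and retrieving a pre-computed controller.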
[0020] Each data point in the randomly generated clusters corresponds to a dynamic object selected from the dynamic objects and includes a determined mean relative distance, a determined mean relative speed, and a determined mean relative orientation between the data-capturing vehicle and the dynamic object, an associated ODE model, and an associated explicit controller. The trajectory prediction system is configured to determine robustness gain margins of each of the explicit controllers in a designated cluster against a subset of ODE models in the designated cluster. An explicit controller that has the largest set of robustness gain margins and stabilizes all ODE models in the designated cluster is identified from the explicit controllers. The identified explicit controller and an ODE model associated with the identified explicit controller are assigned as a main explicit controller and a main ODE model, respectively, of the designated cluster.

[0021] A respective threat level is assigned to each of the data points in the mean 3-dimensional space based on one or more designated rules and associated mean values of the state variables. The information generated off-line and including the generated clusters as polyhedral sets, the main explicit controllers associated with the generated clusters, and threat levels associated with the data points is stored in the database. The host vehicle corresponds to one or more of an autonomous vehicle, a semi-autonomous vehicle, a robot, a drone, an airplane, and a watercraft.

DRAWINGS

[0022] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

[0023] FIG. 1 is a block diagram illustrating an exemplary navigation system associated with a host vehicle, in accordance with aspects of the present disclosure;

[0024] FIG. 2 is a schematic view illustrating a plurality of vehicles in which at least one of the vehicles includes the exemplary navigation system of FIG. 1, in accordance with aspects of the present disclosure;

[0025] FIGS. 3A and 3B are flow diagrams illustrating an exemplary method for off-line training of a trajectory prediction system in the navigation system of FIG. 1, in accordance with aspects of the present disclosure;

[0026] FIG. 4 is a graphical representation illustrating clusters of data points randomly generated by the navigation system of FIG. 1 during off-line training using the method depicted in FIGS. 3A-3B, in accordance with aspects of the present disclosure;

[0027] FIG. 5 is a table illustrating exemplary robustness gain margins of explicit controllers determined by the navigation system of FIG. 1 during off-line training, in accordance with aspects of the present disclosure;

[0028] FIG. 6 is a block diagram illustrating a plurality of vehicles including the host vehicle having the navigation system of FIG. 1 for storing information that is generated during off-line training, in accordance with aspects of the present disclosure; and

[0029] FIG. 7 is a flow diagram illustrating a method for navigation of the host vehicle of FIG. 1 by predicting future trajectories of dynamic objects in the surrounding environment of the host vehicle in near real-time, in accordance with aspects of the present disclosure.
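Before turning to the detailed description, the per-cluster selection rule summarized in paragraphs [0013] and [0020] (and tabulated in FIG. 5) can be sketched as follows: pick the explicit controller that stabilizes every ODE model in the cluster and offers the largest robustness gain margins. The dictionary layout, the positive-margin stability test, and the sum-of-margins comparison below are assumptions for illustration, not the specification's procedure.

```python
# Minimal sketch of main-explicit-controller selection within one cluster; layout is assumed.
def select_main_controller(gain_margins: dict[str, dict[str, float]],
                           stability_threshold: float = 0.0) -> str:
    """gain_margins[controller][model] -> robustness gain margin of that controller against that ODE model."""
    candidates = []
    for controller, margins in gain_margins.items():
        # Only controllers that stabilize all ODE models in the cluster are considered.
        if all(m > stability_threshold for m in margins.values()):
            candidates.append((sum(margins.values()), controller))
    if not candidates:
        raise ValueError("No explicit controller stabilizes every ODE model in the cluster")
    # Among stabilizing controllers, keep the one with the largest overall robustness margins.
    return max(candidates)[1]

# Example: three explicit controllers evaluated against three ODE models of one cluster.
margins = {
    "EC1": {"ODE1": 2.1, "ODE2": 0.4, "ODE3": 1.0},
    "EC2": {"ODE1": 1.5, "ODE2": 1.2, "ODE3": 0.9},
    "EC3": {"ODE1": 0.8, "ODE2": -0.1, "ODE3": 1.4},   # fails to stabilize ODE2
}
main_controller = select_main_controller(margins)      # -> "EC2"
```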
DETAILED DESCRIPTION

[0030] The following description presents an exemplary system and method for navigation of a host vehicle by predicting future trajectories of dynamic objects in the surroundings of the host vehicle. Particularly, the embodiments presented herein describe a navigation system and an associated method for predicting future trajectories of surrounding dynamic objects for taking critical navigation decisions in near real-time with minimal computing power requirements. In one embodiment, the navigation system predicts future trajectories of the surrounding dynamic objects based on explicit model predictive control (MPC) systems. Further, the navigation system controls navigation of the host vehicle according to the predicted future trajectories of the surrounding dynamic objects to ensure safe maneuvering of the host vehicle.

[0031] As noted previously, existing on-board systems deployed in host vehicles receive surrounding information from on-board sensors and process the received surrounding information in order to perform perception-related functionalities such as identifying, tracking, estimating current positions, and predicting future trajectories for multiple moving objects, simultaneously. Such on-board systems require a significant amount of computing power to allow for making navigation decisions in real-time when a large number of dynamic objects are present in the surroundings of the host vehicles. Although certain specialized or high-end vehicles may include devices with significant computing capabilities, the majority of vehicles on the road do not include such sophisticated or high-performance systems. The absence of the ability to take critical safety-related decisions in real-time thus endangers the life and health of the occupants of the vehicle and surrounding objects. Unlike such existing systems, embodiments of the present navigation system do not need to perform computationally expensive trajectory prediction operations in real-time. Further, embodiments of the present navigation system need only limited computing power capabilities, yet can accurately predict and track trajectories of surrounding dynamic objects to ensure the safety of the vehicle, its occupants, and surrounding objects.

[0032] To that end, the navigation system is pre-trained off-line to predict future trajectories of dynamic objects in the surroundings of a host vehicle without the need to perform computationally expensive trajectory prediction operations in near real-time. The navigation system is provided with a database that stores information generated off-line, including pre-computed MPC solutions, based on which the navigation system predicts the future trajectories of the dynamic objects in near real-time. Further, since the navigation system predicts the future trajectories of the surrounding dynamic objects based on information already stored in the database, the navigation system does not need to perform computationally expensive path prediction operations in real-time. Therefore, embodiments of the present navigation system do not require high computing power capabilities.

[0033] Nevertheless, the navigation system identifies and classifies the surrounding dynamic objects based on their motion behavior and outputs of pre-computed MPC solutions stored in the database in real-time using limited computing power.
For example, the navigation system classifies the surrounding dynamic objects into a first category of objects whose paths are predictable, a second category of objects whose paths are unpredictable, and a third category of objects whose paths are highly unpredictable. Accordingly, the trajectory prediction system prioritizes utilization of available computing power more for predicting and tracking the second and third categories of objects over the first category of objects to further effectively utilize the available computing power.

[0034] It may be noted that embodiments of the present navigation system may be used, for example, in predicting future trajectories of dynamic objects in the surrounding environment of host vehicles and in planning a safe navigation path for the host vehicles, for example, for autonomous vehicles, robots, drones, airplanes, and cruise ships. Although the present navigation system may be implemented in various types of host vehicles, for clarity, the present disclosure describes an embodiment of the present navigation system with respect to autonomous vehicles, with reference to FIGS. 1 through 7.

[0035] FIG. 1 is a block diagram (100) illustrating an exemplary navigation system (102) associated with a host vehicle (104). In one embodiment, the host vehicle (104) is an autonomous vehicle or a semi-autonomous vehicle. Examples of the host vehicle (104) include an automobile, a car, a truck, and a robot. In one embodiment, the navigation system (102) of the host vehicle (104) predicts future trajectories of dynamic objects (106A-N) in the surroundings of the host vehicle (104) based on information generated during off-line training of the navigation system (102), as described in detail with reference to FIG. 7. Further, the navigation system (102) controls navigation of the host vehicle (104) based on the predicted future trajectories of the dynamic objects (106A-N) to avoid collision with the dynamic objects (106A-N).

[0036] Examples of dynamic objects (106A-N) located in the surrounding environment of the host vehicle (104) include one or more of pedestrians, cyclists, and other vehicles on the road. In certain embodiments, the navigation system (102) of the host vehicle (104) includes one or more on-board sensors (108), a trajectory prediction system (110), a control system (112), and a communication medium (114) that facilitates communications among other components such as the on-board sensors (108), the trajectory prediction system (110), and the control system (112).

[0037] In one embodiment, as the host vehicle (104) navigates on the road, the on-board sensors (108) continuously sense and acquire information related to both stationary and dynamic objects located 360° around the host vehicle (104) at designated intervals of time. Examples of the on-board sensors (108) include one or more optical sensors (116), such as one or more cameras, and one or more radio detection and ranging (RADAR) systems (118). The on-board sensors (108) communicate the acquired sensor information to the trajectory prediction system (110) via the communication medium (114). Examples of the communication medium (114) include one or more of wired and wireless communication networks.

[0038] In certain embodiments, the trajectory prediction system (110) receives the information acquired by the on-board sensors (108) via the communication medium (114).
Upon receiving the acquired sensor information, the trajectory prediction system (110) identifies the dynamic objects (106A-N) in the surroundings of the host vehicle (104) by processing the acquired sensor information. Further, the trajectory prediction system (110) assigns a unique identification label to each of the identified dynamic objects (106A-N).

[0039] In addition, the trajectory prediction system (110) determines state variables associated with each of the identified dynamic objects (106A-N) based on the acquired sensor information. As used herein, the term "state variables" refers to variables that define motion of the identified dynamic objects (106A-N) with respect to motion of the host vehicle (104). Examples of the state variables include relative distances, relative speeds, and relative orientations between the host vehicle (104) and the identified dynamic objects (106A-N).

[0040] In one embodiment, the trajectory prediction system (110) stores the unique identification label assigned to each of the identified dynamic objects (106A-N) and their corresponding state variables, determined based on sensor information captured at a particular instant of time, in an associated database (120). Further, as the host vehicle (104) continues to navigate on the road, the on-board sensors (108) acquire updated sensor information at designated intervals of time, based on which the trajectory prediction system (110) continuously updates the database (120) with the state variables of the identified dynamic objects (106A-N).

[0041] Moreover, in certain embodiments, the trajectory prediction system (110) stores information associated with the dynamic objects (106A-N) in the database (120) only for a particular time period. For example, the trajectory prediction system (110) stores state variable information acquired during the previous ten minutes and automatically removes state variable information older than the previous ten minutes from the database (120) for optimizing the storage capacity of the database (120) and for quick update and extraction of information from the database (120). More specifically, the trajectory prediction system (110) of the navigation system (102) predicts the future trajectories (104A-D) of the identified dynamic objects (106A-N) based on the determined state variables and explicit MPC solutions stored in the database (120), as described in detail with respect to FIG. 7.

[0042] In one embodiment, the trajectory prediction system (110) communicates the predicted future trajectories information to the control system (112) via the communication medium (114). The control system (112) receives the predicted future trajectories information and performs one or more control actions for avoiding collision of the host vehicle (104) with the identified dynamic objects (106A-N). Examples of the control actions include steering the host vehicle (104), applying brakes associated with the host vehicle (104) for minimizing the speed of the host vehicle (104), accelerating the host vehicle (104), and changing a travel lane of the host vehicle (104).

[0043] In certain embodiments, the navigation system (102), the control system (112), and the trajectory prediction system (110) may be implemented by suitable code on a processor-based system, such as a general-purpose or a special-purpose computer.
Accordingly, the navigation system (102), the control system (112), and the trajectory prediction system (110), for example, include one or more microcontrollers, general-purpose processors, specialized processors, graphical processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, and/or other suitable computing devices.

[0044] In one embodiment, the navigation system (102) is configured to perform various critical functionalities needed for safe navigation of the host vehicle (104) without the need to actually perform computationally expensive sensor-information processing operations in near real-time. As noted previously, one such critical functionality includes prediction of the future trajectories (104A-D) of the dynamic objects (106A-N). Another critical functionality executed by the navigation system (102) includes classification of the identified dynamic objects (106A-C) based on a threat level posed by the identified dynamic objects (106A-C) in near real-time with respect to navigation of the host vehicle (104).

[0045] The navigation system (102) identifies the threat level associated with each of the identified dynamic objects (106A-C) based on corresponding future state variables provided by the explicit MPC solutions stored in the database (120). In addition, the navigation system (102) optimizes associated computing power capabilities based on the identified threat level. For example, the navigation system (102) allocates comparatively more computing power for predicting and tracking trajectories of vehicles posing a high threat over predicting and tracking trajectories of vehicles posing a low threat.

[0046] In certain embodiments, the navigation system (102) of the present disclosure needs to be trained off-line with training data before deployment in real-world systems such as the host vehicle (104). Off-line training of the navigation system (102) is crucial for configuring the navigation system (102) to predict future trajectories of surrounding dynamic objects (106A-N) in real-world scenarios without the need to perform computationally expensive trajectory prediction operations in near real-time.

[0047] This is unlike existing navigation systems, in which all trajectory prediction related operations occur in real-time. For example, as a host vehicle navigates on the road, the existing navigation systems receive an enormous amount of sensor information from on-board sensors in near real-time. Further, the existing navigation systems process the sensor information and implement a system model to compute future trajectories of surrounding dynamic objects in real-time. Examples of such system models include a Monte Carlo-based system model, a pattern-based system model, etc. Though certain existing navigation systems compute the future trajectories of the surrounding dynamic objects, these existing navigation systems require high computing power, as all trajectory prediction related operations, such as processing of sensor information, implementing system models, etc., need to be performed in near real-time.

[0048] However, unlike such existing navigation systems, embodiments of the present navigation system (102) do not need to compute the future trajectories of the dynamic objects (106A-N) in real-time or near real-time. The navigation system (102) itself pre-stores the future trajectories of the dynamic objects (106A-N).
More specifically, the navigation system (102) pre-stores explicit MPC solutions generated during off-line training of the navigation system (102). In order to predict the future trajectories of the dynamic objects (106A-N) in real-world scenarios, the navigation system (102) determines mean values of state variables associated with the dynamic objects (106A-N) based on information captured using the on-board sensors (108). Further, the navigation system (102) retrieves explicit MPC solutions from the database (120) corresponding to the determined mean values of the state variables. The retrieved explicit MPC solutions provide the future trajectories of the dynamic objects (106A-N). Thus, the navigation system (102) predicts the future trajectories of the dynamic objects (106A-N) based on information generated off-line, including explicit MPC solutions, and not based on computationally expensive real-time trajectory prediction operations. Hence, embodiments of the present navigation system (102) do not necessarily need high computing power capabilities.

[0049] Further, in comparison to some of the existing navigation systems that predict future trajectories of the dynamic objects (106A-N) based on information generated off-line, embodiments of the present navigation system (102) additionally classify the dynamic objects (106A-N) into various categories based on threat levels posed by the dynamic objects (106A-N) for safe navigation of the host vehicle (104). In one embodiment, the navigation system (102) identifies a threat level associated with each of the dynamic objects (106A-N) based on future state variables provided by corresponding explicit MPC solutions.

[0050] The navigation system (102), for example, classifies the dynamic objects (106A-N) into a first category of objects that pose a high threat to navigation of the host vehicle (104), and a second category of objects that pose a medium threat to navigation of the host vehicle (104). The navigation system (102) further classifies the dynamic objects (106A-N) into a third category of objects that pose a low threat to navigation of the host vehicle (104). According to aspects of the present disclosure, the navigation system (102) prioritizes utilization of available computing power for predicting and tracking the first category of objects over the second and third categories of objects to further effectively utilize the available computing power.

[0051] As noted previously, in one embodiment, the navigation system (102) of the present disclosure is trained off-line with training data before deployment in real-world systems such as the host vehicle (104) for computationally inexpensive trajectory prediction. To that end, in certain embodiments, a training system (not shown in the figures) collects the training data needed to train the navigation system (102). The training system is deployed in a data-capturing vehicle having the navigation system (102), and collects the training data by configuring the data-capturing vehicle to navigate on the road for a designated period of time, as depicted in FIG. 2.

[0052] FIG. 2 is a schematic view illustrating a plurality of vehicles (202A-N) in which at least one vehicle (202A) includes the exemplary navigation system (102) of FIG. 1. The vehicle (202A) having the navigation system (102) is configured to collect the training data needed to train the navigation system (102). In one embodiment, the vehicle (202A) is an autonomous vehicle.
[0053] In certain embodiments, the vehicle (202A) is configured to navigate on the road for a designated period, for example, from time 't1' to 'tn', for collecting the training data. During navigation of the vehicle (202A), the on-board sensors (108) identify and capture motion patterns of dynamic objects (202B-N) in the surrounding environment of the vehicle (202A). The motion patterns, thus captured using the on-board sensors (108) for the designated period, are used as the training data for training the trajectory prediction system (110) off-line, as described in detail subsequently with reference to FIGS. 3A and 3B.

[0054] In one embodiment, the captured motion patterns associated with each of the dynamic objects (202B-N) include historic state variables. For example, the motion patterns associated with the dynamic object (202B) include relative distances, relative speeds, and relative orientations between the vehicle (202A) and the dynamic object (202B) captured at designated intervals from time 't1' to 'tn', when the dynamic object (202B) has been within the field of view of the on-board sensors (108) for the entire designated period.

[0055] In another example, the vehicle (202A) may encounter and identify the dynamic object (202C) at time 'tx', a few seconds after a start time 't1' of the vehicle (202A). In this instance, the motion patterns associated with the dynamic object (202C) include relative distances, relative speeds, and relative orientations between the vehicle (202A) and the dynamic object (202C) captured at designated intervals from time 'tx' to 'tx+n'. Thus, the captured motion patterns associated with the dynamic objects (202B-N) include time series of historic state variables between the vehicle (202A) and the corresponding dynamic objects (202B-N).

[0056] Upon collecting the motion patterns associated with the dynamic objects (202B-N), the trajectory prediction system (110) of the navigation system (102) is trained off-line in order to configure the trajectory prediction system (110) to efficiently predict the future trajectories (104A-D) of the dynamic objects (106A-N) in near real-time with minimal computing power requirements. An associated method for training the trajectory prediction system (110) off-line based on the collected motion patterns of the dynamic objects (202B-N) is described subsequently with reference to FIGS. 3A and 3B.

[0057] FIGS. 3A and 3B depict a flow diagram illustrating an exemplary method (300) for off-line training of the trajectory prediction system (110) of the navigation system (102) of FIG. 1 based on the captured motion patterns of the dynamic objects (202B-N). The order in which the exemplary method (300) is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein.

[0058] The method begins at step (302), where the training data including the motion patterns of the dynamic objects (202B-N) are obtained. In one embodiment, as noted previously, the on-board sensors (108) of the vehicle (202A) capture the training data including the motion patterns of the dynamic objects (202B-N).
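As an illustration of the training data obtained at step (302) and described in paragraphs [0053] to [0055], one training record per dynamic object can be pictured as a time series of historic state variables captured from the moment the object enters the field of view until the capture period ends. The MotionPattern name, its fields, and the helper methods below are assumptions for illustration only.

```python
# Minimal sketch of a captured motion pattern; structure is assumed, not taken from the specification.
from dataclasses import dataclass, field

@dataclass
class MotionPattern:
    object_label: str
    timestamps: list[float] = field(default_factory=list)
    rel_distance: list[float] = field(default_factory=list)      # 'dst' between data-capturing vehicle and object
    rel_speed: list[float] = field(default_factory=list)         # 'spd'
    rel_orientation: list[float] = field(default_factory=list)   # 'ang'

    def add_sample(self, t: float, dst: float, spd: float, ang: float) -> None:
        """Append one sensor sample captured at a designated interval."""
        self.timestamps.append(t)
        self.rel_distance.append(dst)
        self.rel_speed.append(spd)
        self.rel_orientation.append(ang)

    def mean_state(self) -> tuple[float, float, float]:
        """Mean relative distance, speed, and orientation: the data point placed in the mean 3-D space."""
        n = len(self.timestamps)
        return (sum(self.rel_distance) / n,
                sum(self.rel_speed) / n,
                sum(self.rel_orientation) / n)
```

The mean_state() output is what later becomes a data point in the mean 3-dimensional space used for clustering.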
[0059] At step (304), the trajectory prediction system (110) models the motion patterns associated with each of the dynamic objects (202B-N) using an ordinary differential equation (ODE). For example, the trajectory prediction system (110) models the motion patterns of the exemplary dynamic object (202B) in accordance with an exemplary ODE equation (1) and further in accordance with equations (2), (3), and (4):

\dot{x} = Ax + Bu    (1)
\dot{dst} = dst    (2)
\dot{spd} = u_{acc}    (3)
\dot{ang} = ang    (4)

where 'A' and 'B' correspond to matrices, x = [dst, spd, ang] corresponds to the state variables associated with the dynamic object (202B) determined using the on-board sensors (108), and u = [u_{acc}] corresponds to control actions, such as acceleration or deceleration actions, performed by the dynamic object (202B). Further, 'dst', 'ang', and 'spd' correspond to relative distances, relative orientations, and relative speeds, respectively, between the vehicle (202A) and the dynamic object (202B).

[0060] Similarly, it is to be understood that the trajectory prediction system (110) also models the motion patterns of the other dynamic objects (202C-N) and generates a corresponding ODE model in accordance with equations (1), (2), (3), and (4). In one embodiment, for example, the trajectory prediction system (110) employs linear grey-box estimation models for improving the accuracies of the generated ODE models.

[0061] At step (306), the trajectory prediction system (110) transforms the generated ODE models associated with the dynamic objects (202B-N) into a respective discrete-time linear state-space model for defining an MPC problem. For example, the trajectory prediction system (110) transforms the generated ODE model associated with the dynamic object (202B), represented by equation (1), into the discrete-time linear state-space model represented by equation (5):

x(k + 1) = Ax(k) + Bu(k)    (5)

where x(k) \in \mathbb{R}^d corresponds to a vector of the state variables associated with the dynamic object (202B), u(k) \in \mathbb{R}^m corresponds to a vector of the control actions performed by the dynamic object (202B), and d and m correspond to the number of the state variables and the control inputs, respectively.

[0062] At step (308), the trajectory prediction system (110) generates a corresponding quadratic optimization problem of the MPC problem for each transformed discrete-time linear state-space model. For example, the trajectory prediction system (110) generates the quadratic optimization problem represented by equation (6), corresponding to the discrete-time linear state-space model associated with the dynamic object (202B) represented by equation (5):

\min_{U} \; x^T(k + N|k) P x(k + N|k) + \sum_{j=0}^{N-1} \left[ x^T(k + j|k) Q x(k + j|k) + u^T(k + j|k) R u(k + j|k) \right]    (6)

subject to:
x(k + j|k) = Ax(k + j - 1|k) + Bu(k + j - 1|k)    (7)
u(k + j|k) \in \mathcal{U}, \; j = k, \ldots, k + N - 1    (8)
x(k + j|k) \in \mathcal{X}, \; j = k, \ldots, k + N - 1    (9)
x(k + N|k) \in \mathcal{X}_N    (10)

where 'N' corresponds to a prediction horizon, 'k' corresponds to a current time instant, Q \in \mathbb{R}^{d \times d} represents weights associated with the state variables of the dynamic object (202B), and R \in \mathbb{R}^{m \times m} represents weights associated with the control actions performed by the dynamic object (202B). Further, the variables x(k + j|k) and u(k + j|k) represent predicted state variables and control inputs associated with the dynamic object (202B), respectively, at time 'k'. Moreover, \mathcal{U}, \mathcal{X}, and \mathcal{X}_N correspond to an input constraint set, a state constraint set, and an invariant set, respectively.
In addition, 'P' represents a terminal weight that is selected as a solution of an algebraic Riccati equation provided by the matrices 'A' and 'B' and the weights 'R' and 'Q'.

[0063] At step (310), the trajectory prediction system (110) transforms the generated quadratic optimization problems of the MPC problems associated with the dynamic objects (202B-N) into a corresponding multi-parametric quadratic programming (mp-QP) problem. For example, the trajectory prediction system (110) transforms the generated quadratic optimization problem corresponding to the dynamic object (202B) into an mp-QP in accordance with, for example, equation (11), with respect to a transformed constraint set provided by, for example, equation (12):

J(x, U) = \min_{U} \; \frac{1}{2} U^T H U + x^T F U + x^T Y x    (11)

where x = x(k|k) and U = [u(k|k), u(k + 1|k), \ldots, u(k + N - 1|k)]^T \in \mathcal{U}
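As a numerical illustration of paragraphs [0061] to [0063], the sketch below shows, under stated assumptions, how the horizon-N cost of equation (6) for a model of the form of equation (5) can be condensed into the quadratic form of equation (11). The helper name condensed_qp, the example matrices, the horizon, and the final unconstrained solve are illustrative only; the explicit controller of the patent is obtained off-line by solving the constrained mp-QP, which is not reproduced here.

```python
# Hedged sketch: condensing the MPC cost into 1/2 U^T H U + x^T F U (plus a term independent of U).
import numpy as np
from scipy.linalg import block_diag, solve_discrete_are

def condensed_qp(A, B, Q, R, N):
    d, m = B.shape
    P = solve_discrete_are(A, B, Q, R)          # terminal weight from the algebraic Riccati equation
    # Stacked prediction: X = Phi @ x0 + Gamma @ U, with X = [x(k+1|k); ...; x(k+N|k)].
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * d, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*d:(i+1)*d, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = block_diag(*([Q] * (N - 1) + [P]))   # Q on x(k+1)..x(k+N-1), P on the terminal state
    Rbar = block_diag(*([R] * N))
    H = 2 * (Gamma.T @ Qbar @ Gamma + Rbar)
    F = 2 * Phi.T @ Qbar @ Gamma
    return H, F

# Example with an assumed 3-state model (relative distance, speed, orientation) and one input (acceleration).
A = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.9]])
B = np.array([[0.0], [0.1], [0.0]])
H, F = condensed_qp(A, B, Q=np.eye(3), R=np.eye(1), N=10)
x0 = np.array([20.0, -1.5, 0.05])
U_unconstrained = np.linalg.solve(H, -F.T @ x0)  # minimiser of 1/2 U^T H U + x0^T F U without constraints
```

In the mp-QP setting the same quadratic is minimised subject to the polyhedral input, state, and terminal constraints, with the current state x treated as a parameter, which is what yields the piecewise-affine explicit control law stored per cluster in the database.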

Documents

Application Documents

# Name Date
1 201841044149-STATEMENT OF UNDERTAKING (FORM 3) [23-11-2018(online)].pdf 2018-11-23
2 201841044149-REQUEST FOR EXAMINATION (FORM-18) [23-11-2018(online)].pdf 2018-11-23
3 201841044149-POWER OF AUTHORITY [23-11-2018(online)].pdf 2018-11-23
4 201841044149-FORM 18 [23-11-2018(online)].pdf 2018-11-23
5 201841044149-FORM 1 [23-11-2018(online)].pdf 2018-11-23
6 201841044149-FIGURE OF ABSTRACT [23-11-2018].jpg 2018-11-23
8 201841044149-DRAWINGS [23-11-2018(online)].pdf 2018-11-23
9 201841044149-DECLARATION OF INVENTORSHIP (FORM 5) [23-11-2018(online)].pdf 2018-11-23
10 201841044149-COMPLETE SPECIFICATION [23-11-2018(online)].pdf 2018-11-23
11 Correspondence by Agent_ Form1-General Power of Attorney_04-12-2018.pdf 2018-12-04
12 201841044149-FER.pdf 2020-07-02
13 201841044149-OTHERS [25-12-2020(online)].pdf 2020-12-25
14 201841044149-FORM-26 [25-12-2020(online)].pdf 2020-12-25
15 201841044149-FORM 3 [25-12-2020(online)].pdf 2020-12-25
16 201841044149-FER_SER_REPLY [25-12-2020(online)].pdf 2020-12-25
17 201841044149-DRAWING [25-12-2020(online)].pdf 2020-12-25
18 201841044149-CORRESPONDENCE [25-12-2020(online)].pdf 2020-12-25
19 201841044149-CLAIMS [25-12-2020(online)].pdf 2020-12-25
20 201841044149-US(14)-HearingNotice-(HearingDate-14-06-2023).pdf 2023-06-01
21 201841044149-FORM-26 [06-06-2023(online)].pdf 2023-06-06
22 201841044149-Correspondence to notify the Controller [06-06-2023(online)].pdf 2023-06-06
23 201841044149-Written submissions and relevant documents [23-06-2023(online)].pdf 2023-06-23
24 201841044149-PatentCertificate02-08-2023.pdf 2023-08-02
25 201841044149-IntimationOfGrant02-08-2023.pdf 2023-08-02

Search Strategy

1 149E_29-06-2020.pdf
